Remove the btsync module. BitTorrent Sync was renamed to Resilio Sync in
2016, which is supported by the resilio module. Since Resilio Sync has
received several security updates since 2016, it is no longer safe to run
BitTorrent Sync.
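For anyone migrating, switching to the resilio module is roughly a one-liner (a minimal sketch; further options such as shared folders are omitted):
```
{
  services.resilio.enable = true;
}
```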
Partially reapplies 35af6e3605
buildPackages needs to be used only for image builders.
Otherwise, the bootloader builder may be set up for the wrong
architecture, rendering it unusable.
mysql already has its socket path hardcoded to
/run/mysqld/mysqld.sock.
There's not much value in making the pidDir configurable, which also
points to /run/mysqld by default.
We only seem to use `services.mysql.pidDir` in the wordpress startup
script, to wait for mysql to boot up, but we can also simply wait on the
(hardcoded) socket location too.
A much nicer way to accomplish that would be to properly describe a
dependency on mysqld.service. This however is not easily doable, due to
how the apache-httpd module was designed.
As we don't need to setup data directories from ExecStartPre= scripts
anymore, which required root, but use systemd.tmpfiles.rules instead,
everything can be run as just the mysql user.
Define commands like "waiting for the mysql socket to appear" or "setting
up initial databases" in a let expression, so the main control flow becomes
more readable.
We need to keep using `RuntimeDirectory=mysqld`, which translates to
`/run/mysqld`, as this is used for the location of the socket file, which
could differ from what is configured via `cfg.pidDir`.
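A minimal sketch of the pattern described above, with hypothetical helper names (`waitForMysqlSocket`, `setupInitialDatabases`) used purely for illustration:
```
let
  # Wait on the hardcoded socket location instead of a PID file.
  waitForMysqlSocket = ''
    while [ ! -S /run/mysqld/mysqld.sock ]; do
      sleep 1
    done
  '';
  setupInitialDatabases = ''
    # ... create databases and users here ...
  '';
in {
  systemd.services.mysql = {
    serviceConfig = {
      User = "mysql";
      RuntimeDirectory = "mysqld";  # /run/mysqld, where the socket lives
    };
    postStart = ''
      ${waitForMysqlSocket}
      ${setupInitialDatabases}
    '';
  };
}
```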
Before, changing any peers caused the entire WireGuard interface to
be torn down and rebuilt. By configuring each peer in a separate
service we're able to only restart the affected peers.
Adding each peer individually also means individual peer
configurations can fail, but the overall interface and all other peers
will still be added.
A WireGuard peer's internal identifier is its public key. This means
it is the only reliable identifier to use for the systemd service.
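One possible way to derive a per-peer unit name from the public key, assuming `lib` is in scope (purely illustrative; the actual naming scheme may differ):
```
let
  # Public keys are base64 and may contain "/", "+" and "=", which are
  # awkward in unit names, so map them to something safe.
  peerUnitName = interface: publicKey:
    "wireguard-${interface}-peer-${
      lib.replaceStrings [ "/" "+" "=" ] [ "-" "-" "" ] publicKey
    }";
in
  peerUnitName "wg0" "dGhpcyBpcyBub3QgYSByZWFsIGtleQ=="
```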
5404595b55 relocated code but kept
one backslash too many, leading to
$ tmux
error creating /run/user/$(id -u)/tmux-1000 (No such file or directory)
/run/user/$UID/ is created by pam_systemd(3), which also populates
XDG_RUNTIME_DIR with that value.
Alternatively, TMUX_TMPDIR might simply default to XDG_RUNTIME_DIR
without providing the same directory yet again as the default string in
parameter substitution; however, such a behaviour change is subject to
another patch.
In fact, with `security.polkit.enable = false`, systemd-logind(8) fails
to start and /run/user/$UID/ is never created for unprivileged users
in proper login sessions; XDG_RUNTIME_DIR would consequently not be
set either.
Removing the fallback to /run/user/$UID/ would have caused TMUX_TMPDIR
to be empty, which in turn would lead tmux(1) to use /tmp/. This
effectively breaks the idea of isolated sockets entirely while hiding
errors from the user.
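The construct under discussion looks roughly like this (a sketch; the exact place where the module sets the variable may differ):
```
{
  environment.extraInit = ''
    # XDG_RUNTIME_DIR is set by pam_systemd(3); fall back to the same
    # directory again only if it is unset.
    export TMUX_TMPDIR=''${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
  '';
}
```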
When calling reload, bird attempts to reload the file that was given on
the command line. Since a changed ${configFile} is never picked up there,
bird will just reload the old file.
This way, the configuration is placed at a known location and updated.
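A sketch of the idea, assuming the module's generated config is bound to `configFile`:
```
{
  # Place the config at a stable path so that a reload picks up changes
  # instead of re-reading a stale store path baked into the command line.
  environment.etc."bird/bird.conf".source = configFile;
}
```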
The clickshare-csc1 package ships a udev rule file
that grants access to the ClickShare dongle when connected.
This module provides an option to install that rule file.
Only users in the "clickshare" group have access.
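A hypothetical usage sketch (the exact option path is an assumption; the group name comes from the text above):
```
{
  programs.clickshare-csc1.enable = true;             # assumed option name
  users.users.alice.extraGroups = [ "clickshare" ];   # grant alice access
}
```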
We differentiate between modules and baseModules in the
VM builder for NixOS tests. This way, nesting.children, even though
it doesn't inherit from the parent, still has enough config to
actually complete the test. Otherwise, the qemu modules
would not be loaded, for example, and a nesting.children
statement would not evaluate.
* compton-git: 5.1-rc2 -> 6.2
vsync is now a boolean option, see:
https://github.com/yshui/compton/pull/130
menu-opacity is deprecated and there's a warning that says:
Please use the wintype option `opacity` of `popup_menu` and
`dropdown_menu` instead.
* nixos/compton: Keep vSync option backwards compatible
The new upstream option tries to make the best choice for the user.
Therefore the behaviour should stay the same with this backwards
compatibility patch.
* compton-git: Remove DRM option
It's deprecated and shouldn't be used.
https://github.com/yshui/compton/pull/130/files#r285505456
* compton-git: Remove new_backends option
Was removed in "Let old/new backends co-exist"
b0c5db9f5aa500dc3568cc6fe68493df98794d4d
* compton: 0.1_beta2.5 -> 6.2
Drop the legacy, unmaintained version and use the fork for real.
Fixes #61859.
An assertion fails when a Google Compute Engine image is built, because
the choice of filesystem types is now restricted to `f2fs` and the `ext` family
when auto-resizing is enabled.
This change pins the filesystem used on such an image to `ext4` for now.
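Roughly the effect on the image's filesystem definition (the device label shown is an assumption):
```
{
  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";  # assumed label
    fsType = "ext4";
    autoResize = true;
  };
}
```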
A new internal option `hardware.opengl.setLdLibraryPath` is added which controls if `LD_LIBRARY_PATH` should be set to `/run/opengl-driver(-32)/lib`. It is false by default and is meant to be set to true by any driver which requires it. If this option is false, then `opengl.nix` and `xserver.nix` will not set `LD_LIBRARY_PATH`.
Currently Mesa and NVidia drivers don't set `setLdLibraryPath` because they work with libglvnd and do not override libraries, while `amdgpu-pro`, `ati` and `parallels-guest` set it to true (the former two really need it, the last one doesn't build so is presumed to).
Additionally, the `libPath` attribute within entries of `services.xserver.drivers` is removed. This made `xserver.nix` add the driver path directly to the `LD_LIBRARY_PATH` for the display manager (including X server). Not only is it redundant when the driver is added to `hardware.opengl.package` (assuming that `hardware.opengl.enable` is true), in fact all current drivers except `ati` set it incorrectly to the package path instead of package/lib.
This removal of `LD_LIBRARY_PATH` could break certain packages using CUDA, but only those that themselves load `libcuda` or other NVidia driver libraries using `dlopen` (not if they just use `cudatoolkit`). A few have already been fixed but it is practically impossible to test all because most packages using CUDA are libraries/frameworks without a simple way to test.
Fixes #11434 if only Mesa or NVidia graphics drivers are used.
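For a driver that really does override GL libraries, opting back in looks like this inside its module (a sketch):
```
{
  # Request that /run/opengl-driver(-32)/lib be put on LD_LIBRARY_PATH.
  hardware.opengl.setLdLibraryPath = true;
}
```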
Back in 2013, update-mime-database started using fdatasync() to write out
its changes after processing each file in /share/mime, with the reasoning
that a corrupted database from an interruption midway would be
problematic for applications[1]. Unfortunately, this caused a
significant regression in the time required to run update-mime-database:
commonly from under a second to half a minute or more.
This delay affects the time required to build system-path on NixOS, when
xdg.mime.enable is true (the default). For example, on one of my systems
system-path builds in ~48 seconds, 45 of which are update-mime-database.
This makes rapidly building new system configurations not fun.
This commit disables the calls to fdatasync(). update-mime-database
checks an environment variable, PKGSYSTEM_ENABLE_FSYNC, to determine
whether it should sync, and we can set this to false. system-path
already only has whatever filesystem commit guarantees that the Nix
builder provides. Furthermore, there is no risk of a failed MIME
database update messing up existing packages, because this is Nix.
(This issue was also reported to and discussed by at least Debian, Red
Hat, and Gentoo.)
[1] https://bugs.freedesktop.org/show_bug.cgi?id=70366
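A sketch of the change, assuming the MIME database is built in `environment.extraSetup` with `pkgs` in scope; the exact value accepted by PKGSYSTEM_ENABLE_FSYNC is taken from the text above and is an assumption:
```
{
  environment.extraSetup = ''
    # Skip the per-file fdatasync(); a half-written database inside a failed
    # Nix build is harmless because the whole output is discarded anyway.
    PKGSYSTEM_ENABLE_FSYNC=0 ${pkgs.shared-mime-info}/bin/update-mime-database $out/share/mime
  '';
}
```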
This is actually very useful: it allows you to test switch-to-configuration.
nesting.children is currently still broken, as it throws
away 'too much' of the config, including the modules that make
NixOS tests work in the first place. But that's something for
another time.
As a oneshot service, if the startup failed it would never be attempted again.
This is problematic when peers' addresses require DNS: DNS may not be reliably available at
the time wireguard starts. Converting this to a simple service with Restart
and RestartSec directives allows the service to be reattempted, at
the cost of losing the oneshot semantics.
Signed-off-by: Maximilian Bosch <maximilian@mbosch.me>
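The resulting unit shape, roughly (interface name and timing values are illustrative):
```
{
  systemd.services."wireguard-wg0" = {
    serviceConfig = {
      Type = "simple";        # was "oneshot"
      Restart = "on-failure"; # retry when DNS is not yet available
      RestartSec = "5s";
    };
  };
}
```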
Passwords should not be stored in plain text by default. On existing
installations, user accounts will automatically be upgraded from plain to
hashed passwords one-by-one as each user logs in.
With `sshd -t`, config validation for SSH is possible. Until now, the
config generated by Nix was applied without any validation (which is
especially a problem for advanced config like `Match` blocks).
When deploying a broken ssh config with nixops to a remote machine, it gets
even harder to fix the problem, since the broken ssh makes reverts
with nixops impossible.
This change performs the validation in a Nix build environment by
creating a store path with the config and generating a mocked host key
which seems to be needed for the validation. With a broken config, the
deployment already fails during the build of the derivation.
The original attempt was done in #56345 by adding a submodule for Match
groups to make it harder to screw that up; however, that made the module
far more complex, and config should be described in an easier way, as
outlined in NixOS/rfcs#42.
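A sketch of build-time validation along these lines, assuming the rendered config text is bound to `sshdConfig` and that `pkgs` is in scope (not the module's actual implementation):
```
{
  system.extraDependencies = [
    (pkgs.runCommand "validate-sshd-config" { } ''
      # sshd -t refuses to run without a host key, so create a throwaway one.
      ${pkgs.openssh}/bin/ssh-keygen -q -f mock-hostkey -N ""
      ${pkgs.openssh}/bin/sshd -t \
        -f ${pkgs.writeText "sshd_config" sshdConfig} \
        -h "$PWD/mock-hostkey"
      touch $out
    '')
  ];
}
```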
Some packages like `ibus-engines.typing-booster` require the dictionary
`fr_FR.dic` to provide proper support for the French language.
Until now the hunspell package set of nixpkgs didn't provide this
dictionary. It has been recommended to use `fr-moderne` as the base and link
`fr_FR.dic` from it, as done by other distros such as Arch Linux.
See https://github.com/NixOS/nixpkgs/issues/46940#issuecomment-423684570
Fixes #46940
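A sketch of the linking approach in the dictionary derivation (file names and install path are assumptions):
```
{
  postInstall = ''
    # Expose the fr-moderne dictionary under the generic fr_FR name,
    # as other distributions do.
    ln -s fr-moderne.dic $out/share/hunspell/fr_FR.dic
    ln -s fr-moderne.aff $out/share/hunspell/fr_FR.aff
  '';
}
```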
nixos/nextcloud: Add documentation for nextcloud app installation and updates
nixos/nextcloud: Enable autoUpdateApps in nextcloud test
nixos/nextcloud: Fix typo in nixos/modules/services/web-apps/nextcloud.xml
Co-Authored-By: Florian Klink <flokli@flokli.de>
nixos/nextcloud: Escape html in option description
nixos/nextcloud: Fix autoUpdateApps URL in documentation.
Co-Authored-By: Florian Klink <flokli@flokli.de>
I noticed the xinetd process doesn't get exec'd on launch; exec here so the bash
process doesn't stick around.
Signed-off-by: William Casarin <jb55@jb55.com>
All options within geoclue.conf[0] have been made configurable.
Additionally, we can now specify whether or not GeoClue
should ask the agent to authorize an application like so:
```
services.geoclue2.appConfig."redshift" = {
  isAllowed = true;
  isSystem = true;
};
```
[0]: https://gitlab.freedesktop.org/geoclue/geoclue/blob/2.5.2/data/geoclue.conf.in
Co-authored-by: worldofpeace <worldofpeace@protonmail.ch>
This was a testing oversight that came from #61009 -- I forgot to test
the new traceFormat option with older server versions while I was
working on FDB 6.1.
Since trace_format is only available in 6.1+, emitting it
unconditionally caused older versions of the database to fail to start,
reporting an error. We simply gate it behind a version check instead,
and assert the format is always XML on older versions. This avoids the
case where the user has an old version, changes traceFormat willingly,
and then is confused by why it didn't work.
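A sketch of the gate, assuming `cfg.package.version` and `cfg.traceFormat` as in the module, `lib` in scope, and hypothetical names `configText`/`baseConfig`:
```
{
  # Only emit the setting on versions that understand it.
  configText = baseConfig + lib.optionalString
    (lib.versionAtLeast cfg.package.version "6.1") ''
      trace_format = ${cfg.traceFormat}
    '';

  assertions = [{
    assertion = lib.versionAtLeast cfg.package.version "6.1"
      || cfg.traceFormat == "xml";
    message = "trace_format requires FoundationDB 6.1 or newer";
  }];
}
```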
As reported by @TimothyKlim in the comments on commit
c55b9236f0. See
c55b9236f0 (r33566132)
Signed-off-by: Austin Seipp <aseipp@pobox.com>
* Don't use `literalExample`: raw Nix values can be specified directly
  as an option example, which also provides support for highlighting in
  the manual.
* Escape shell args for `extraOptions`: e.g. the `-n` option might be
  problematic, as a longer notification command might be misinterpreted
  (see the sketch below).
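The escaping fix looks roughly like this (the binary name is a placeholder):
```
{
  serviceConfig.ExecStart =
    "${pkgs.some-daemon}/bin/some-daemon ${lib.escapeShellArgs cfg.extraOptions}";
}
```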
The module installs `zmap` globally and links the config files to
`/etc/zmap`, the default location of config files for zmap.
The package provides pretty sensible defaults; custom configs can
be created like this:
```
{ lib, ... }:
{
  environment.etc."zmap/blacklist.conf" = lib.mkForce {
    text = ''
      # custom zmap blacklist
      0.0.0.0/0
    '';
  };
}
```
Quite some fixing was needed to get this to work.
Changes in VirtualBox and additions:
- VirtualBox is no longer officially supported on 32-bit hosts so i686-linux is removed from platforms
for VirtualBox and the extension pack. 32-bit additions still work.
- There was a refactoring of kernel module makefiles and two resulting bugs affected us which had to be patched.
These bugs were reported to the bug tracker (see comments near patches).
- The Qt5X11Extras makefile patch broke. Fixed it to apply again, making the libraries logic simpler
and more correct (it just uses a different base path instead of always linking to Qt5X11Extras).
- Added a patch to remove "test1" and "test2" kernel messages due to forgotten debugging code.
- virtualbox-host NixOS module: the VirtualBoxVM executable should be setuid, not VirtualBox.
  This matches how the official installer sets it up.
- Additions: replaced a for loop for installing kernel modules with just a "make install",
which seems to work without any of the things done in the previous code.
- Additions: The package defined buildCommand, which resulted in phases not running (including RUNPATH
  stripping in fixupPhase), and it defined an installPhase that was never even run. Fixed this by
  refactoring to use phases. Had to set dontStrip, otherwise binaries were broken by stripping.
  The libdbus path had to be added later, in fixupPhase, because it is used via dlopen rather than directly linked.
- Additions: Added zlib and libc to patchelf, otherwise runtime library errors result from some binaries.
For some reason the missing libc only manifested itself for mount.vboxsf when included in the initrd.
Changes in nixos/tests/virtualbox:
- Update the simple-gui test to send the right keys to start the VM. With VirtualBox 5
it was enough to just send "return", but with 6 the Tools thing may be selected by
default. Send "home" to reliably select Tools, "down" to move to the VM and "return"
to start it.
- Disable the VirtualBox UART by default because it causes a crash due to a regression
in VirtualBox (specific to software virtualization and serial port usage). It can
still be enabled using an option but there is an assert that KVM nested virtualization
is enabled, which works around the problem (see below).
- Add an option to enable nested KVM virtualization, allowing VirtualBox to use hardware
virtualization. This works around the UART problem and also allows using 64-bit
guests, but requires a kernel module parameter.
- Add an option to run 64-bit guests. Tested that the tests pass with that. As mentioned
this requires KVM nested virtualization.
Currently, this uses the somewhat crude method of setting LD_PRELOAD in the
system environment. This works, but should be considered a stepping stone to
a more robust solution.
Following up #59148:
I forgot the default case of architectures which do not have a smaller sibling whose code they can run ("westmere" or any of AMD's).
This is needed because some PostgreSQL plugins don't have a bin
directory. If only these plugins are listed in cfg.extraPlugins, buildEnv
will turn $out/bin into a symbolic link to ${pg}/bin. Later on we try to
rm $out/bin/{pg_config,postgres,pg_ctl}, which will then fail because
$out/bin will be read-only.
I was pointed towards a small syntax error in the `nixpkgs.overlays`
documentation. There was a trailing semicolon after the overlay
function.
I also aligned the code a bit better so opening and closing brackets can
be visually matched much better (IMO).
https://humdi.net/vnstat/CHANGES
* enable tests
* add hardening options from upstream's
example service
* fix "documentation" setting in service:
either needs to be `unitConfig.Documentation`
(uppercase) or lowercase but not within unitConfig.
Previously, if you, for example, set
services.xserver.displayManager.sddm.enable, but forgot to set
services.xserver.enable, you would get an error message that looked like
this:
error: attribute 'display-manager' missing
Which was not particularly helpful.
Using assertions, we can make this message much better.
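Roughly the kind of assertion that replaces the opaque evaluation error (the message text is illustrative):
```
{
  assertions = [{
    assertion = config.services.xserver.displayManager.sddm.enable
      -> config.services.xserver.enable;
    message = "SDDM requires services.xserver.enable to be true";
  }];
}
```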
The type of ZNC's config option specifies that a configuration like
config.User.paul = null;
should be valid, which is useful for clearing/disabling property sets
like Users and Networks. However until now the config generator
implementation didn't actually cover null values, meaning you'd get an
error like
error: value is null while a set was expected, at /foo.nix:29:10
This fixes the implementation to correctly allow clearing of property
sets.
The kubeconfig provided to the kubernetes-control-plane-online.service
is invalid. However, the apiserver's /healthz endpoint can be accessed without auth, so it's
simpler to just use curl for that.
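A sketch of the curl-based check, assuming the default secure port and skipping certificate verification for the probe (both assumptions):
```
{
  systemd.services.kubernetes-control-plane-online.script = ''
    until ${pkgs.curl}/bin/curl -k -sSf https://localhost:6443/healthz >/dev/null; do
      sleep 1
    done
  '';
}
```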
The two directories KDB and PTree do not exist before the SKS DB is
built for the first time. If /var/db/sks is empty and the module is
enabled via "services.sks.enable = true;", the following error will
occur:
...-unit-script-sks-db-pre-start[xxx]:
ln: failed to create symbolic link 'KDB/DB_CONFIG': No such file or directory
To avoid this, both links have to be created after the DB is built.
Note: Creating the directories manually might be better but the initial
build might be skipped as a result:
unit-script-sks-db-pre-start[xxxxx]: KeyDB directory already exists. Exiting.
unit-script-sks-db-pre-start[xxxxx]: PTree directory already exists. Exiting.
This change was only a temporary workaround and isn't required anymore,
since /etc/systemd/system/system.slice should not be present on any
recent NixOS system (which makes this change a no-op).
This reverts commit 7098b0fcdf.
This change will load all configuration files from /etc, to make it easy
to override them, but fall back to /nix/store/.../etc/sway/config to make
Sway work out-of-the-box with the default configuration on non-NixOS
systems.
Unfortunately the changes in ab5dcc7068
introduced a typo (took me a while to spot that...) that broke the
whole module (or at least the sks-db systemd unit).
The systemd unit was failing with the following error message:
...-unit-script-sks-db-pre-start[xxx]: KDB/DB_CONFIG exists but is not a symlink.
The build error was introduced by 56dcc319cf.
Using a <simplesect/> within a <para/> is not allowed and subsequently
fails to validate while building the manual.
So instead, I moved the <simplesect/> further down and outside of the
<para/> to fix this.
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @aaronjanse, @Lassulus, @danbst
The default config of i3 provides a key binding to reload, so changes
take effect immediately:
```
bindsym $mod+Shift+c reload
```
Unfortunately the current module uses the store path of the `configFile`
option. So when I change the config in NixOS, a new store path will be
created, but the running i3 process will continue to use the old one,
hence a full restart of i3 is currently required.
This change links the config to `/etc/i3/config` and alters the X
startup script accordingly so after each rebuild, the config can be
reloaded.
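The linking itself is roughly (a sketch, assuming `cfg.configFile` as in the module):
```
{
  environment.etc."i3/config".source = cfg.configFile;
  # The X session startup can then launch i3 against the stable path:
  #   exec i3 -c /etc/i3/config
}
```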
This allows configuring IP addresses on a tinc interface using
networking.interfaces."tinc.${n}".ipv[46].addresses.
Previously, this would fail with timeouts, because of the dependency
chain
tinc.${netname}.service
--after--> network.target
--after--> network-addresses-tinc.${n}.service (and network-link-…)
--after--> sys-subsystem-net-devices-tinc.${n}.device
But the network interface doesn't exist until tinc creates it! So
systemd waits in vain for the interface to appear, and by then the
network-addresses-* and network-link-* units have failed. This leads
to the network link not being brought up and the network addresses not
being assigned, which in turn stops tinc from actually working.
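Example of the configuration this enables (network name and addresses are illustrative):
```
{
  networking.interfaces."tinc.mynet".ipv4.addresses = [
    { address = "10.1.2.3"; prefixLength = 24; }
  ];
}
```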
Cross-compilation of `btrfs-tools` is broken, and this usually needless dependency of each system closure on `btrfs-tools` prevents cross-compilation of whole system closures.
Ideally, private keys never leave the host they're generated on - just like
with SSH. Setting generatePrivateKeyFile to true causes the private key to be
generated automatically.
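A usage sketch (interface name and key path are illustrative):
```
{
  networking.wireguard.interfaces.wg0 = {
    generatePrivateKeyFile = true;
    privateKeyFile = "/var/lib/wireguard/wg0.key";
  };
}
```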
Some ACME clients do not generate full.pem, which is the same as
fullchain.pem plus the certificate key (key.pem); the key is not
necessary for verifying OCSP staples.
I have a nixops network where I deploy containers using the `container`
backend, which uses `nixos-container` internally to deploy several
containers to a certain host.
During that time I removed and added new containers, and while trying to
deploy those to a different host I realized that it isn't guaranteed
that each container gets the same IP address, which is a problem as some
parts of the deployment need to know which container is using which IP
(e.g. to configure port forwarding on the host).
With this change you can specify the container's IP like this (and don't
have to use the arbitrarily chosen 10.233.0.0/16 subnet):
```
$ nixos-container create test --config-file test-container.nix \
    --local-address 10.235.1.2 --host-address 10.235.1.1
```
This is an implementation of wireguard support using wg-quick config
generation.
This seems preferable to the existing wireguard support because
it handles many more routing and resolvconf edge cases than the
current wireguard support.
It also includes work-arounds to make key files work.
This has one quirk:
We need to set reverse path checking in the firewall to false because
it interferes with the way wg-quick sets up its routing.
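A usage sketch of the new module (option names as in the wg-quick module; addresses, keys, and the endpoint are placeholders):
```
{
  networking.wg-quick.interfaces.wg0 = {
    address = [ "10.100.0.2/24" ];
    privateKeyFile = "/var/lib/wireguard/wg0.key";
    peers = [{
      publicKey = "<peer-public-key>";
      allowedIPs = [ "10.100.0.0/24" ];
      endpoint = "vpn.example.com:51820";
    }];
  };

  # Needed because wg-quick's routing trips up reverse path filtering.
  networking.firewall.checkReversePath = false;
}
```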
This is to make sure that we get different ETag values whenever we
switch to a different store path but with the same file contents.
I've checked this against the old behaviour without the patch and it
fails as expected.
Signed-off-by: aszlig <aszlig@nix.build>
Copy-paste from iso-image.nix.
Besides the simplification, it should use `pkgs.buildPackages.squashfsTools` because it is used in `nativeBuildInputs`, instead of the incorrect `pkgs.squashfsTools`, which was forced by `import`.
This makes sure that when a user hasn't set a Prometheus option it
won't show up in the prometheus.yml configuration file. This results
in smaller and easier to understand configuration files.
We previously filtered out the `_module` attribute in a NixOS
configuration using the option's `apply` function.
This meant that every option that had a submodule type needed to have
this apply function. Adding this function is easy to forget thus this
mechanism is error prone.
We now recursively filter out the `_module` attributes at the place we
construct the Prometheus configuration file. Since we now do the filtering
centrally we don't have to do it per option making it less prone to errors.
This results in a smaller prometheus.yml config file.
It also allows us to use the same options for both prometheus-1 and
prometheus-2 since the new options for prometheus-2 default to null
and will be filtered out if they are not set.
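A sketch of the recursive filtering described above (function name assumed; `lib` and `cfg` in scope):
```
let
  stripModule = v:
    if builtins.isAttrs v then
      lib.mapAttrs (_: stripModule) (removeAttrs v [ "_module" ])
    else if builtins.isList v then
      map stripModule v
    else
      v;
in
  builtins.toJSON (stripModule { scrape_configs = cfg.scrapeConfigs; })
```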
From gkd-capability.c:
This program needs the CAP_IPC_LOCK posix capability.
We want to allow either setuid root or file system based capabilies
to work. If file system based capabilities, this is a no-op unless
the root user is running the program. In that case we just drop
capabilities down to IPC_LOCK. If we are setuid root, then change to the
invoking user retaining just the IPC_LOCK capability. The application
is aborted if for any reason we are unable to drop privileges.
Get these from the upstream tox-node package instead.
This is likely to cause less maintenance overhead over time, and
following upstream bootstrap node changes is automated.
This adds the following new packages:
+ elasticsearch7
+ elasticsearch7-oss
+ logstash7
+ logstash7-oss
+ kibana7
+ kibana7-oss
+ filebeat7
+ heartbeat7
+ metricbeat7
+ packetbeat7
+ journalbeat7
The default major version of the ELK stack stays at 6. We should
probably bump it to 7 in a subsequent commit.
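Until the default changes, version 7 can be selected explicitly, e.g. (a sketch; only a couple of the affected services shown):
```
{
  services.elasticsearch.package = pkgs.elasticsearch7;
  services.kibana.package = pkgs.kibana7;
}
```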