This is essentially what's been done for the official NixOS build slaves
and I'm using it as well for a few of my machines and my own Hydra
slaves.
Here's the same implementation from the Delft server configurations:
f47c2fc7f8/delft/common.nix (L91-L101)
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
I had to make several adjustments to make it work with NixOS:
* Replace relative config file lookups with ENV variable.
* Modify gitlab-shell to not clear the environment when running
pre-receive.
* Modify gitlab-shell to write some environment variables into
the .authorized_keys file to make sure gitlab-shell reads the
correct config file.
* Log unicorn output to syslog.
I tried various ways of adding a syslog package but the bundler would
not pick them up. Please fix in a better way if possible.
* Gitlab-runner program wrapper.
This is useful to run e.g. backups with the correct
environment set up.
Details:
* The option `fonts.fontconfig.ultimate.enable` can be used to disable
the fontconfig-ultimate configuration.
* The user-configurable options provided by fontconfig-ultimate are
exposed in the NixOS module: `allowBitmaps` (default: true),
`allowType1` (default: false), `useEmbeddedBitmaps` (default: false),
`forceAutohint` (default: false), `renderMonoTTFAsBitmap` (default:
false).
* Upstream provides three substitution modes for substituting TrueType
fonts for Type 1 fonts (which do not render well). The default,
"free", substitutes free fonts for Type 1 fonts. The option "ms"
substitutes Microsoft fonts for Type 1 fonts. The option "combi"
uses a combination of Microsoft and free fonts. Substitutions can also
be disabled.
* All 21 of the Infinality rendering modes supported by fontconfig-ultimate
or by the original Infinality distribution can be selected through
`fonts.fontconfig.ultimate.rendering`. The default is the medium style
provided by fontconfig-ultimate. Any of the modes may be customized,
or Infinality rendering can be disabled entirely.
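As a rough illustration, a configuration using these options might look like the sketch below. The sub-option names follow the list above, but their exact placement under `fonts.fontconfig.ultimate` is an assumption for illustration:
```nix
{
  fonts.fontconfig.ultimate = {
    enable = true;  # set to false to disable the fontconfig-ultimate configuration
    # option names as listed above; nesting them directly under
    # fonts.fontconfig.ultimate is assumed here
    allowBitmaps = true;
    allowType1 = false;
    useEmbeddedBitmaps = false;
    forceAutohint = false;
    renderMonoTTFAsBitmap = false;
  };
}
```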
'torify' now ships with the tor bundle itself, and using torsocks is
recommended over tsocks (torify will use torsocks automatically).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
We will simply rename the previous module and add a warning whenever the
module is included directly, pointing the user to the right option; we also
enable it (in case somebody has missed the option and is wondering why
VirtualBox doesn't work anymore).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The dnscrypt-proxy service relays regular DNS queries to
a DNSCrypt enabled upstream resolver.
The traffic between the client and the upstream resolver is
encrypted and authenticated, which may mitigate the risk of
MITM attacks and third-party snooping (assuming a trustworthy
upstream).
Though dnscrypt-proxy can run as a standalone DNS client,
the recommended setup is to use it as a forwarder for a
caching DNS client.
To use dnscrypt-proxy as a forwarder for dnsmasq, do
```nix
{
  # ...
  networking.nameservers = [ "127.0.0.1" ];
  networking.dhcpcd.extraConfig = "nohook resolv.conf";

  services.dnscrypt-proxy.enable = true;
  services.dnscrypt-proxy.localAddress = "127.0.0.1";
  services.dnscrypt-proxy.port = 40;

  services.dnsmasq.enable = true;
  services.dnsmasq.extraConfig = ''
    no-resolv
    server=127.0.0.1#40
    listen-address=127.0.0.1
  '';
  # ...
}
```
I'm not using JFS, but this is mainly to make jfsutils available if you
have defined a JFS filesystem in your configuration.
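For instance, a configuration declaring a JFS filesystem roughly like the sketch below (the mount point and device are placeholders) will now pull in jfsutils automatically:
```nix
{
  # placeholder mount point and device; any filesystem with fsType = "jfs"
  # makes jfsutils available
  fileSystems."/data" = {
    device = "/dev/sdb1";
    fsType = "jfs";
  };
}
```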
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is proprietary software, and NixOS is intended as a free software
distribution. We currently don't have a mechanism like allowUnfree for
NixOS modules, so it's better to leave out modules for such
packages. Of course, they can still be activated by doing:
imports = [ <nixpkgs/nixos/services/networking/copy-com.nix> ];
This conflicts with the existing reference NTP daemon, so we're using
services.ntp.enable = mkForce false here to make sure both services
aren't enabled in parallel.
I did try to merge the module with services.ntp, but it would
have been quite a mess with a bunch of conditions on the package name.
They both have a bit in common when it comes to the configuration files,
but differ in the handling of the state dir (for example, OpenNTPd doesn't
allow it to be owned by anything other than root).
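A minimal sketch of that conflict handling inside the module (assuming the new enable option is called services.openntpd.enable, which is not stated above):
```nix
{ config, lib, ... }:

{
  config = lib.mkIf config.services.openntpd.enable {
    # make sure the reference NTP daemon and OpenNTPd are never enabled in parallel
    services.ntp.enable = lib.mkForce false;
  };
}
```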
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now that the fail2ban service has the ".enable" option, I think it's
time to add it to the module list, so that we can enable it in
configuration.nix like this:
services.fail2ban.enable = true;
/tmp cleaning is done by systemd rather than stage-2-init
enableEmergencyMode moved from systemd to separate module
new option to mount tmp on tmpfs
new option to enable additional units shipped with systemd
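The two new options above might be used roughly as in the sketch below; the option names are illustrative guesses rather than the exact names introduced here:
```nix
{
  # assumed option names, for illustration only
  boot.tmpOnTmpfs = true;                    # mount /tmp on tmpfs
  systemd.additionalUpstreamSystemUnits = [  # enable extra units shipped with systemd
    "debug-shell.service"
  ];
}
```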
This version of the module has socketActivation disabled, because systemd
does not support SocketGroup until version 214, which NixOS has not yet
upgraded to; the socket would therefore be created with the "root" group
when socketActivation is enabled. This should be fixed as soon as systemd
is upgraded.
Includes changes from #3015 and supersedes #3028
- Upgrade Nagios Core to 4.x
- Expose mainConfigFile and cgiConfigFile in the module for finer
configuration control (see the sketch below).
- Upgrade Plugins to 2.x
- Remove default objectDefs, which users probably want to customize.
- Systemd-ify Nagios module and simplify directory structure
- Upgrade the Nagios package with a more modern patch, and ensure the
statedir is set to /var/lib/nagios
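A hedged example of the finer configuration control (assuming the module lives under services.nagios; only mainConfigFile, cgiConfigFile and objectDefs are named above, and the paths are placeholders):
```nix
{
  services.nagios = {
    enable = true;                         # assumed enable flag
    # point the exposed options at your own configuration files
    mainConfigFile = ./nagios/nagios.cfg;
    cgiConfigFile  = ./nagios/cgi.cfg;
    # object definitions no longer have a default, so supply your own
    objectDefs = [ ./nagios/objects.cfg ];
  };
}
```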
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This allows you to use the Linux kernel's built-in support for compressed
memory as swap space.
It is recommended to enable this only on kernel 3.14 or higher, which is when
zram came out of the staging drivers area.
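The commit text above doesn't name the option, so as an assumed example it might be enabled like this:
```nix
{
  # option name assumed for illustration
  zramSwap.enable = true;
}
```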
Previously all card-specific stuff was scattered across xserver.nix
and opengl.nix, which is ugly. Now it can be kept together in a single
card-specific module. This required the addition of a few internal
options:
- services.xserver.drivers: A list of { name, driverName, modules,
libPath } sets.
- hardware.opengl.package: The OpenGL implementation. Note that there
can be only one OpenGL implementation at a time in a system
configuration (i.e. no dynamic detection).
- hardware.opengl.package32: The 32-bit OpenGL implementation.
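As a rough sketch of how a card-specific module can feed these internal options (all values below are placeholders, not a real driver definition):
```nix
{ config, pkgs, ... }:

{
  services.xserver.drivers = [
    { name = "mydriver";                      # placeholder driver name
      driverName = "mydriver";
      modules = [ pkgs.xorg.xf86videovesa ];  # placeholder X driver module
      libPath = [ ];
    }
  ];
  # exactly one OpenGL implementation can be active at a time
  hardware.opengl.package = pkgs.mesa;
  # hardware.opengl.package32 = ...;          # 32-bit counterpart, if needed
}
```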
This module implements a significant refactoring in grsecurity
configuration for NixOS, making it far more usable by default and much
easier to configure.
- New security.grsecurity NixOS attributes.
- All grsec kernels supported
- Allows default 'auto' grsec configuration, or custom config
- Supports custom kernel options through kernelExtraConfig
- Defaults to high-security - user must choose kernel, server/desktop
mode, and any virtualisation software. That's all.
- kptr_restrict is fixed under grsecurity (it's unwriteable)
- grsecurity patch creation is now significantly abstracted
- only need revision, version, and SHA1
- kernel version requirements are asserted for sanity
- built kernels can have the uname specify the exact grsec version
for development or bug reports. Off by default (requires
`security.grsecurity.config.verboseVersion = true;`)
- grsecurity sysctl support
- By default, disabled.
- For people who enable it, NixOS deploys a 'grsec-lock' systemd
service which runs at startup. You are expected to configure sysctl
through NixOS like you regularly would, which will occur before the
service is started. As a result, changing sysctl settings requires
a reboot.
- New default group: 'grsecurity'
- Root is a member by default
- GRKERNSEC_PROC_GID is implicitly set to the 'grsecurity' GID,
making it possible to easily add users to this group for /proc
access
- AppArmor is now automatically enabled where it wasn't before, even
though features.apparmor = true was already implied
The most trivial example of enabling grsecurity in your kernel is by
specifying:
security.grsecurity.enable = true;
security.grsecurity.testing = true; # testing 3.13 kernel
security.grsecurity.config.system = "desktop"; # or "server"
This specifies absolutely no virtualisation support. In general, you
probably at least want KVM host support, which is a little more work.
So:
security.grsecurity.enable = true;
security.grsecurity.stable = true; # enable stable 3.2 kernel
security.grsecurity.config = {
  system = "server";
  priority = "security";
  virtualisationConfig = "host";
  virtualisationSoftware = "kvm";
  hardwareVirtualisation = true;
};
This module has primarily been tested on Hetzner EX40 & VQ7 servers
using NixOps.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This module adds the security.duosec attributes, which you can use to
enable simple two-factor authentication for NixOS logins.
The module currently provides PAM and SSH support, although the PAM unix
system configuration isn't automatically dealt with (though the
configuration itself is automatically built).
Enabling it is as easy as saying:
security.duosec.ssh.enable = true;
security.duosec.ikey = "XXXXXXXX...";
security.duosec.skey = "XXXXXXXX...";
security.duosec.host = "api-XXXXXXX.duosecurity.com";
security.duosec.group = "duosec";
which will enforce two-factor authentication for SSH logins for users in
the 'duosec' group.
This requires uid/gid support in the environment.etc module.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Uses standard NixOS user config merging.
Work in progress: The slave config does not actually start the slave agent. This just configures a
jenkins user if required. Bare minimum to enable a nice jenkins SSH slave.
By default the jenkins server is executed under the user "jenkins", which can
be configured using the users.jenkins.* options. If a different user is
requested by changing services.jenkins.user, then none of the users.jenkins
options apply.
This patch does not include jenkins slave configuration. Some config options will probably change
when this is implemented.
Aspects like the user and environment are typically identical between slave and master. The service
configs are different. The design is for users.jenkins to cover the shared aspects while
services.jenkins and services.jenkins-slave cover the master and slave specific aspects,
respectively.
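A hedged sketch of that split (only services.jenkins.user and the users.jenkins.* namespace are named above; the enable flags are assumptions for illustration):
```nix
{
  # shared aspects (user, environment) live under users.jenkins.*
  users.jenkins.enable = true;       # assumed option name

  # master-specific service configuration
  services.jenkins.enable = true;    # assumed option name

  # setting services.jenkins.user to something else opts out of the
  # users.jenkins.* options entirely
  # services.jenkins.user = "jenkins-master";
}
```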
Another option would be to place everything under services.jenkins and have a config that selects
master vs slave.
* Bump bumblebee to 3.2.1
* Remove config.patch - options it added can be passed to ./configure now
* Remove the provided xorg.conf
The provided xorg.conf was causing problems for some users,
and Bumblebee provides its own default configuration anyway.
* Make secondary X11 log to /var/log/X.bumblebee.log
* Add a module for bumblebee
Includes a configuration option for the threshold beneath which to refill
the entropy pool; it defaults to 1024 bits, as this is the number used in
the existing service files of other distros I looked at.
With kmscon, it is now possible to have a system without X that still
needs the mesa setup in /run/opengl-driver
Signed-off-by: Shea Levy <shea@shealevy.com>
This required some changes to systemd unit handling:
* Add an option to specify that a unit is just a symlink
* Allow specified units to overwrite systemd-provided ones
* Have gettys.target require autovt@1.service instead of getty@1.service
Signed-off-by: Shea Levy <shea@shealevy.com>
ntopng is a high-speed web-based traffic analysis and flow collection
tool. Enable it by adding this to configuration.nix:
services.ntopng.enable = true;
Open a browser at http://localhost:3000 and log in with the default
username/password: admin/admin.
You can now say:
systemd.containers.foo.config =
  { services.openssh.enable = true;
    services.openssh.ports = [ 2022 ];
    users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
  };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
from the host:
systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
$ nix-env --set -p /nix/var/nix/profiles/containers/foo \
-f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.
The attribute ‘config.systemd.services.<service-name>.runner’
generates a script that runs the service outside of systemd. This is
useful for testing, and also allows NixOS services to be used outside
of NixOS. For instance, given a configuration file foo.nix:
{ config, pkgs, ... }:

{ services.postgresql.enable = true;
  services.postgresql.package = pkgs.postgresql92;
  services.postgresql.dataDir = "/tmp/postgres";
}
you can build and run PostgreSQL as follows:
$ nix-build -A config.systemd.services.postgresql.runner -I nixos-config=./foo.nix
$ ./result
This will run the service's ExecStartPre, ExecStart, ExecStartPost and
ExecStopPost commands in an appropriate environment. It doesn't work
well yet for "forking" services, since it can't track the main
process. It also doesn't work for services that assume they're always
executed by root.