It's only needed during early boot (in fact, it's probably not needed
at all on NixOS). Restarting it is expensive because it does a sync()
of the root file system.
systemd's escaping rules translate this into a string containing '\',
which some code paths treat as quoted and others as unquoted, causing
the affected units to fail.
Restarting user@ instances is bad because it causes all user services
(such as ssh-agent.service) to be restarted. Maybe one day we can have
switch-to-configuration restart user units in a fine-grained way, but
for now we should just ignore user systemd instances.
Backport: 14.04
This reverts commit 6eaced3582. Doesn't
work very well, e.g. if you actually have the FUSE module loaded. And
in any case it's already fixed in NixOps.
These fail to mount if you don't have the appropriate kernel support,
and this confuses NixOps' ‘check’ command. We should teach NixOps not
to complain about non-essential mount points, but in the meantime it's
better to turn them off.
With ‘systemd.user.units’ and ‘systemd.user.services’, you can specify
units used by per-user systemd instances. For example,
  systemd.user.services.foo =
    { description = "foo";
      wantedBy = [ "default.target" ];
      serviceConfig.ExecStart = "${pkgs.foo}/bin/foo";
    };
declares a unit ‘foo.service’ that gets started automatically when the
user systemd instance starts, and is stopped when the user systemd
instance stops.
Note that there is at most one systemd instance per user: it's created
when a user logs in and there is no systemd instance for that user
yet, and it's removed when the user fully logs out (i.e. has no
sessions anymore). So if you're simultaneously logged in via X11 and a
virtual console, you get only one copy of foo.
If you define a unit, and either systemd or a package in
systemd.packages already provides that unit, then we now generate a
file /etc/systemd/system/<unit>.d/overrides.conf. This makes it
possible to use upstream units, while allowing them to be customised
from the NixOS configuration. For instance, the module nix-daemon.nix
now uses the units provided by the Nix package. And all unit
definitions that duplicated upstream systemd units are finally gone.
This makes the baseUnit option unnecessary, so I've removed it.
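For example (a sketch only; the ‘Nice’ setting on nix-daemon.service is
purely illustrative and not something this change prescribes), a NixOS
configuration can now tweak a package-provided unit and the change ends
up in the generated drop-in rather than replacing the unit:

  { config, pkgs, ... }:

  {
    # Merged into the nix-daemon.service shipped by the Nix package via
    # /etc/systemd/system/nix-daemon.service.d/overrides.conf.
    systemd.services.nix-daemon.serviceConfig.Nice = 5;
  }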
This allows specifying rules for systemd-tmpfiles.
Also, enable systemd-tmpfiles-clean.timer so that stuff is cleaned up
automatically 15 minutes after boot and every day, *if* you have the
appropriate cleanup rules (which we don't have by default).
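For example (a sketch, assuming the rules are exposed through an option
such as systemd.tmpfiles.rules; the path and age below are made up),
entries use the tmpfiles.d(5) syntax:

  {
    # Create /var/cache/foo at boot and let the clean timer remove
    # anything in it older than ten days.
    systemd.tmpfiles.rules = [
      "d /var/cache/foo 0755 root root 10d"
    ];
  }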
This prevents insidious errors once systemd begins handling the unit. If
the unit is loaded at boot, any errors of this nature are logged to the
console before the journal service is running. This makes it very hard
to diagnose the issue. Therefore, this assertion helps guarantee the
mistake is not made.
Note that systemd no longer depends on dbus, so we're rid of the
cyclic dependency problem between systemd and dbus.
This commit incorporates changes from wkennington's systemd branch
(203dcff45002a63f6be75c65f1017021318cc839,
1f842558a95947261ece66f707bfa24faf5a9d88).
Using pkgs.lib on the spine of module evaluation is problematic
because the pkgs argument depends on the result of module
evaluation. To prevent an infinite recursion, pkgs and some of the
modules are evaluated twice, which is inefficient. Using ‘with lib’
prevents this problem.
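A typical module head then looks like this (a sketch; ‘foo’ is a
made-up example option):

  { config, lib, pkgs, ... }:

  with lib;   # instead of ‘with pkgs.lib;’

  {
    options.foo.enable = mkEnableOption "foo";
  }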
This allows defining systemd.path(5) units, for example like this:
  {
    systemd = let
      description = "Set Key Permissions for xyz.key";
    in {
      paths.set-key-perms = {
        inherit description;
        before = [ "network.target" ];
        wantedBy = [ "multi-user.target" ];
        pathConfig.PathChanged = "/run/keys/xyz.key";
      };

      services.set-key-perms = {
        inherit description;
        serviceConfig.Type = "oneshot";
        script = "chown myspecialkeyuser /run/keys/xyz.key";
      };
    };
  }
The example here is actually useful: it sets permissions for the NixOps
keys target, ensuring those permissions aren't reset whenever the key
file is re-uploaded.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
You can now say:
systemd.services.foo.baseUnit = "${pkgs.foo}/.../foo.service";
This will cause NixOS' generated foo.service file to include
foo.service from the foo package. You can then apply local
customization in the usual way:
systemd.services.foo.serviceConfig.MemoryLimit = "512M";
Note however that overriding options in the original unit may not
work. For instance, you cannot override ExecStart.
It's also possible to customize instances of template units:
systemd.services."getty@tty4" =
{ baseUnit = "/etc/systemd/system/getty@.service";
serviceConfig.MemoryLimit = "512M";
};
This replaces the unit options linkTarget (which didn't allow
customisation) and extraConfig (which did allow customisation, but in
a non-standard way).
This will allow overriding package-provided units, or overriding only a
specific instance of a unit template.
Signed-off-by: Shea Levy <shea@shealevy.com>
This required some changes to systemd unit handling:
* Add an option to specify that a unit is just a symlink
* Allow specified units to overwrite systemd-provided ones
* Have gettys.target require autovt@1.service instead of getty@1.service
Signed-off-by: Shea Levy <shea@shealevy.com>
You can now say:
  systemd.containers.foo.config =
    { services.openssh.enable = true;
      services.openssh.ports = [ 2022 ];
      users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
    };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
from the host:
systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
  $ nix-env --set -p /nix/var/nix/profiles/containers/foo \
      -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.