Merge remote-tracking branch 'origin/master' into gcc-6

This commit is contained in:
Eelco Dolstra 2017-07-21 11:05:58 +02:00
commit a13802b2c8
No known key found for this signature in database
GPG Key ID: 8170B4726D7198DE
1461 changed files with 36920 additions and 23576 deletions

View File

@ -11,6 +11,7 @@
- [ ] NixOS
- [ ] macOS
- [ ] Linux
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

View File

@ -38,5 +38,5 @@ For pull-requests, please rebase onto nixpkgs `master`.
Communication:
* [Mailing list](http://lists.science.uu.nl/mailman/listinfo/nix-dev)
* [Mailing list](https://groups.google.com/forum/#!forum/nix-devel)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)

View File

@ -243,5 +243,218 @@ set of packages.
</section>
<section xml:id="sec-declarative-package-management">
<title>Declarative Package Management</title>
<section xml:id="sec-building-environment">
<title>Build an environment</title>
<para>
Using <literal>packageOverrides</literal>, it is possible to manage
packages declaratively. This means that we can list all of our desired
packages within a declarative Nix expression. For example, to have
<literal>aspell</literal>, <literal>bc</literal>,
<literal>ffmpeg</literal>, <literal>coreutils</literal>,
<literal>gdb</literal>, <literal>nixUnstable</literal>,
<literal>emscripten</literal>, <literal>jq</literal>,
<literal>nox</literal>, and <literal>silver-searcher</literal>, we could
use the following in <filename>~/.config/nixpkgs/config.nix</filename>:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
};
};
}
</screen>
<para>
To install it into our environment, we can just run <literal>nix-env -iA
nixpkgs.myPackages</literal>. If you want the packages to be built from a
working copy of <literal>nixpkgs</literal>, just run
<literal>nix-env -f. -iA myPackages</literal>. To explore what's been
installed, look through <filename>~/.nix-profile/</filename>. You can see
that a lot of stuff has been installed; some of it is useful, some of it
isn't. Let's tell Nixpkgs to link only the things we want:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
pathsToLink = [ "/share" "/bin" ];
};
};
}
</screen>
<para>
<literal>pathsToLink</literal> tells Nixpkgs to link only the paths listed,
which gets rid of the extra stuff in the profile. <filename>/bin</filename>
and <filename>/share</filename> are good defaults for a user environment,
getting rid of the clutter. If you are running Nix on macOS, you may want to
add another path, <filename>/Applications</filename>, which makes GUI
applications available.
</para>
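<para>
  As a sketch of what that might look like on macOS (the package list is
  unchanged from the example above):
</para>
<screen>
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
      pathsToLink = [ "/share" "/bin" "/Applications" ];
    };
  };
}
</screen>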
</section>
<section xml:id="sec-getting-documentation">
<title>Getting documentation</title>
<para>
After building that new environment, look through
<filename>~/.nix-profile</filename> to make sure everything is there that
we wanted. Discerning readers will note that some files are missing. Look
inside <filename>~/.nix-profile/share/man/man1/</filename> to verify this.
There are no man pages for any of the Nix tools! This is because some
packages like Nix have multiple outputs for things like documentation (see
section 4). Let's make Nix install those as well.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils ffmpeg nixUnstable emscripten jq nox silver-searcher ];
pathsToLink = [ "/share/man" "/share/doc" /bin" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
This provides us with some useful documentation for using our packages.
However, if we actually want those manpages to be detected by man, we need
to set up our environment. This can also be managed within Nix
expressions.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
];
pathsToLink = [ "/share/man" "/share/doc" /bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
For this to work fully, you must also have this script sourced when you
are logged in. Try adding something like this to your
<filename>~/.profile</filename> file:
</para>
<screen>
#!/bin/sh
if [ -d $HOME/.nix-profile/etc/profile.d ]; then
for i in $HOME/.nix-profile/etc/profile.d/*.sh; do
if [ -r $i ]; then
. $i
fi
done
fi
</screen>
<para>
Now just run <literal>source $HOME/.profile</literal> and you can start
loading man pages from your environment.
</para>
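<para>
  As a quick check, assuming the example environment above is installed and
  the profile script has been sourced, a man page shipped by one of its
  packages should now resolve:
</para>
<screen>
$ man nix-env
</screen>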
</section>
<section xml:id="sec-gnu-info-setup">
<title>GNU info setup</title>
<para>
Configuring GNU info is a little bit trickier than man pages. To work
correctly, info needs a database to be generated. This can be done with
some small modifications to our environment scripts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
export INFOPATH=$HOME/.nix-profile/share/info:/nix/var/nix/profiles/default/share/info:/usr/share/info
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
texinfoInteractive
];
pathsToLink = [ "/share/man" "/share/doc" "/share/info" "/bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" "info" ];
postBuild = ''
if [ -x $out/bin/install-info -a -w $out/share/info ]; then
shopt -s nullglob
for i in $out/share/info/*.info $out/share/info/*.info.gz; do
$out/bin/install-info $i $out/share/info/dir
done
fi
'';
};
};
}
</screen>
<para>
<literal>postBuild</literal> tells Nixpkgs to run a command after building
the environment. In this case, <literal>install-info</literal> adds the
installed info pages to <literal>dir</literal>, which is GNU info's default
root node. Note that <literal>texinfoInteractive</literal> is added to the
environment to provide the <literal>install-info</literal> command.
</para>
</section>
</section>
</chapter>

View File

@ -37,8 +37,9 @@
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All three are always defined at the top level, so one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...</programlisting>
All three are always defined as attributes in the standard environment, and at the top level. That means one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...buildPlatform...</programlisting>, or just off <varname>stdenv</varname>:
<programlisting>{ stdenv, fooDep, barDep, .. }: ...stdenv.buildPlatform...</programlisting>.
</para>
<variablelist>
<varlistentry>
@ -79,11 +80,6 @@
</listitem>
</varlistentry>
</variablelist>
<note><para>
If you dig around nixpkgs, you may notice there is also <varname>stdenv.cross</varname>.
This field defined as <varname>hostPlatform</varname> when the host and build platforms differ, but otherwise not defined at all.
This field is obsolete and will soon disappear—please do not use it.
</para></note>
<para>
The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up.
You can see examples of ones used in practice in <literal>lib.systems.examples</literal>; note how they are not all very consistent.

View File

@ -26,7 +26,7 @@ pkgs.stdenv.mkDerivation {
extraHeader = ''xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" '';
in ''
{
pandoc '${inputFile}' -w docbook ${lib.optionalString useChapters "--chapters"} \
pandoc '${inputFile}' -w docbook ${lib.optionalString useChapters "--top-level-division=chapter"} \
--smart \
| sed -e 's|<ulink url=|<link xlink:href=|' \
-e 's|</ulink>|</link>|' \

View File

@ -2,60 +2,120 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-beam">
<title>Beam Languages (Erlang &amp; Elixir)</title>
<title>BEAM Languages (Erlang, Elixir &amp; LFE)</title>
<section xml:id="beam-introduction">
<title>Introduction</title>
<para>
In this document and related Nix expressions we use the term
<emphasis>Beam</emphasis> to describe the environment. Beam is
the name of the Erlang Virtial Machine and, as far as we know,
from a packaging perspective all languages that run on Beam are
interchangable. The things that do change, like the build
system, are transperant to the users of the package. So we make
no distinction.
In this document and related Nix expressions, we use the term,
<emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name
of the Erlang Virtual Machine and, as far as we're concerned, from a
packaging perspective, all languages that run on the BEAM are
interchangeable. That which varies, like the build system, is transparent
to users of any given BEAM package, so we make no distinction.
</para>
</section>
<section xml:id="build-tools">
<section xml:id="beam-structure">
<title>Structure</title>
<para>
All BEAM-related expressions are available via the top-level
<literal>beam</literal> attribute, which includes:
</para>
<itemizedlist>
<listitem>
<para>
<literal>interpreters</literal>: a set of compilers running on the
BEAM, including multiple Erlang/OTP versions
(<literal>beam.interpreters.erlangR19</literal>, etc), Elixir
(<literal>beam.interpreters.elixir</literal>) and LFE
(<literal>beam.interpreters.lfe</literal>).
</para>
</listitem>
<listitem>
<para>
<literal>packages</literal>: a set of package sets, each compiled with
a specific Erlang/OTP version, e.g.
<literal>beam.packages.erlangR19</literal>.
</para>
</listitem>
</itemizedlist>
<para>
The default Erlang compiler, defined by
<literal>beam.interpreters.erlang</literal>, is aliased as
<literal>erlang</literal>. The default BEAM package set is defined by
<literal>beam.packages.erlang</literal> and aliased at the top level as
<literal>beamPackages</literal>.
</para>
<para>
To create a package set built with a custom Erlang version, use the
lambda, <literal>beam.packagesWith</literal>, which accepts an Erlang/OTP
derivation and produces a package set similar to
<literal>beam.packages.erlang</literal>.
</para>
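<para>
  For example, a package set pinned to a particular Erlang/OTP release could
  be defined as follows (a sketch; <literal>myBeamPackages</literal> is just
  an illustrative name):
</para>
<programlisting>
myBeamPackages = beam.packagesWith beam.interpreters.erlangR19;
</programlisting>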
<para>
Many Erlang/OTP distributions available in
<literal>beam.interpreters</literal> have versions with ODBC and/or Java
enabled. For example, there's
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which
corresponds to <literal>beam.interpreters.erlangR19</literal>.
</para>
<para xml:id="erlang-call-package">
We also provide the lambda,
<literal>beam.packages.erlang.callPackage</literal>, which simplifies
writing BEAM package definitions by injecting all packages from
<literal>beam.packages.erlang</literal> into the top-level context.
</para>
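<para>
  For example, assuming a Rebar3-style package definition like the one in
  <xref linkend="rebar3-packages"/> lives in a hypothetical file
  <filename>hex2nix.nix</filename>, it can be called with:
</para>
<programlisting>
beam.packages.erlang.callPackage ./hex2nix.nix { }
</programlisting>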
</section>
<section xml:id="build-tools">
<title>Build Tools</title>
<section xml:id="build-tools-rebar3">
<title>Rebar3</title>
<para>
By default Rebar3 wants to manage it's own dependencies. In the
normal non-Nix, this is perfectly acceptable. In the Nix world it
is not. To support this we have created two versions of rebar3,
<literal>rebar3</literal> and <literal>rebar3-open</literal>. The
<literal>rebar3</literal> version has been patched to remove the
ability to download anything from it. If you are not running it a
nix-shell or a nix-build then its probably not going to work for
you. <literal>rebar3-open</literal> is the normal, un-modified
rebar3. It should work exactly as would any other version of
rebar3. Any Erlang package should rely on
<literal>rebar3</literal> and thats really what you should be
using too.
By default, Rebar3 wants to manage its own dependencies. This is perfectly
acceptable in the normal, non-Nix setup, but in the Nix world, it is not.
To rectify this, we provide two versions of Rebar3:
<itemizedlist>
<listitem>
<para>
<literal>rebar3</literal>: patched to remove the ability to download
anything. When not running it via <literal>nix-shell</literal> or
<literal>nix-build</literal>, it's probably not going to work as
desired.
</para>
</listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It
should work exactly as would any other version of Rebar3. Any Erlang
package should rely on <literal>rebar3</literal> instead. See <xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="build-tools-other">
<title>Mix &amp; Erlang.mk</title>
<para>
Both Mix and Erlang.mk work exactly as you would expect. There
is a bootstrap process that needs to be run for both of
them. However, that is supported by the
<literal>buildMix</literal> and <literal>buildErlangMk</literal> derivations.
Both Mix and Erlang.mk work exactly as expected. There is a bootstrap
process that needs to be run for both, however, which is supported by the
<literal>buildMix</literal> and <literal>buildErlangMk</literal>
derivations, respectively.
</para>
</section>
</section>
<section xml:id="how-to-install-beam-packages">
<title>How to install Beam packages</title>
<title>How to Install BEAM Packages</title>
<para>
Beam packages are not registered in the top level simply because
they are not relevant to the vast majority of Nix users. They are
installable using the <literal>beamPackages</literal> attribute
set.
BEAM packages are not registered at the top level, simply because they are
not relevant to the vast majority of Nix users. They are installable using
the <literal>beam.packages.erlang</literal> attribute set (aliased as
<literal>beamPackages</literal>), which points to packages built by the
default Erlang/OTP version in Nixpkgs, as defined by
<literal>beam.interpreters.erlang</literal>.
You can list the avialable packages in the
<literal>beamPackages</literal> with the following command:
To list the available packages in
<literal>beamPackages</literal>, use the following command:
</para>
<programlisting>
@ -69,115 +129,152 @@ beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0
</programlisting>
<para>
To install any of those packages into your profile, refer to them by
their attribute path (first column):
To install any of those packages into your profile, refer to them by their
attribute path (first column):
</para>
<programlisting>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting>
<para>
The attribute path of any Beam packages corresponds to the name
of that particular package in Hex or its OTP Application/Release name.
The attribute path of any BEAM package corresponds to the name of that
particular package in <link xlink:href="https://hex.pm">Hex</link> or its
OTP Application/Release name.
</para>
</section>
<section xml:id="packaging-beam-applications">
<title>Packaging Beam Applications</title>
<title>Packaging BEAM Applications</title>
<section xml:id="packaging-erlang-applications">
<title>Erlang Applications</title>
<section xml:id="rebar3-packages">
<title>Rebar3 Packages</title>
<para>
There is a Nix functional called
<literal>buildRebar3</literal>. We use this function to make a
derivation that understands how to build the rebar3 project. For
example, the epression we use to build the <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>
project follows.
The Nix function, <literal>buildRebar3</literal>, defined in
<literal>beam.packages.erlang.buildRebar3</literal> and aliased at the
top level, can be used to build a derivation that understands how to
build a Rebar3 project. For example, we can build <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> as
follows:
</para>
<programlisting>
{stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
{ stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec {
name = "hex2nix";
version = "0.0.1";
buildRebar3 rec {
name = "hex2nix";
version = "0.0.1";
src = fetchFromGitHub {
owner = "ericbmerritt";
repo = "hex2nix";
rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
};
src = fetchFromGitHub {
owner = "ericbmerritt";
repo = "hex2nix";
rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
};
beamDeps = [ ibrowse jsx erlware_commons ];
}
</programlisting>
<para>
The only visible difference between this derivation and
something like <literal>stdenv.mkDerivation</literal> is that we
have added <literal>erlangDeps</literal> to the derivation. If
you add your Beam dependencies here they will be correctly
handled by the system.
Such derivations are callable with
<literal>beam.packages.erlang.callPackage</literal> (see <xref
linkend="erlang-call-package"/>). To call this package using the normal
<literal>callPackage</literal>, refer to dependency packages via
<literal>beamPackages</literal>, e.g.
<literal>beamPackages.ibrowse</literal>.
</para>
<para>
If your package needs to compile native code via Rebar's port
compilation mechenism. You should add <literal>compilePort =
true;</literal> to the derivation.
Notably, <literal>buildRebar3</literal> includes
<literal>beamDeps</literal>, while
<literal>stdenv.mkDerivation</literal> does not. BEAM dependencies added
there will be correctly handled by the system.
</para>
<para>
If a package needs to compile native code via Rebar3's port compilation
mechanism, add <literal>compilePort = true;</literal> to the derivation.
</para>
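<para>
  As a purely illustrative sketch (the package name is hypothetical; the rest
  of the derivation looks just like the Rebar3 example above):
</para>
<programlisting>
buildRebar3 rec {
  name = "my-nif-package";
  version = "0.0.1";
  # src and beamDeps as in any other Rebar3 package
  compilePort = true;
}
</programlisting>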
</section>
<section xml:id="erlang-mk-packages">
<title>Erlang.mk Packages</title>
<para>
Erlang.mk functions almost identically to Rebar. The only real
difference is that <literal>buildErlangMk</literal> is called
instead of <literal>buildRebar3</literal>
Erlang.mk functions similarly to Rebar3, except we use
<literal>buildErlangMk</literal> instead of
<literal>buildRebar3</literal>.
</para>
<programlisting>
{ buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
name = "cowboy";
version = "1.0.4";
src = fetchHex {
pkg = "cowboy";
version = "1.0.4";
sha256 =
"6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
{ buildErlangMk, fetchHex, cowlib, ranch }:
meta = {
description = ''Small, fast, modular HTTP server written in
Erlang.'';
license = stdenv.lib.licenses.isc;
homepage = "https://github.com/ninenines/cowboy";
};
buildErlangMk {
name = "cowboy";
version = "1.0.4";
src = fetchHex {
pkg = "cowboy";
version = "1.0.4";
sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
meta = {
description = ''
Small, fast, modular HTTP server written in Erlang
'';
license = stdenv.lib.licenses.isc;
homepage = https://github.com/ninenines/cowboy;
};
}
</programlisting>
</section>
<section xml:id="mix-packages">
<title>Mix Packages</title>
<para>
Mix functions almost identically to Rebar. The only real
difference is that <literal>buildMix</literal> is called
instead of <literal>buildRebar3</literal>
Mix functions similarly to Rebar3, except we use
<literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
</para>
<programlisting>
{ buildMix, fetchHex, plug, absinthe }:
buildMix {
name = "absinthe_plug";
version = "1.0.0";
src = fetchHex {
pkg = "absinthe_plug";
version = "1.0.0";
sha256 =
"08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
};
beamDeps = [ plug absinthe];
beamDeps = [ plug absinthe ];
meta = {
description = ''A plug for Absinthe, an experimental GraphQL
toolkit'';
description = ''
A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3;
homepage = "https://github.com/CargoSense/absinthe_plug";
homepage = https://github.com/CargoSense/absinthe_plug;
};
}
</programlisting>
<para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para>
<programlisting>
{ buildHex, buildMix, plug, absinthe }:
buildHex {
name = "absinthe_plug";
version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
builder = buildMix;
beamDeps = [ plug absinthe ];
meta = {
description = ''
A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug;
};
}
</programlisting>
@ -185,18 +282,18 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</section>
</section>
<section xml:id="how-to-develop">
<title>How to develop</title>
<title>How to Develop</title>
<section xml:id="accessing-an-environment">
<title>Accessing an Environment</title>
<para>
Often, all you want to do is be able to access a valid
environment that contains a specific package and its
dependencies. we can do that with the <literal>env</literal>
part of a derivation. For example, lets say we want to access an
erlang repl with ibrowse loaded up. We could do the following.
Often, we simply want to access a valid environment that contains a
specific package and its dependencies. We can accomplish that with the
<literal>env</literal> attribute of a derivation. For example, let's say
we want to access an Erlang REPL with <literal>ibrowse</literal> loaded
up. We could do the following:
</para>
<programlisting>
~/w/nixpkgs nix-shell -A beamPackages.ibrowse.env --run "erl"
$ nix-shell -A beamPackages.ibrowse.env --run "erl"
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G)
@ -237,20 +334,19 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
2>
</programlisting>
<para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>.That
is the key to this functionality.
Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
to this functionality.
</para>
</section>
<section xml:id="creating-a-shell">
<title>Creating a Shell</title>
<para>
Getting access to an environment often isn't enough to do real
development. Many times we need to create a
<literal>shell.nix</literal> file and do our development inside
of the environment specified by that file. This file looks a lot
like the packaging described above. The main difference is that
<literal>src</literal> points to project root and we call the
package directly.
development. Usually, we need to create a <literal>shell.nix</literal>
file and do our development inside of the environment specified therein.
This file looks a lot like the packaging described above, except that
<literal>src</literal> points to the project root and we call the package
directly.
</para>
<programlisting>
{ pkgs ? import &quot;&lt;nixpkgs&quot;&gt; {} }:
@ -264,18 +360,19 @@ let
name = "hex2nix";
version = "0.1.0";
src = ./.;
erlangDeps = [ ibrowse jsx erlware_commons ];
beamDeps = [ ibrowse jsx erlware_commons ];
};
drv = beamPackages.callPackage f {};
in
drv
drv
</programlisting>
<section xml:id="building-in-a-shell">
<title>Building in a shell</title>
<title>Building in a Shell (for Mix Projects)</title>
<para>
We can leveral the support of the Derivation, regardless of
which build Derivation is called by calling the commands themselv.s
We can leverage the support of the derivation, irrespective of the build
derivation, by calling the commands themselves.
</para>
<programlisting>
# =============================================================================
@ -335,42 +432,43 @@ analyze: build plt
</programlisting>
<para>
If you add the <literal>shell.nix</literal> as described and
user rebar as follows things should simply work. Aside from the
Using a <literal>shell.nix</literal> as described (see <xref
linkend="creating-a-shell"/>) should just work. Aside from
<literal>test</literal>, <literal>plt</literal>, and
<literal>analyze</literal> the talks work just fine for all of
the build Derivations.
<literal>analyze</literal>, the Make targets work just fine for all of the
build derivations.
</para>
</section>
</section>
</section>
<section xml:id="generating-packages-from-hex-with-hex2nix">
<title>Generating Packages from Hex with Hex2Nix</title>
<title>Generating Packages from Hex with <literal>hex2nix</literal></title>
<para>
Updating the Hex packages requires the use of the
<literal>hex2nix</literal> tool. Given the path to the Erlang
modules (usually
<literal>pkgs/development/erlang-modules</literal>). It will
happily dump a file called
<literal>hex-packages.nix</literal>. That file will contain all
the packages that use a recognized build system in Hex. However,
it can't know whether or not all those packages are buildable.
Updating the <link xlink:href="https://hex.pm">Hex</link> package set
requires <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>. Given the
path to the Erlang modules (usually
<literal>pkgs/development/erlang-modules</literal>), it will dump a file
called <literal>hex-packages.nix</literal>, containing all the packages that
use a recognized build system in <link
xlink:href="https://hex.pm">Hex</link>. It can't be determined, however,
whether every package is buildable.
</para>
<para>
To make life easier for our users, it makes good sense to go
ahead and attempt to build all those packages and remove the
ones that don't build. To do that, simply run the command (in
the root of your <literal>nixpkgs</literal> repository). that follows.
To make life easier for our users, try to build every <link
xlink:href="https://hex.pm">Hex</link> package and remove those that fail.
To do that, simply run the following command in the root of your
<literal>nixpkgs</literal> repository:
</para>
<programlisting>
$ nix-build -A beamPackages
</programlisting>
<para>
That will build every package in
<literal>beamPackages</literal>. Then you can go through and
manually remove the ones that fail. Hopefully, someone will
improve <literal>hex2nix</literal> in the future to automate
that.
That will attempt to build every package in
<literal>beamPackages</literal>. Then manually remove those that fail.
Hopefully, someone will improve <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> in the
future to automate the process.
</para>
</section>
</section>

View File

@ -13,7 +13,7 @@ standard Go programs.
deis = buildGoPackage rec {
name = "deis-${version}";
version = "1.13.0";
goPackagePath = "github.com/deis/deis"; <co xml:id='ex-buildGoPackage-1' />
subPackages = [ "client" ]; <co xml:id='ex-buildGoPackage-2' />
@ -130,6 +130,9 @@ the following arguments are of special significance to the function:
</para>
<para>To extract dependency information from a Go package in automated way use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce complete derivation and <varname>goDeps</varname> file for Go programs.</para>
<para>
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" />
where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
@ -160,7 +163,4 @@ done
</screen>
</para>
<para>To extract dependency information from a Go package in automated way use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce complete derivation and <varname>goDeps</varname> file for Go programs.</para>
</section>

View File

@ -912,14 +912,14 @@ nix-build -A haskell.packages.integer-simple.ghc802.scientific
- The *Journey into the Haskell NG infrastructure* series of postings
describe the new Haskell infrastructure in great detail:
- [Part 1](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015591.html)
- [Part 1](https://nixos.org/nix-dev/2015-January/015591.html)
explains the differences between the old and the new code and gives
instructions how to migrate to the new setup.
- [Part 2](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015608.html)
- [Part 2](https://nixos.org/nix-dev/2015-January/015608.html)
looks in-depth at how to tweak and configure your setup by means of
overrides.
- [Part 3](http://lists.science.uu.nl/pipermail/nix-dev/2015-April/016912.html)
- [Part 3](https://nixos.org/nix-dev/2015-April/016912.html)
describes the infrastructure that keeps the Haskell package set in Nixpkgs
up-to-date.

View File

@ -8,15 +8,48 @@ date: 2016-06-25
You'll get a vim(-your-suffix) in PATH also loading the plugins you want.
Loading can be deferred; see examples.
VAM (=vim-addon-manager) and Pathogen plugin managers are supported.
Vundle, NeoBundle could be your turn.
Vim packages, VAM (=vim-addon-manager) and Pathogen are supported for loading
packages.
## dependencies by Vim plugins
## Custom configuration
Adding custom .vimrc lines can be done using the following code:
```
vim_configurable.customize {
name = "vim-with-plugins";
vimrcConfig.customRC = ''
set hidden
'';
}
```
## Vim packages
To store your plugins in Vim packages, the following example can be used:
```
vim_configurable.customize {
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
# loaded on launch
start = [ youcompleteme fugitive ];
# manually loadable by calling `:packadd $plugin-name`
opt = [ phpCompletion elm-vim ];
# To automatically load a plugin when opening a filetype, add vimrc lines like:
# autocmd FileType php :packadd phpCompletion
};
}
```
## VAM
### dependencies by Vim plugins
VAM introduced .json files that support dependencies without versioning,
assuming that "using the latest version" is OK most of the time.
## HOWTO
### Example
First create a vim-scripts file having one plugin name per line. Example:

View File

@ -212,7 +212,7 @@ $ nix-env -f . -iA libfoo</screen>
<listitem>
<para>Optionally commit the new package and open a pull request, or send a patch to
<literal>nix-dev@cs.uu.nl</literal>.</para>
<literal>https://groups.google.com/forum/#!forum/nix-devel</literal>.</para>
</listitem>

View File

@ -640,6 +640,16 @@ script) if it exists.</para>
true.</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>configurePlatforms</varname></term>
<listitem><para>
By default, when cross compiling, the configure script has <option>--build=...</option> and <option>--host=...</option> passed.
Packages can instead pass <literal>[ "build" "host" "target" ]</literal> or a subset to control exactly which platform flags are passed.
Compilers and other tools should use this to also pass the target platform, for example.
Note that eventually these will be passed in native builds too, to improve determinism: build-time guessing, as is done today, is a risk of impurity.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>preConfigure</varname></term>
<listitem><para>Hook executed at the start of the configure

View File

@ -51,6 +51,24 @@ rec {
else { }));
/* `makeOverridable` takes a function from attribute set to attribute set and
injects the `override` attribute, which can be used to override arguments of
the function.
nix-repl> x = {a, b}: { result = a + b; }
nix-repl> y = lib.makeOverridable x { a = 1; b = 2; }
nix-repl> y
{ override = «lambda»; overrideDerivation = «lambda»; result = 3; }
nix-repl> y.override { a = 10; }
{ override = «lambda»; overrideDerivation = «lambda»; result = 12; }
Please refer to "Nixpkgs Contributors Guide" section
"<pkg>.overrideDerivation" to learn about `overrideDerivation` and caveats
related to its use.
*/
makeOverridable = f: origArgs:
let
ff = f origArgs;

View File

@ -20,8 +20,32 @@ rec {
traceXMLValMarked = str: x: trace (str + builtins.toXML x) x;
# strict trace functions (traced structure is fully evaluated and printed)
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first. */
traceSeq = x: y: trace (builtins.deepSeq x x) y;
/* Like `traceSeq`, but only down to depth n.
* This is very useful because lots of `traceSeq` usages
* lead to an infinite recursion.
*/
traceSeqN = depth: x: y: with lib;
let snip = v: if isList v then noQuotes "[]" v
else if isAttrs v then noQuotes "{}" v
else v;
noQuotes = str: v: { __pretty = const str; val = v; };
modify = n: fn: v: if (n == 0) then fn v
else if isList v then map (modify (n - 1) fn) v
else if isAttrs v then mapAttrs
(const (modify (n - 1) fn)) v
else v;
in trace (generators.toPretty { allowPrettyValues = true; }
(modify depth snip x)) y;
/* `traceSeq`, but the same value is traced and returned */
traceValSeq = v: traceVal (builtins.deepSeq v v);
/* `traceValSeq` but with fixed depth */
traceValSeqN = depth: v: traceSeqN depth v v;
# this can help debug your code as well - designed to not produce thousands of lines
traceShowVal = x: trace (showVal x) x;

View File

@ -423,4 +423,12 @@ rec {
else if isInt x then "int"
else "string";
/* deprecated:
For historical reasons, imap has an index starting at 1.
But for consistency with the rest of the library we want an index
starting at zero.
*/
imap = imap1;
}

View File

@ -90,4 +90,41 @@ rec {
* parsers as well.
*/
toYAML = {}@args: toJSON args;
/* Pretty print a value, akin to `builtins.trace`.
* Should probably be a builtin as well.
*/
toPretty = {
/* If this option is true, attrsets like { __pretty = fn; val = ; }
will use fn to convert val to a pretty printed representation.
(This means fn is type Val -> String.) */
allowPrettyValues ? false
}@args: v: with builtins;
if isInt v then toString v
else if isBool v then (if v == true then "true" else "false")
else if isString v then "\"" + v + "\""
else if null == v then "null"
else if isFunction v then
let fna = functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name)
fna);
in if fna == {} then "<λ>"
else "<λ:{${showFnas}}>"
else if isList v then "[ "
+ libStr.concatMapStringsSep " " (toPretty args) v
+ " ]"
else if isAttrs v then
# apply pretty values if allowed
if attrNames v == [ "__pretty" "val" ] && allowPrettyValues
then v.__pretty v.val
# TODO: there is probably a better representation?
else if v ? type && v.type == "derivation" then "<δ>"
else "{ "
+ libStr.concatStringsSep " " (libAttr.mapAttrsToList
(name: value:
"${toPretty args name} = ${toPretty args value};") v)
+ " }"
else "toPretty: should never happen (v = ${v})";
}

View File

@ -77,15 +77,21 @@ rec {
*/
foldl' = builtins.foldl' or foldl;
/* Map with index
FIXME(zimbatm): why does this start to count at 1?
/* Map with index starting from 0
Example:
imap (i: v: "${v}-${toString i}") ["a" "b"]
imap0 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-0" "b-1" ]
*/
imap0 = f: list: genList (n: f n (elemAt list n)) (length list);
/* Map with index starting from 1
Example:
imap1 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ]
*/
imap = f: list: genList (n: f (n + 1) (elemAt list n)) (length list);
imap1 = f: list: genList (n: f (n + 1) (elemAt list n)) (length list);
/* Map and concatenate the result.
@ -471,4 +477,12 @@ rec {
*/
subtractLists = e: filter (x: !(elem x e));
/* Test if two lists have no common element.
It should be slightly more efficient than (intersectLists a b == [])
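Example:
mutuallyExclusive [ 1 2 ] [ 3 4 ]
=> true
mutuallyExclusive [ 1 2 ] [ 2 3 ]
=> false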
*/
mutuallyExclusive = a: b:
(builtins.length a) == 0 ||
(!(builtins.elem (builtins.head a) b) &&
mutuallyExclusive (builtins.tail a) b);
}

View File

@ -16,6 +16,7 @@
acowley = "Anthony Cowley <acowley@gmail.com>";
adelbertc = "Adelbert Chang <adelbertc@gmail.com>";
adev = "Adrien Devresse <adev@adev.name>";
adisbladis = "Adam Hose <adis@blad.is>";
Adjective-Object = "Maxwell Huang-Hobbs <mhuan13@gmail.com>";
adnelson = "Allen Nelson <ithinkican@gmail.com>";
adolfogc = "Adolfo E. García Castro <adolfo.garcia.cr@gmail.com>";
@ -42,6 +43,7 @@
andrewrk = "Andrew Kelley <superjoe30@gmail.com>";
andsild = "Anders Sildnes <andsild@gmail.com>";
aneeshusa = "Aneesh Agrawal <aneeshusa@gmail.com>";
ankhers = "Justin Wood <justin.k.wood@gmail.com>";
antono = "Antono Vasiljev <self@antono.info>";
apeschar = "Albert Peschar <albert@peschar.net>";
apeyroux = "Alexandre Peyroux <alex@px.io>";
@ -61,6 +63,7 @@
bachp = "Pascal Bach <pascal.bach@nextrem.ch>";
badi = "Badi' Abdul-Wahid <abdulwahidc@gmail.com>";
balajisivaraman = "Balaji Sivaraman <sivaraman.balaji@gmail.com>";
barrucadu = "Michael Walker <mike@barrucadu.co.uk>";
basvandijk = "Bas van Dijk <v.dijk.bas@gmail.com>";
Baughn = "Svein Ove Aas <sveina@gmail.com>";
bcarrell = "Brandon Carrell <brandoncarrell@gmail.com>";
@ -141,17 +144,20 @@
dfoxfranke = "Daniel Fox Franke <dfoxfranke@gmail.com>";
dgonyeo = "Derek Gonyeo <derek@gonyeo.com>";
dipinhora = "Dipin Hora <dipinhora+github@gmail.com>";
disassembler = "Samuel Leathers <disasm@gmail.com>";
dmalikov = "Dmitry Malikov <malikov.d.y@gmail.com>";
DmitryTsygankov = "Dmitry Tsygankov <dmitry.tsygankov@gmail.com>";
dmjio = "David Johnson <djohnson.m@gmail.com>";
dochang = "Desmond O. Chang <dochang@gmail.com>";
domenkozar = "Domen Kozar <domen@dev.si>";
dotlambda = "Robert Schütz <rschuetz17@gmail.com>";
doublec = "Chris Double <chris.double@double.co.nz>";
dpaetzel = "David Pätzel <david.a.paetzel@gmail.com>";
drets = "Dmytro Rets <dmitryrets@gmail.com>";
drewkett = "Andrew Burkett <burkett.andrew@gmail.com>";
dsferruzza = "David Sferruzza <david.sferruzza@gmail.com>";
dtzWill = "Will Dietz <nix@wdtz.org>";
dywedir = "Vladyslav M. <dywedir@protonmail.ch>";
e-user = "Alexander Kahl <nixos@sodosopa.io>";
ebzzry = "Rommel Martinez <ebzzry@gmail.com>";
edanaher = "Evan Danaher <nixos@edanaher.net>";
@ -166,6 +172,7 @@
ekleog = "Leo Gaspard <leo@gaspard.io>";
elasticdog = "Aaron Bull Schaefer <aaron@elasticdog.com>";
eleanor = "Dejan Lukan <dejan@proteansec.com>";
elijahcaine = "Elijah Caine <elijahcainemv@gmail.com>";
elitak = "Eric Litak <elitak@gmail.com>";
ellis = "Ellis Whitehead <nixos@ellisw.net>";
eperuffo = "Emanuele Peruffo <info@emanueleperuffo.com>";
@ -181,6 +188,7 @@
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
fare = "Francois-Rene Rideau <fahree@gmail.com>";
falsifian = "James Cook <james.cook@utoronto.ca>";
florianjacob = "Florian Jacob <projects+nixos@florianjacob.de>";
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
fmthoma = "Franz Thoma <f.m.thoma@googlemail.com>";
@ -212,6 +220,7 @@
goodrone = "Andrew Trachenko <goodrone@gmail.com>";
gpyh = "Yacine Hmito <yacine.hmito@gmail.com>";
grahamc = "Graham Christensen <graham@grahamc.com>";
grburst = "Julius Elias <grburst@openmailbox.org>";
gridaphobe = "Eric Seidel <eric@seidel.io>";
guibert = "David Guibert <david.guibert@gmail.com>";
guillaumekoenig = "Guillaume Koenig <guillaume.edward.koenig@gmail.com>";
@ -220,8 +229,10 @@
havvy = "Ryan Scheel <ryan.havvy@gmail.com>";
hbunke = "Hendrik Bunke <bunke.hendrik@gmail.com>";
hce = "Hans-Christian Esperer <hc@hcesperer.org>";
hectorj = "Hector Jusforgues <hector.jusforgues+nixos@gmail.com>";
heel = "Sergii Paryzhskyi <parizhskiy@gmail.com>";
henrytill = "Henry Till <henrytill@gmail.com>";
hhm = "hhm <heehooman+nixpkgs@gmail.com>";
hinton = "Tom Hinton <t@larkery.com>";
hodapp = "Chris Hodapp <hodapp87@gmail.com>";
hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>";
@ -245,6 +256,7 @@
jensbin = "Jens Binkert <jensbin@protonmail.com>";
jerith666 = "Matt McHenry <github@matt.mchenryfamily.org>";
jfb = "James Felix Black <james@yamtime.com>";
jfrankenau = "Johannes Frankenau <johannes@frankenau.net>";
jgeerds = "Jascha Geerds <jascha@jgeerds.name>";
jgertm = "Tim Jaeger <jger.tm@gmail.com>";
jgillich = "Jakob Gillich <jakob@gillich.me>";
@ -257,12 +269,14 @@
joelmo = "Joel Moberg <joel.moberg@gmail.com>";
joelteon = "Joel Taylor <me@joelt.io>";
johbo = "Johannes Bornhold <johannes@bornhold.name>";
johnramsden = "John Ramsden <johnramsden@riseup.net>";
joko = "Ioannis Koutras <ioannis.koutras@gmail.com>";
jonafato = "Jon Banafato <jon@jonafato.com>";
jpbernardy = "Jean-Philippe Bernardy <jeanphilippe.bernardy@gmail.com>";
jpierre03 = "Jean-Pierre PRUNARET <nix@prunetwork.fr>";
jpotier = "Martin Potier <jpo.contributes.to.nixos@marvid.fr>";
jraygauthier = "Raymond Gauthier <jraygauthier@gmail.com>";
jtojnar = "Jan Tojnar <jtojnar@gmail.com>";
juliendehos = "Julien Dehos <dehos@lisic.univ-littoral.fr>";
jwiegley = "John Wiegley <johnw@newartisans.com>";
jwilberding = "Jordan Wilberding <jwilberding@afiniate.com>";
@ -308,6 +322,7 @@
luispedro = "Luis Pedro Coelho <luis@luispedro.org>";
lukego = "Luke Gorrie <luke@snabb.co>";
lw = "Sergey Sofeychuk <lw@fmap.me>";
lyt = "Tim Liou <wheatdoge@gmail.com>";
m3tti = "Mathaeus Sander <mathaeus.peter.sander@gmail.com>";
ma27 = "Maximilian Bosch <maximilian@mbosch.me>";
madjar = "Georges Dubus <georges.dubus@compiletoi.net>";
@ -373,11 +388,12 @@
nand0p = "Fernando Jose Pando <nando@hex7.com>";
Nate-Devv = "Nathan Moore <natedevv@gmail.com>";
nathan-gs = "Nathan Bijnens <nathan@nathan.gs>";
nckx = "Tobias Geerinckx-Rice <tobias.geerinckx.rice@gmail.com>";
nckx = "Tobias Geerinckx-Rice <github@tobias.gr>";
ndowens = "Nathan Owens <ndowens04@gmail.com>";
neeasade = "Nathan Isom <nathanisom27@gmail.com>";
nequissimus = "Tim Steinbach <tim@nequissimus.com>";
nfjinjing = "Jinjing Wang <nfjinjing@gmail.com>";
nh2 = "Niklas Hambüchen <mail@nh2.me>";
nhooyr = "Anmol Sethi <anmol@aubble.com>";
nickhu = "Nick Hu <me@nickhu.co.uk>";
nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>";
@ -397,6 +413,7 @@
okasu = "Okasu <oka.sux@gmail.com>";
olcai = "Erik Timan <dev@timan.info>";
olejorgenb = "Ole Jørgen Brønner <olejorgenb@yahoo.no>";
olynch = "Owen Lynch <owen@olynch.me>";
orbekk = "KJ Ørbekk <kjetil.orbekk@gmail.com>";
orbitz = "Malcolm Matalka <mmatalka@gmail.com>";
orivej = "Orivej Desh <orivej@gmx.fr>";
@ -467,6 +484,7 @@
rob = "Rob Vermaas <rob.vermaas@gmail.com>";
robberer = "Longrin Wischnewski <robberer@freakmail.de>";
robbinch = "Robbin C. <robbinch33@gmail.com>";
roberth = "Robert Hensing <nixpkgs@roberthensing.nl>";
robgssp = "Rob Glossop <robgssp@gmail.com>";
roblabla = "Robin Lambertz <robinlambertz+dev@gmail.com>";
roconnor = "Russell O'Connor <roconnor@theorem.ca>";
@ -477,6 +495,7 @@
rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>";
rvlander = "Gaëtan André <rvlander@gaetanandre.eu>";
rvolosatovs = "Roman Volosatovs <rvolosatovs@riseup.net>";
ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>";
ryansydnor = "Ryan Sydnor <ryan.t.sydnor@gmail.com>";
ryantm = "Ryan Mulligan <ryan@ryantm.com>";
@ -524,6 +543,7 @@
steveej = "Stefan Junker <mail@stefanjunker.de>";
SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>";
swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>";
swflint = "Samuel W. Flint <swflint@flintfam.org>";
swistak35 = "Rafał Łasocha <me@swistak35.com>";
szczyp = "Szczyp <qb@szczyp.com>";
sztupi = "Attila Sztupak <attila.sztupak@gmail.com>";
@ -546,18 +566,22 @@
tohl = "Tomas Hlavaty <tom@logand.com>";
tokudan = "Daniel Frank <git@danielfrank.net>";
tomberek = "Thomas Bereknyei <tomberek@gmail.com>";
tomsmeets = "Tom Smeets <tom@tsmeets.nl>";
travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>";
trevorj = "Trevor Joynson <nix@trevor.joynson.io>";
trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>";
tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>";
ttuegel = "Thomas Tuegel <ttuegel@mailbox.org>";
tv = "Tomislav Viljetić <tv@shackspace.de>";
tvestelind = "Tomas Vestelind <tomas.vestelind@fripost.org>";
tvorog = "Marsel Zaripov <marszaripov@gmail.com>";
tweber = "Thorsten Weber <tw+nixpkgs@360vier.de>";
twey = "James Twey Kay <twey@twey.co.uk>";
uralbash = "Svintsov Dmitry <root@uralbash.ru>";
utdemir = "Utku Demir <me@utdemir.com>";
#urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>"; inactive since 2012
uwap = "uwap <me@uwap.name>";
vaibhavsagar = "Vaibhav Sagar <vaibhavsagar@gmail.com>";
vandenoever = "Jos van den Oever <jos@vandenoever.info>";
vanzef = "Ivan Solyankin <vanzef@gmail.com>";
vbgl = "Vincent Laporte <Vincent.Laporte@gmail.com>";
@ -577,6 +601,7 @@
volth = "Jaroslavas Pocepko <jaroslavas@volth.com>";
vozz = "Oliver Hunt <oliver.huntuk@gmail.com>";
vrthra = "Rahul Gopinath <rahul@gopinath.org>";
vyp = "vyp <elisp.vim@gmail.com>";
wedens = "wedens <kirill.wedens@gmail.com>";
willtim = "Tim Philip Williams <tim.williams.public@gmail.com>";
winden = "Antonio Vargas Gonzalez <windenntw@gmail.com>";
@ -598,6 +623,7 @@
z77z = "Marco Maggesi <maggesi@math.unifi.it>";
zagy = "Christian Zagrodnick <cz@flyingcircus.io>";
zalakain = "Unai Zalakain <contact@unaizalakain.info>";
zarelit = "David Costa <david@zarel.net>";
zauberpony = "Elmar Athmer <elmar@athmer.org>";
zef = "Zef Hemel <zef@zef.me>";
zimbatm = "zimbatm <zimbatm@zimbatm.com>";

View File

@ -98,7 +98,7 @@ rec {
/* Close a set of modules under the imports relation. */
closeModules = modules: args:
let
toClosureList = file: parentKey: imap (n: x:
toClosureList = file: parentKey: imap1 (n: x:
if isAttrs x || isFunction x then
let key = "${parentKey}:anon-${toString n}"; in
unifyModuleSyntax file key (unpackSubmodule (applyIfFunction key) x args)

View File

@ -33,7 +33,7 @@ rec {
concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"
*/
concatImapStrings = f: list: concatStrings (lib.imap f list);
concatImapStrings = f: list: concatStrings (lib.imap1 f list);
/* Place an element between each element of a list
@ -70,7 +70,7 @@ rec {
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2"
*/
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap f list);
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap1 f list);
/* Construct a Unix-style search path consisting of each `subDir`
directory of the given list of packages.

View File

@ -1,20 +1,22 @@
with import ./parse.nix;
with import ../attrsets.nix;
with import ../lists.nix;
rec {
patterns = {
patterns = rec {
"32bit" = { cpu = { bits = 32; }; };
"64bit" = { cpu = { bits = 64; }; };
i686 = { cpu = cpuTypes.i686; };
x86_64 = { cpu = cpuTypes.x86_64; };
PowerPC = { cpu = cpuTypes.powerpc; };
x86 = { cpu = { family = "x86"; }; };
Arm = { cpu = { family = "arm"; }; };
Mips = { cpu = { family = "mips"; }; };
BigEndian = { cpu = { significantByte = significantBytes.bigEndian; }; };
LittleEndian = { cpu = { significantByte = significantBytes.littleEndian; }; };
Unix = { kernel = { families = { inherit (kernelFamilies) unix; }; }; };
BSD = { kernel = { families = { inherit (kernelFamilies) bsd; }; }; };
Unix = [ BSD Darwin Linux SunOS Hurd Cygwin ];
Darwin = { kernel = kernels.darwin; };
Linux = { kernel = kernels.linux; };
@ -27,11 +29,15 @@ rec {
Cygwin = { kernel = kernels.windows; abi = abis.cygnus; };
MinGW = { kernel = kernels.windows; abi = abis.gnu; };
Arm32 = recursiveUpdate patterns.Arm patterns."32bit";
Arm64 = recursiveUpdate patterns.Arm patterns."64bit";
Arm32 = recursiveUpdate Arm patterns."32bit";
Arm64 = recursiveUpdate Arm patterns."64bit";
};
matchAnyAttrs = patterns:
if builtins.isList patterns then attrs: any (pattern: matchAttrs pattern attrs) patterns
else matchAttrs patterns;
predicates = mapAttrs'
(name: value: nameValuePair ("is" + name) (matchAttrs value))
(name: value: nameValuePair ("is" + name) (matchAnyAttrs value))
patterns;
}

View File

@ -44,7 +44,7 @@ rec {
i686 = { bits = 32; significantByte = littleEndian; family = "x86"; };
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; };
mips64el = { bits = 32; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "powerpc"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "power"; };
};
isVendor = isType "vendor";
@ -68,21 +68,20 @@ rec {
isKernelFamily = isType "kernel-family";
kernelFamilies = setTypes "kernel-family" {
bsd = {};
unix = {};
};
isKernel = x: isType "kernel" x;
kernels = with execFormats; with kernelFamilies; setTypesAssert "kernel"
(x: isExecFormat x.execFormat && all isKernelFamily (attrValues x.families))
{
darwin = { execFormat = macho; families = { inherit unix; }; };
freebsd = { execFormat = elf; families = { inherit unix bsd; }; };
hurd = { execFormat = elf; families = { inherit unix; }; };
linux = { execFormat = elf; families = { inherit unix; }; };
netbsd = { execFormat = elf; families = { inherit unix bsd; }; };
none = { execFormat = unknown; families = { inherit unix; }; };
openbsd = { execFormat = elf; families = { inherit unix bsd; }; };
solaris = { execFormat = elf; families = { inherit unix; }; };
darwin = { execFormat = macho; families = { }; };
freebsd = { execFormat = elf; families = { inherit bsd; }; };
hurd = { execFormat = elf; families = { }; };
linux = { execFormat = elf; families = { }; };
netbsd = { execFormat = elf; families = { inherit bsd; }; };
none = { execFormat = unknown; families = { }; };
openbsd = { execFormat = elf; families = { inherit bsd; }; };
solaris = { execFormat = elf; families = { }; };
windows = { execFormat = pe; families = { }; };
} // { # aliases
# TODO(@Ericson2314): Handle these Darwin version suffixes more generally.
@ -164,7 +163,7 @@ rec {
mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s));
doubleFromSystem = { cpu, vendor, kernel, abi, ... }:
if vendor == kernels.windows && abi == abis.cygnus
if abi == abis.cygnus
then "${cpu.name}-cygwin"
else "${cpu.name}-${kernel.name}";

View File

@ -285,6 +285,38 @@ runTests {
expected = builtins.toJSON val;
};
testToPretty = {
expr = mapAttrs (const (generators.toPretty {})) rec {
int = 42;
bool = true;
string = "fnord";
null_ = null;
function = x: x;
functionArgs = { arg ? 4, foo }: arg;
list = [ 3 4 function [ false ] ];
attrs = { foo = null; "foo bar" = "baz"; };
drv = derivation { name = "test"; system = builtins.currentSystem; };
};
expected = rec {
int = "42";
bool = "true";
string = "\"fnord\"";
null_ = "null";
function = "<λ>";
functionArgs = "<λ:{(arg),foo}>";
list = "[ 3 4 ${function} [ false ] ]";
attrs = "{ \"foo\" = null; \"foo bar\" = \"baz\"; }";
drv = "<δ>";
};
};
testToPrettyAllowPrettyValues = {
expr = generators.toPretty { allowPrettyValues = true; }
{ __pretty = v: "«" + v + "»"; val = "foo"; };
expected = "«foo»";
};
# MISC
testOverridableDelayableArgsTest = {

View File

@ -179,9 +179,9 @@ rec {
description = "list of ${elemType.description}s";
check = isList;
merge = loc: defs:
map (x: x.value) (filter (x: x ? value) (concatLists (imap (n: def:
map (x: x.value) (filter (x: x ? value) (concatLists (imap1 (n: def:
if isList def.value then
imap (m: def':
imap1 (m: def':
(mergeDefinitions
(loc ++ ["[definition ${toString n}-entry ${toString m}]"])
elemType
@ -220,7 +220,7 @@ rec {
if isList def.value then
{ inherit (def) file;
value = listToAttrs (
imap (elemIdx: elem:
imap1 (elemIdx: elem:
{ name = elem.name or "unnamed-${toString defIdx}.${toString elemIdx}";
value = elem;
}) def.value);
@ -233,7 +233,7 @@ rec {
name = "loaOf";
description = "list or attribute set of ${elemType.description}s";
check = x: isList x || isAttrs x;
merge = loc: defs: attrOnly.merge loc (imap convertIfList defs);
merge = loc: defs: attrOnly.merge loc (imap1 convertIfList defs);
getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name?>"]);
getSubModules = elemType.getSubModules;
substSubModules = m: loaOf (elemType.substSubModules m);

View File

@ -6,7 +6,7 @@ GNOME_FTP="ftp.gnome.org/pub/GNOME/sources"
# projects that don't follow the GNOME major versioning, or that we don't want to
# programmatically update
NO_GNOME_MAJOR="gtkhtml gdm"
NO_GNOME_MAJOR="ghex gtkhtml gdm"
usage() {
echo "Usage: $0 gnome_dir <show project>|<update project>|<update-all> [major.minor]" >&2

View File

@ -91,7 +91,9 @@ def _get_latest_version_pypi(package, extension):
if release['filename'].endswith(extension):
# TODO: In case of wheel we need to do further checks!
sha256 = release['digests']['sha256']
break
else:
sha256 = None
return version, sha256

View File

@ -65,7 +65,7 @@ let
chmod -R u+w .
ln -s ${modulesDoc} configuration/modules.xml
ln -s ${optionsDocBook} options-db.xml
echo "${version}" > version
printf "%s" "${version}" > version
'';
toc = builtins.toFile "toc.xml"
@ -94,25 +94,43 @@ let
"--stringparam chunk.toc ${toc}"
];
manual-combined = runCommand "nixos-manual-combined"
{ inherit sources;
buildInputs = [ libxml2 libxslt ];
meta.description = "The NixOS manual as plain docbook XML";
}
''
${copySources}
xmllint --xinclude --output ./manual-combined.xml ./manual.xml
xmllint --xinclude --noxincludenode \
--output ./man-pages-combined.xml ./man-pages.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual-combined.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
man-pages-combined.xml
mkdir $out
cp manual-combined.xml $out/
cp man-pages-combined.xml $out/
'';
olinkDB = runCommand "manual-olinkdb"
{ inherit sources;
buildInputs = [ libxml2 libxslt ];
}
''
${copySources}
xsltproc \
${manualXsltprocOptions} \
--stringparam collect.xref.targets only \
--stringparam targets.filename "$out/manual.db" \
--nonet --xinclude \
--nonet \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
./manual.xml
# Check the validity of the man pages sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
./man-pages.xml
${manual-combined}/manual-combined.xml
cat > "$out/olinkdb.xml" <<EOF
<?xml version="1.0" encoding="utf-8"?>
@ -158,21 +176,15 @@ in rec {
allowedReferences = ["out"];
}
''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the HTML manual.
dst=$out/share/doc/nixos
mkdir -p $dst
xsltproc \
${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/ \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl ./manual.xml
--nonet --output $dst/ \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/images/callouts
cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/
@ -190,13 +202,6 @@ in rec {
buildInputs = [ libxml2 libxslt zip ];
}
''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the epub manual.
dst=$out/share/doc/nixos
@ -204,10 +209,11 @@ in rec {
${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/epub/ \
${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl ./manual.xml
${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/epub/OEBPS/images/callouts
cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts
cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts # */
echo "application/epub+zip" > mimetype
manual="$dst/nixos-manual.epub"
zip -0Xq "$manual" mimetype
@ -227,23 +233,16 @@ in rec {
allowedReferences = ["out"];
}
''
${copySources}
# Check the validity of the man pages sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
./man-pages.xml
# Generate manpages.
mkdir -p $out/share/man
xsltproc --nonet --xinclude \
xsltproc --nonet \
--param man.output.in.separate.dir 1 \
--param man.output.base.dir "'$out/share/man/'" \
--param man.endnotes.are.numbered 0 \
--param man.break.after.slash 1 \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \
./man-pages.xml
${manual-combined}/man-pages-combined.xml
'';
}

View File

@ -18,7 +18,7 @@
<para>If you encounter problems, please report them on the
<literal
xlink:href="http://lists.science.uu.nl/mailman/listinfo/nix-dev">nix-dev@lists.science.uu.nl</literal>
xlink:href="https://groups.google.com/forum/#!forum/nix-devel">nix-devel</literal>
mailing list or on the <link
xlink:href="irc://irc.freenode.net/#nixos">
<literal>#nixos</literal> channel on Freenode</link>. Bugs should

View File

@ -28,7 +28,7 @@ has the following highlights:</para>
since version 0.0 as well as the most recent <link
xlink:href="http://www.stackage.org/">Stackage Nightly</link>
snapshot. The announcement <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2015-September/018138.html">&quot;Full
xlink:href="https://nixos.org/nix-dev/2015-September/018138.html">&quot;Full
Stackage Support in Nixpkgs&quot;</link> gives additional
details.</para>
</listitem>

View File

@ -78,13 +78,13 @@ following incompatible changes:</para>
our package set is loosely based on the latest available LTS release, i.e.
LTS 7.x at the time of this writing. New releases of NixOS and Nixpkgs will
drop those old names entirely. <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2016-June/020585.html">The
xlink:href="https://nixos.org/nix-dev/2016-June/020585.html">The
motivation for this change</link> has been discussed at length on the
<literal>nix-dev</literal> mailing list and in <link
xlink:href="https://github.com/NixOS/nixpkgs/issues/14897">Github issue
#14897</link>. Development strategies for Haskell hackers who want to rely
on Nix and NixOS have been described in <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2016-June/020642.html">another
xlink:href="https://nixos.org/nix-dev/2016-June/020642.html">another
nix-dev article</link>.</para>
</listitem>

View File

@ -315,7 +315,7 @@ following incompatible changes:</para>
let
pkgs = import &lt;nixpkgs&gt; {};
in
import pkgs.path { overlays = [(self: super: ...)] }
import pkgs.path { overlays = [(self: super: ...)]; }
</programlisting>
</para>

View File

@ -85,6 +85,10 @@ rmdir /var/lib/ipfs/.ipfs
</para>
</listitem>
<listitem>
<para>
The following changes apply if the <literal>stateVersion</literal> is changed to 17.09 or higher.
For <literal>stateVersion = "17.03"</literal> or lower, the old behavior is preserved.
</para>
<para>
The <literal>postgres</literal> default version was changed from 9.5 to 9.6.
</para>
@ -94,6 +98,9 @@ rmdir /var/lib/ipfs/.ipfs
<para>
The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example.
</para>
<para>
The <literal>mysql</literal> default <literal>dataDir</literal> has changed from <literal>/var/mysql</literal> to <literal>/var/lib/mysql</literal>.
</para>
</listitem>
<listitem>
<para>
@ -113,9 +120,18 @@ rmdir /var/lib/ipfs/.ipfs
also serve as a SSH agent if <literal>enableSSHSupport</literal> is set.
</para>
</listitem>
<listitem>
<para>
The <literal>services.tinc.networks.&lt;name&gt;.listenAddress</literal>
option had a misleading name that did not correspond to its behaviour. It
now correctly defines the IP address to listen on for incoming
connections. To keep the previous behaviour, use
<literal>services.tinc.networks.&lt;name&gt;.bindToAddress</literal>
instead; see the example after this list and the option descriptions for
more details.
</para>
</listitem>
</itemizedlist>
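<para>
For example, a network that previously relied on
<literal>listenAddress</literal> for binding can keep its behaviour with a
change along these lines (a minimal sketch; the network name
<literal>myvpn</literal> and the address are placeholders, not values from
this release):
<programlisting>
services.tinc.networks.myvpn = {
  # previously: listenAddress = "192.0.2.1";
  bindToAddress = "192.0.2.1";
};
</programlisting>
</para>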
<para>Other notable improvements:</para>
<itemizedlist>

View File

@ -219,8 +219,8 @@ sub waitForMonitorPrompt {
sub retry {
my ($coderef) = @_;
my $n;
for ($n = 0; $n < 900; $n++) {
return if &$coderef;
for ($n = 899; $n >=0; $n--) {
return if &$coderef($n);
sleep 1;
}
die "action timed out after $n seconds";
@ -518,6 +518,12 @@ sub waitUntilTTYMatches {
$self->nest("waiting for $regexp to appear on tty $tty", sub {
retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on TTY$tty, which currently contains:");
$self->log($self->getTTYText($tty));
}
return 1 if $self->getTTYText($tty) =~ /$regexp/;
}
});
@ -566,6 +572,12 @@ sub waitForText {
my ($self, $regexp) = @_;
$self->nest("waiting for $regexp to appear on the screen", sub {
retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the screen, which currently contains:");
$self->log($self->getScreenText);
}
return 1 if $self->getScreenText =~ /$regexp/;
}
});
@ -600,6 +612,13 @@ sub waitForWindow {
$self->nest("waiting for a window to appear", sub {
retry sub {
my @names = $self->getWindowNames;
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the the window list, which currently contains:");
$self->log(join(", ", @names));
}
foreach my $n (@names) {
return 1 if $n =~ /$regexp/;
}

View File

@ -19,7 +19,6 @@ let
bind_policy ${config.users.ldap.bind.policy}
${optionalString config.users.ldap.useTLS ''
ssl start_tls
tls_checkpeer no
''}
${optionalString (config.users.ldap.bind.distinguishedName != "") ''
binddn ${config.users.ldap.bind.distinguishedName}

View File

@ -223,7 +223,9 @@ in
'';
} // optionalAttrs config.services.resolved.enable {
"resolv.conf".source = "/run/systemd/resolve/resolv.conf";
# symlink the static version of resolv.conf as recommended by upstream:
# https://www.freedesktop.org/software/systemd/man/systemd-resolved.html#/etc/resolv.conf
"resolv.conf".source = "${pkgs.systemd}/lib/systemd/resolv.conf";
} // optionalAttrs (config.services.resolved.enable && dnsmasqResolve) {
"dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf";
};

View File

@ -6,24 +6,29 @@ with lib;
let
inherit (config.services.avahi) nssmdns;
inherit (config.services.samba) nsswins;
ldap = (config.users.ldap.enable && config.users.ldap.nsswitch);
sssd = config.services.sssd.enable;
resolved = config.services.resolved.enable;
# Only with nscd up and running can we load NSS modules that are not built into the C library.
canLoadExternalModules = config.services.nscd.enable;
myhostname = canLoadExternalModules;
mymachines = canLoadExternalModules;
nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
nsswins = canLoadExternalModules && config.services.samba.nsswins;
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
sssd = canLoadExternalModules && config.services.sssd.enable;
resolved = canLoadExternalModules && config.services.resolved.enable;
hostArray = [ "files" "mymachines" ]
hostArray = [ "files" ]
++ optionals mymachines [ "mymachines" ]
++ optionals nssmdns [ "mdns_minimal [!UNAVAIL=return]" ]
++ optionals nsswins [ "wins" ]
++ optionals resolved ["resolv [!UNAVAIL=return]"]
++ optionals resolved ["resolve [!UNAVAIL=return]"]
++ [ "dns" ]
++ optionals nssmdns [ "mdns" ]
++ ["myhostname" ];
++ optionals myhostname ["myhostname" ];
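# For illustration: with nscd enabled and avahi/samba/resolved disabled, the
# resulting hosts entry is roughly
#   files mymachines dns myhostname
# whereas without nscd it collapses to
#   files dns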
passwdArray = [ "files" ]
++ optional sssd "sss"
++ optionals ldap [ "ldap" ]
++ [ "mymachines" ];
++ optionals mymachines [ "mymachines" ];
shadowArray = [ "files" ]
++ optional sssd "sss"
@ -36,6 +41,7 @@ in {
options = {
# NSS modules. Hacky!
# Only works with nscd!
system.nssModules = mkOption {
type = types.listOf types.path;
internal = true;
@ -55,6 +61,18 @@ in {
};
config = {
assertions = [
{
# Generic catch in case a NixOS module adds to nssModules without
# preventing it via a more specific assertion and message.
assertion = config.system.nssModules.path != "" -> canLoadExternalModules;
message = "Loading NSS modules from path ${config.system.nssModules.path} requires nscd to be enabled.";
}
{
# resolved does not add to nssModules itself, so it needs an extra assertion
assertion = resolved -> canLoadExternalModules;
message = "Loading systemd-resolved's nss-resolve NSS module requires nscd to be enabled.";
}
];
# Name Service Switch configuration file. Required by the C
# library. !!! Factor out the mdns stuff. The avahi module
@ -78,7 +96,7 @@ in {
# configured IP addresses, or ::1 and 127.0.0.2 as
# fallbacks. Systemd also provides nss-mymachines to return IP
# addresses of local containers.
system.nssModules = [ config.systemd.package.out ];
system.nssModules = optionals canLoadExternalModules [ config.systemd.package.out ];
};
}

View File

@ -6,6 +6,7 @@ with lib;
let
cfg = config.hardware.pulseaudio;
alsaCfg = config.sound;
systemWide = cfg.enable && cfg.systemWide;
nonSystemWide = cfg.enable && !cfg.systemWide;
@ -76,6 +77,7 @@ let
ctl.!default {
type pulse
}
${alsaCfg.extraConfig}
'');
in {

View File

@ -115,7 +115,6 @@ in
"/share/mime"
"/share/nano"
"/share/org"
"/share/terminfo"
"/share/themes"
"/share/vim-plugins"
"/share/vulkan"

View File

@ -0,0 +1,33 @@
# This module manages the terminfo database
# and its integration in the system.
{ config, ... }:
{
config = {
environment.pathsToLink = [
"/share/terminfo"
];
environment.etc."terminfo" = {
source = "${config.system.path}/share/terminfo";
};
environment.profileRelativeEnvVars = {
TERMINFO_DIRS = [ "/share/terminfo" ];
};
environment.extraInit = ''
# reset TERM with new TERMINFO available (if any)
export TERM=$TERM
'';
security.sudo.extraConfig = ''
# Keep terminfo database for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
'';
};
}

View File

@ -1,5 +1,5 @@
{
x86_64-linux = "/nix/store/crqd5wmrqipl4n1fcm5kkc1zg4sj80js-nix-1.11.11";
i686-linux = "/nix/store/wsjn14xp5ja509d4dxb1c78zhirw0b5x-nix-1.11.11";
x86_64-darwin = "/nix/store/zqkqnhk85g2shxlpb04y72h1i3db3gpl-nix-1.11.11";
x86_64-linux = "/nix/store/avwiw7hb1qckag864sc6ixfxr8qmf94w-nix-1.11.13";
i686-linux = "/nix/store/8wv3ms0afw95hzsz4lxzv0nj4w3614z9-nix-1.11.13";
x86_64-darwin = "/nix/store/z21lvakv1l7lhasmv5fvaz8mlzxia8k9-nix-1.11.13";
}

View File

@ -140,7 +140,7 @@ channel_closure="$tmpdir/channel.closure"
nix-store --export $channel_root > $channel_closure
# Populate the target root directory with the basics
@prepare_root@/bin/nixos-prepare-root $mountPoint $channel_root $system_root @nixClosure@ $system_closure $channel_closure
@prepare_root@/bin/nixos-prepare-root "$mountPoint" "$channel_root" "$system_root" @nixClosure@ "$system_closure" "$channel_closure"
# nixos-prepare-root doesn't currently do anything with file ownership, so we set it up here instead
chown @root_uid@:@nixbld_gid@ $mountPoint/nix/store

View File

@ -250,7 +250,7 @@ trap cleanup EXIT
# If --repair is given, don't try to use the Nix daemon, because the
# flag can only be used directly.
if [ -z "$repair" ] && systemctl show nix-daemon.socket nix-daemon.service | grep -q ActiveState=active; then
export NIX_REMOTE=${NIX_REMOTE:-daemon}
export NIX_REMOTE=${NIX_REMOTE-daemon}
fi

View File

@ -139,6 +139,7 @@
btsync = 113;
minecraft = 114;
#monetdb = 115; # unused (not packaged), removed 2016-09-19
vault = 115;
rippled = 116;
murmur = 117;
foundationdb = 118;
@ -166,7 +167,7 @@
dnsmasq = 141;
uhub = 142;
yandexdisk = 143;
collectd = 144;
#collectd = 144; #unused
consul = 145;
mailpile = 146;
redmine = 147;
@ -295,6 +296,7 @@
aria2 = 277;
clickhouse = 278;
rslsync = 279;
minio = 280;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -414,6 +416,7 @@
btsync = 113;
#minecraft = 114; # unused
#monetdb = 115; # unused (not packaged), removed 2016-09-19
vault = 115;
#ripped = 116; # unused
#murmur = 117; # unused
foundationdb = 118;
@ -559,6 +562,7 @@
aria2 = 277;
clickhouse = 278;
rslsync = 279;
minio = 280;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View File

@ -17,7 +17,7 @@ let
# }
merge = loc: defs:
zipAttrs
(flatten (imap (n: def: imap (m: def':
(flatten (imap1 (n: def: imap1 (m: def':
maintainer.merge (loc ++ ["[${toString n}-${toString m}]"])
[{ inherit (def) file; value = def'; }]) def.value) defs));
};

View File

@ -21,6 +21,7 @@
./config/sysctl.nix
./config/system-environment.nix
./config/system-path.nix
./config/terminfo.nix
./config/timezone.nix
./config/unix-odbc-drivers.nix
./config/users-groups.nix
@ -115,6 +116,7 @@
./security/apparmor.nix
./security/apparmor-suid.nix
./security/audit.nix
./security/auditd.nix
./security/ca.nix
./security/chromium-suid-sandbox.nix
./security/dhparams.nix
@ -235,16 +237,18 @@
./services/hardware/udisks2.nix
./services/hardware/upower.nix
./services/hardware/thermald.nix
./services/logging/SystemdJournal2Gelf.nix
./services/logging/awstats.nix
./services/logging/fluentd.nix
./services/logging/graylog.nix
./services/logging/heartbeat.nix
./services/logging/journalbeat.nix
./services/logging/journalwatch.nix
./services/logging/klogd.nix
./services/logging/logcheck.nix
./services/logging/logrotate.nix
./services/logging/logstash.nix
./services/logging/rsyslogd.nix
./services/logging/SystemdJournal2Gelf.nix
./services/logging/syslog-ng.nix
./services/logging/syslogd.nix
./services/mail/dovecot.nix
@ -252,6 +256,7 @@
./services/mail/exim.nix
./services/mail/freepops.nix
./services/mail/mail.nix
./services/mail/mailhog.nix
./services/mail/mlmmj.nix
./services/mail/offlineimap.nix
./services/mail/opendkim.nix
@ -322,6 +327,7 @@
./services/misc/ripple-data-api.nix
./services/misc/rogue.nix
./services/misc/siproxd.nix
./services/misc/snapper.nix
./services/misc/sonarr.nix
./services/misc/spice-vdagentd.nix
./services/misc/ssm-agent.nix
@ -554,12 +560,14 @@
./services/security/tor.nix
./services/security/torify.nix
./services/security/torsocks.nix
./services/security/vault.nix
./services/system/cgmanager.nix
./services/system/cloud-init.nix
./services/system/dbus.nix
./services/system/earlyoom.nix
./services/system/kerberos.nix
./services/system/nscd.nix
./services/system/saslauthd.nix
./services/system/uptimed.nix
./services/torrent/deluge.nix
./services/torrent/flexget.nix
@ -575,6 +583,7 @@
./services/web-apps/frab.nix
./services/web-apps/mattermost.nix
./services/web-apps/nixbot.nix
./services/web-apps/piwik.nix
./services/web-apps/pump.io.nix
./services/web-apps/tt-rss.nix
./services/web-apps/selfoss.nix
@ -584,9 +593,11 @@
./services/web-servers/fcgiwrap.nix
./services/web-servers/jboss/default.nix
./services/web-servers/lighttpd/cgit.nix
./services/web-servers/lighttpd/collectd.nix
./services/web-servers/lighttpd/default.nix
./services/web-servers/lighttpd/gitweb.nix
./services/web-servers/lighttpd/inginious.nix
./services/web-servers/minio.nix
./services/web-servers/nginx/default.nix
./services/web-servers/phpfpm/default.nix
./services/web-servers/shellinabox.nix

View File

@ -41,6 +41,9 @@
# Virtio (QEMU, KVM etc.) support.
"virtio_net" "virtio_pci" "virtio_blk" "virtio_scsi" "virtio_balloon" "virtio_console"
# VMware support.
"mptspi" "vmw_balloon" "vmwgfx" "vmw_vmci" "vmw_vsock_vmci_transport" "vmxnet3" "vsock"
# Hyper-V support.
"hv_storvsc"

View File

@ -55,7 +55,7 @@ with lib;
# same privileges as it would have inside it. This is particularly
# bad in the common case of running as root within the namespace.
#
# Setting the number of allowed userns to 0 effectively disables
# Setting the number of allowed user namespaces to 0 effectively disables
# the feature at runtime. Attempting to create a user namespace
# with unshare will then fail with "no space left on device".
boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0;

View File

@ -6,21 +6,17 @@ with lib;
###### interface
options = {
programs.browserpass = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to install the NativeMessaging configuration for installed browsers.
'';
};
};
programs.browserpass.enable = mkEnableOption "the NativeMessaging configuration for Chromium, Chrome, and Vivaldi.";
};
###### implementation
config = mkIf config.programs.browserpass.enable {
environment.systemPackages = [ pkgs.browserpass ];
environment.etc."chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
environment.etc."opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
environment.etc = {
"chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"chromium/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
"opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"opt/chrome/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
};
};
}

View File

@ -34,7 +34,6 @@ in
{ PATH = [ "/bin" ];
INFOPATH = [ "/info" "/share/info" ];
PKG_CONFIG_PATH = [ "/lib/pkgconfig" ];
TERMINFO_DIRS = [ "/share/terminfo" ];
PERL5LIB = [ "/lib/perl5/site_perl" ];
KDEDIRS = [ "" ];
STRIGI_PLUGIN_PATH = [ "/lib/strigi/" ];
@ -50,9 +49,6 @@ in
environment.extraInit =
''
# reset TERM with new TERMINFO available (if any)
export TERM=$TERM
unset ASPELL_CONF
for i in ${concatStringsSep " " (reverseList cfg.profiles)} ; do
if [ -d "$i/lib/aspell" ]; then

View File

@ -55,79 +55,24 @@ in
};
config = mkIf cfg.agent.enable {
systemd.user.services.gpg-agent = {
serviceConfig = {
ExecStart = [
""
("${pkgs.gnupg}/bin/gpg-agent --supervised "
+ optionalString cfg.agent.enableSSHSupport "--enable-ssh-support")
];
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload gpg-agent";
};
};
systemd.user.sockets.gpg-agent = {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent" ];
socketConfig = {
FileDescriptorName = "std";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-ssh = mkIf cfg.agent.enableSSHSupport {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.ssh" ];
socketConfig = {
FileDescriptorName = "ssh";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-extra = mkIf cfg.agent.enableExtraSocket {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.extra" ];
socketConfig = {
FileDescriptorName = "extra";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-browser = mkIf cfg.agent.enableBrowserSocket {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.browser" ];
socketConfig = {
FileDescriptorName = "browser";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.services.dirmngr = {
requires = [ "dirmngr.socket" ];
after = [ "dirmngr.socket" ];
unitConfig = {
RefuseManualStart = "true";
};
serviceConfig = {
ExecStart = "${pkgs.gnupg}/bin/dirmngr --supervised";
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload dirmngr";
};
};
systemd.user.sockets.dirmngr = {
systemd.user.sockets.dirmngr = mkIf cfg.dirmngr.enable {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.dirmngr" ];
socketConfig = {
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.packages = [ pkgs.gnupg ];

View File

@ -0,0 +1,37 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.nylas-mail;
defaultUser = "nylas-mail";
in {
###### interface
options = {
services.nylas-mail = {
enable = mkEnableOption ''
nylas-mail - Open-source mail client built on the modern web with Electron, React, and Flux
'';
gnome3-keyring = mkOption {
type = types.bool;
default = true;
description = "Enable gnome3 keyring for nylas-mail.";
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.nylas-mail-bin ];
services.gnome3.gnome-keyring = mkIf cfg.gnome3-keyring {
enable = true;
};
};
}

View File

@ -42,6 +42,10 @@ in
};
config = mkIf cfg.enable {
# Prevent zsh from overwriting oh-my-zsh's prompt
programs.zsh.promptInit = mkDefault "";
environment.systemPackages = with pkgs; [ oh-my-zsh ];
programs.zsh.interactiveShellInit = with pkgs; with builtins; ''

View File

@ -97,45 +97,6 @@ in
config = mkIf cfg.enable {
programs.zsh = {
shellInit = ''
. ${config.system.build.setEnvironment}
${cfge.shellInit}
'';
loginShellInit = cfge.loginShellInit;
interactiveShellInit = ''
# history defaults
SAVEHIST=2000
HISTSIZE=2000
HISTFILE=$HOME/.zsh_history
setopt HIST_IGNORE_DUPS SHARE_HISTORY HIST_FCNTL_LOCK
# Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions)
done
${if cfg.enableCompletion then "autoload -U compinit && compinit" else ""}
${optionalString (cfg.enableAutosuggestions)
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
${zshAliases}
${cfg.promptInit}
${cfge.interactiveShellInit}
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
'';
};
environment.etc."zshenv".text =
''
# /etc/zshenv: DO NOT EDIT -- this file has been generated automatically.
@ -146,6 +107,10 @@ in
if [ -n "$__ETC_ZSHENV_SOURCED" ]; then return; fi
export __ETC_ZSHENV_SOURCED=1
. ${config.system.build.setEnvironment}
${cfge.shellInit}
${cfg.shellInit}
# Read system-wide modifications.
@ -163,6 +128,8 @@ in
if [ -n "$__ETC_ZPROFILE_SOURCED" ]; then return; fi
__ETC_ZPROFILE_SOURCED=1
${cfge.loginShellInit}
${cfg.loginShellInit}
# Read system-wide modifications.
@ -182,8 +149,34 @@ in
. /etc/zinputrc
# history defaults
SAVEHIST=2000
HISTSIZE=2000
HISTFILE=$HOME/.zsh_history
setopt HIST_IGNORE_DUPS SHARE_HISTORY HIST_FCNTL_LOCK
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
${optionalString cfg.enableCompletion "autoload -U compinit && compinit"}
${optionalString (cfg.enableAutosuggestions)
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
${zshAliases}
${cfge.interactiveShellInit}
${cfg.interactiveShellInit}
${cfg.promptInit}
# Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions $p/share/zsh/vendor-completions)
done
# Read system-wide modifications.
if test -f /etc/zshrc.local; then
. /etc/zshrc.local

View File

@ -0,0 +1,26 @@
{ config, lib, pkgs, ... }:
with lib;
{
options.security.auditd.enable = mkEnableOption "the Linux Audit daemon";
config = mkIf config.security.auditd.enable {
systemd.services.auditd = {
description = "Linux Audit daemon";
wantedBy = [ "basic.target" ];
unitConfig = {
ConditionVirtualization = "!container";
ConditionSecurity = [ "audit" ];
};
path = [ pkgs.audit ];
serviceConfig = {
ExecStartPre="${pkgs.coreutils}/bin/mkdir -p /var/log/audit";
ExecStart = "${pkgs.audit}/bin/auditd -l -n -s nochange";
};
};
};
}

View File

@ -66,10 +66,6 @@ in
# Don't edit this file. Set the NixOS options security.sudo.configFile
# or security.sudo.extraConfig instead.
# Environment variables to keep for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
# Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic.
Defaults env_keep+=SSH_AUTH_SOCK

View File

@ -171,7 +171,7 @@ in
###### setcap activation script
system.activationScripts.wrappers =
lib.stringAfter [ "users" ]
lib.stringAfter [ "specialfs" "users" ]
''
# Look in the system path and in the default profile for
# programs to be wrapped.

View File

@ -7,6 +7,8 @@ let
inherit (pkgs) alsaUtils;
pulseaudioEnabled = config.hardware.pulseaudio.enable;
in
{
@ -80,7 +82,7 @@ in
environment.systemPackages = [ alsaUtils ];
environment.etc = mkIf (config.sound.extraConfig != "")
environment.etc = mkIf (!pulseaudioEnabled && config.sound.extraConfig != "")
[
{ source = pkgs.writeText "asound.conf" config.sound.extraConfig;
target = "asound.conf";

View File

@ -10,9 +10,11 @@ let
gid = config.ids.gids.mpd;
cfg = config.services.mpd;
playlistDir = "${cfg.dataDir}/playlists";
mpdConf = pkgs.writeText "mpd.conf" ''
music_directory "${cfg.musicDirectory}"
playlist_directory "${cfg.dataDir}/playlists"
playlist_directory "${playlistDir}"
db_file "${cfg.dbFile}"
state_file "${cfg.dataDir}/state"
sticker_file "${cfg.dataDir}/sticker.sql"
@ -42,6 +44,16 @@ in {
'';
};
startWhenNeeded = mkOption {
type = types.bool;
default = false;
description = ''
If set, <command>mpd</command> is socket-activated; that
is, instead of having it permanently running as a daemon,
systemd will start it on the first incoming connection.
'';
};
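# A minimal usage sketch (illustrative only; assumes this is set from a
# host's NixOS configuration):
#
#   services.mpd = {
#     enable = true;
#     startWhenNeeded = true;  # mpd.socket then starts mpd on the first client connection
#   };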
musicDirectory = mkOption {
type = types.path;
default = "${cfg.dataDir}/music";
@ -121,16 +133,42 @@ in {
config = mkIf cfg.enable {
systemd.sockets.mpd = mkIf cfg.startWhenNeeded {
description = "Music Player Daemon Socket";
wantedBy = [ "sockets.target" ];
listenStreams = [
"${optionalString (cfg.network.listenAddress != "any") "${cfg.network.listenAddress}:"}${toString cfg.network.port}"
];
socketConfig = {
Backlog = 5;
KeepAlive = true;
PassCredentials = true;
};
};
systemd.services.mpd = {
after = [ "network.target" "sound.target" ];
description = "Music Player Daemon";
wantedBy = [ "multi-user.target" ];
wantedBy = optional (!cfg.startWhenNeeded) "multi-user.target";
preStart = "mkdir -p ${cfg.dataDir} && chown -R ${cfg.user}:${cfg.group} ${cfg.dataDir}";
preStart = ''
mkdir -p "${cfg.dataDir}" && chown -R ${cfg.user}:${cfg.group} "${cfg.dataDir}"
mkdir -p "${playlistDir}" && chown -R ${cfg.user}:${cfg.group} "${playlistDir}"
'';
serviceConfig = {
User = "${cfg.user}";
PermissionsStartOnly = true;
ExecStart = "${pkgs.mpd}/bin/mpd --no-daemon ${mpdConf}";
Type = "notify";
LimitRTPRIO = 50;
LimitRTTIME = "infinity";
ProtectSystem = true;
NoNewPrivileges = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
ProtectKernelModules = true;
RestrictAddressFamilies = "AF_INET AF_INET6 AF_UNIX AF_NETLINK";
RestrictNamespaces = true;
};
};

View File

@ -44,7 +44,7 @@ let
cniConfig = pkgs.buildEnv {
name = "kubernetes-cni-config";
paths = imap (i: entry:
paths = imap1 (i: entry:
pkgs.writeTextDir "${toString (10+i)}-${entry.type}.conf" (builtins.toJSON entry)
) cfg.kubelet.cni.config;
};

View File

@ -36,9 +36,9 @@ in
package = mkOption {
type = types.package;
default = pkgs.slurm-llnl;
defaultText = "pkgs.slurm-llnl";
example = literalExample "pkgs.slurm-llnl-full";
default = pkgs.slurm;
defaultText = "pkgs.slurm";
example = literalExample "pkgs.slurm-full";
description = ''
The package to use for slurm binaries.
'';

View File

@ -225,11 +225,7 @@ in {
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.home;
Environment = "PYTHONPATH=${cfg.package}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.www}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.waterfall-view}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.console-view}/lib/python2.7/site-packages:${pkgs.python27Packages.future}/lib/python2.7/site-packages:${pkgs.python27Packages.dateutil}/lib/python2.7/site-packages:${pkgs.python27Packages.six}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlalchemy}/lib/python2.7/site-packages:${pkgs.python27Packages.jinja2}/lib/python2.7/site-packages:${pkgs.python27Packages.markupsafe}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlalchemy_migrate}/lib/python2.7/site-packages:${pkgs.python27Packages.tempita}/lib/python2.7/site-packages:${pkgs.python27Packages.decorator}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlparse}/lib/python2.7/site-packages:${pkgs.python27Packages.txaio}/lib/python2.7/site-packages:${pkgs.python27Packages.autobahn}/lib/python2.7/site-packages:${pkgs.python27Packages.pyjwt}/lib/python2.7/site-packages:${pkgs.python27Packages.distro}/lib/python2.7/site-packages:${pkgs.python27Packages.pbr}/lib/python2.7/site-packages:${pkgs.python27Packages.urllib3}/lib/python2.7/site-packages";
# NOTE: call twistd directly with stdout logging for systemd
#ExecStart = "${cfg.package}/bin/buildbot start --nodaemon ${cfg.buildbotDir}";
ExecStart = "${pkgs.python27Packages.twisted}/bin/twistd -n -l - -y ${cfg.buildbotDir}/buildbot.tac";
ExecStart = "${cfg.package}/bin/buildbot start --nodaemon ${cfg.buildbotDir}";
};
};

View File

@ -15,6 +15,23 @@ in
description = "Verbatim config.toml to use";
};
gracefulTermination = mkOption {
default = false;
type = types.bool;
description = ''
Finish all remaining jobs before stopping, restarting or reconfiguring.
If not set, gitlab-runner will stop immediately without waiting for jobs to finish,
which will lead to failed builds.
'';
};
gracefulTimeout = mkOption {
default = "infinity";
type = types.str;
example = "5min 20s";
description = ''Time to wait until a graceful shutdown is turned into a forceful one.'';
};
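# A minimal sketch combining the two options above (values are illustrative,
# not defaults; assumes the usual services.gitlab-runner namespace):
#
#   services.gitlab-runner = {
#     gracefulTermination = true;  # wait for running jobs on stop/restart/reconfigure
#     gracefulTimeout = "30min";   # give up and force-kill after this long
#   };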
workDir = mkOption {
default = "/var/lib/gitlab-runner";
type = types.path;
@ -45,6 +62,11 @@ in
--service gitlab-runner \
--user gitlab-runner \
'';
} // optionalAttrs (cfg.gracefulTermination) {
TimeoutStopSec = "${cfg.gracefulTimeout}";
KillSignal = "SIGQUIT";
KillMode = "process";
};
};

View File

@ -308,6 +308,7 @@ in
requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ];
environment = serverEnv;
restartTriggers = [ hydraConf ];
serviceConfig =
{ ExecStart =
"@${cfg.package}/bin/hydra-server hydra-server -f -h '${cfg.listenHost}' "
@ -324,6 +325,7 @@ in
requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
IN_SYSTEMD = "1"; # to get log severity levels
@ -344,7 +346,8 @@ in
{ wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools ];
path = with pkgs; [ cfg.package nettools jq ];
restartTriggers = [ hydraConf ];
environment = env;
serviceConfig =
{ ExecStart = "@${cfg.package}/bin/hydra-evaluator hydra-evaluator";

View File

@ -20,6 +20,7 @@ let
''
[mysqld]
port = ${toString cfg.port}
${optionalString (cfg.bind != null) "bind-address = ${cfg.bind}" }
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "log-bin=mysql-bin"}
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "server-id = ${toString cfg.replication.serverId}"}
${optionalString (cfg.replication.role == "slave" && !atLeast55)
@ -58,6 +59,13 @@ in
";
};
bind = mkOption {
type = types.nullOr types.str;
default = null;
example = literalExample "0.0.0.0";
description = "Address to bind to. The default it to bind to all addresses";
};
port = mkOption {
type = types.int;
default = 3306;

View File

@ -0,0 +1,110 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.rethinkdb;
rethinkdb = cfg.package;
in
{
###### interface
options = {
services.rethinkdb = {
enable = mkOption {
default = false;
description = "Whether to enable the RethinkDB server.";
};
#package = mkOption {
# default = pkgs.rethinkdb;
# description = "Which RethinkDB derivation to use.";
#};
user = mkOption {
default = "rethinkdb";
description = "User account under which RethinkDB runs.";
};
group = mkOption {
default = "rethinkdb";
description = "Group which rethinkdb user belongs to.";
};
dbpath = mkOption {
default = "/var/db/rethinkdb";
description = "Location where RethinkDB stores its data, 1 data directory per instance.";
};
pidpath = mkOption {
default = "/var/run/rethinkdb";
description = "Location where each instance's pid file is located.";
};
#cfgpath = mkOption {
# default = "/etc/rethinkdb/instances.d";
# description = "Location where RethinkDB stores it config files, 1 config file per instance.";
#};
# TODO: currently not used by our implementation.
#instances = mkOption {
# type = types.attrsOf types.str;
# default = {};
# description = "List of named RethinkDB instances in our cluster.";
#};
};
};
###### implementation
config = mkIf config.services.rethinkdb.enable {
environment.systemPackages = [ rethinkdb ];
systemd.services.rethinkdb = {
description = "RethinkDB server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
# TODO: abstract away 'default', which is a per-instance directory name
# allowing end user of this nix module to provide multiple instances,
# and associated directory per instance
ExecStart = "${rethinkdb}/bin/rethinkdb -d ${cfg.dbpath}/default";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = cfg.user;
Group = cfg.group;
PIDFile = "${cfg.pidpath}/default.pid";
PermissionsStartOnly = true;
};
preStart = ''
if ! test -e ${cfg.dbpath}; then
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}/default
chown -R ${cfg.user}:${cfg.group} ${cfg.dbpath}
fi
if ! test -e "${cfg.pidpath}/default.pid"; then
install -D -o ${cfg.user} -g ${cfg.group} /dev/null "${cfg.pidpath}/default.pid"
fi
'';
};
users.extraUsers.rethinkdb = mkIf (cfg.user == "rethinkdb")
{ name = "rethinkdb";
description = "RethinkDB server user";
};
users.extraGroups = optionalAttrs (cfg.group == "rethinkdb") (singleton
{ name = "rethinkdb";
});
};
}

View File

@ -39,7 +39,7 @@ let
admins = [];
};
serverSettingsFile = pkgs.writeText "server-settings.json" (builtins.toJSON (filterAttrsRecursive (n: v: v != null) serverSettings));
modDir = pkgs.factorio-mkModDirDrv cfg.mods;
modDir = pkgs.factorio-utils.mkModDirDrv cfg.mods;
in
{
options = {

View File

@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.heartbeat;
heartbeatYml = pkgs.writeText "heartbeat.yml" ''
name: ${cfg.name}
tags: ${builtins.toJSON cfg.tags}
${cfg.extraConfig}
'';
in
{
options = {
services.heartbeat = {
enable = mkEnableOption "heartbeat";
name = mkOption {
type = types.str;
default = "heartbeat";
description = "Name of the beat";
};
tags = mkOption {
type = types.listOf types.str;
default = [];
description = "Tags to place on the shipped log messages";
};
stateDir = mkOption {
type = types.str;
default = "/var/lib/heartbeat";
description = "The state directory. heartbeat's own logs and other data are stored here.";
};
extraConfig = mkOption {
type = types.lines;
default = ''
heartbeat.monitors:
- type: http
urls: ["http://localhost:9200"]
schedule: '@every 10s'
'';
description = "Any other configuration options you want to add";
};
};
};
config = mkIf cfg.enable {
systemd.services.heartbeat = with pkgs; {
description = "heartbeat log shipper";
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -p "${cfg.stateDir}"/{data,logs}
chown nobody:nogroup "${cfg.stateDir}"/{data,logs}
'';
serviceConfig = {
User = "nobody";
PermissionsStartOnly = true;
AmbientCapabilities = "cap_net_raw";
ExecStart = "${pkgs.heartbeat}/bin/heartbeat -c \"${heartbeatYml}\" -path.data \"${cfg.stateDir}/data\" -path.logs \"${cfg.stateDir}/logs\"";
};
};
};
}

View File

@ -0,0 +1,246 @@
{ config, lib, pkgs, services, ... }:
with lib;
let
cfg = config.services.journalwatch;
user = "journalwatch";
dataDir = "/var/lib/${user}";
journalwatchConfig = pkgs.writeText "config" (''
# (File Generated by NixOS journalwatch module.)
[DEFAULT]
mail_binary = ${cfg.mailBinary}
priority = ${toString cfg.priority}
mail_from = ${cfg.mailFrom}
''
+ optionalString (cfg.mailTo != null) ''
mail_to = ${cfg.mailTo}
''
+ cfg.extraConfig);
journalwatchPatterns = pkgs.writeText "patterns" ''
# (File Generated by NixOS journalwatch module.)
${mkPatterns cfg.filterBlocks}
'';
# The empty line at the end is needed to separate the blocks.
mkPatterns = filterBlocks: concatStringsSep "\n" (map (block: ''
${block.match}
${block.filters}
'') filterBlocks);
in {
options = {
services.journalwatch = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
If enabled, periodically check the journal with journalwatch and report the results by mail.
'';
};
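# A minimal usage sketch (the mail address and interval are illustrative):
#
#   services.journalwatch = {
#     enable = true;
#     mailTo = "admin@example.org";
#     interval = "daily";
#   };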
priority = mkOption {
type = types.int;
default = 6;
description = ''
Lowest priority of message to be considered.
A value between 7 ("debug"), and 0 ("emerg"). Defaults to 6 ("info").
If you don't care about anything with "info" priority, you can reduce
this to e.g. 5 ("notice") to considerably reduce the amount of
messages without needing many <option>filterBlocks</option>.
'';
};
# HACK: this is a workaround for journalwatch's usage of socket.getfqdn(), which always returns localhost if
# there's an alias for localhost on a separate line in /etc/hosts, or takes ages if it's not present and
# then returns something right-ish in the direction of /etc/hostname. Just bypass it completely.
mailFrom = mkOption {
type = types.str;
default = "journalwatch@${config.networking.hostName}";
description = ''
Mail address to send journalwatch reports from.
'';
};
mailTo = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Mail address to send journalwatch reports to.
'';
};
mailBinary = mkOption {
type = types.path;
default = "/run/wrappers/bin/sendmail";
description = ''
Sendmail-compatible binary to be used to send the messages.
'';
};
extraConfig = mkOption {
type = types.str;
default = "";
description = ''
Extra lines to be added verbatim to the journalwatch/config configuration file.
You can add any commandline argument to the config, without the '--'.
See <literal>journalwatch --help</literal> for all arguments and their description.
'';
};
filterBlocks = mkOption {
type = types.listOf (types.submodule {
options = {
match = mkOption {
type = types.str;
example = "SYSLOG_IDENTIFIER = systemd";
description = ''
Syntax: <literal>field = value</literal>
Specifies the log entry <literal>field</literal> this block should apply to.
If the <literal>field</literal> of a message matches this <literal>value</literal>,
this patternBlock's <option>filters</option> are applied.
If <literal>value</literal> starts and ends with a slash, it is interpreted as
an extended python regular expression, if not, it's an exact match.
The journal fields are explained in systemd.journal-fields(7).
'';
};
filters = mkOption {
type = types.str;
example = ''
(Stopped|Stopping|Starting|Started) .*
(Reached target|Stopped target) .*
'';
description = ''
The filters to apply on all messages which satisfy <option>match</option>.
Any of those messages that match any specified filter will be removed from journalwatch's output.
Each filter is an extended Python regular expression.
You can specify multiple filters and separate them by newlines.
Lines starting with '#' are comments. Inline-comments are not permitted.
'';
};
};
});
example = [
# examples taken from upstream
{
match = "_SYSTEMD_UNIT = systemd-logind.service";
filters = ''
New session [a-z]?\d+ of user \w+\.
Removed session [a-z]?\d+\.
'';
}
{
match = "SYSLOG_IDENTIFIER = /(CROND|crond)/";
filters = ''
pam_unix\(crond:session\): session (opened|closed) for user \w+
\(\w+\) CMD .*
'';
}
];
# another example from upstream.
# very useful on priority = 6, and required as journalwatch throws an error when no pattern is defined at all.
default = [
{
match = "SYSLOG_IDENTIFIER = systemd";
filters = ''
(Stopped|Stopping|Starting|Started) .*
(Created slice|Removed slice) user-\d*\.slice\.
Received SIGRTMIN\+24 from PID .*
(Reached target|Stopped target) .*
Startup finished in \d*ms\.
'';
}
];
description = ''
filterBlocks can be defined to blacklist journal messages which are not errors.
Each block matches on a log entry field, and the filters in that block then are matched
against all messages with a matching log entry field.
All messages whose PRIORITY is at least 6 (INFO) are processed by journalwatch.
If you don't specify any filterBlocks, PRIORITY is reduced to 5 (NOTICE) by default.
All regular expressions are extended Python regular expressions, for details
see: http://doc.pyschools.com/html/regex.html
'';
};
interval = mkOption {
type = types.str;
default = "hourly";
description = ''
How often to run journalwatch.
The format is described in systemd.time(7).
'';
};
accuracy = mkOption {
type = types.str;
default = "10min";
description = ''
The time window around the interval in which the journalwatch run will be scheduled.
The format is described in systemd.time(7).
'';
};
};
};
config = mkIf cfg.enable {
users.extraUsers.${user} = {
isSystemUser = true;
createHome = true;
home = dataDir;
# for journal access
group = "systemd-journal";
};
systemd.services.journalwatch = {
environment = {
XDG_DATA_HOME = "${dataDir}/share";
XDG_CONFIG_HOME = "${dataDir}/config";
};
serviceConfig = {
User = user;
Type = "oneshot";
PermissionsStartOnly = true;
ExecStart = "${pkgs.python3Packages.journalwatch}/bin/journalwatch mail";
# lowest CPU and IO priority, but both still in best-effort class to prevent starvation
Nice=19;
IOSchedulingPriority=7;
};
preStart = ''
chown -R ${user}:systemd-journal ${dataDir}
chmod -R u+rwX,go-w ${dataDir}
mkdir -p ${dataDir}/config/journalwatch
ln -sf ${journalwatchConfig} ${dataDir}/config/journalwatch/config
ln -sf ${journalwatchPatterns} ${dataDir}/config/journalwatch/patterns
'';
};
systemd.timers.journalwatch = {
description = "Periodic journalwatch run";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = cfg.interval;
AccuracySec = cfg.accuracy;
Persistent = true;
};
};
};
meta = {
maintainers = with stdenv.lib.maintainers; [ florianjacob ];
};
}

View File

@ -0,0 +1,43 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mailhog;
in {
###### interface
options = {
services.mailhog = {
enable = mkEnableOption "MailHog";
user = mkOption {
type = types.str;
default = "mailhog";
description = "User account under which mailhog runs.";
};
};
};
###### implementation
config = mkIf cfg.enable {
users.extraUsers.mailhog = {
name = cfg.user;
description = "MailHog service user";
};
systemd.services.mailhog = {
description = "MailHog service";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${pkgs.mailhog}/bin/MailHog";
User = cfg.user;
};
};
};
}

View File

@ -9,7 +9,8 @@ let
group = cfg.group;
setgidGroup = cfg.setgidGroup;
haveAliases = cfg.postmasterAlias != "" || cfg.rootAlias != "" || cfg.extraAliases != "";
haveAliases = cfg.postmasterAlias != "" || cfg.rootAlias != ""
|| cfg.extraAliases != "";
haveTransport = cfg.transport != "";
haveVirtual = cfg.virtual != "";
@ -25,149 +26,275 @@ let
clientRestrictions = concatStringsSep ", " (clientAccess ++ dnsBl);
mainCf =
''
compatibility_level = 9999
mainCf = let
escape = replaceStrings ["$"] ["$$"];
mkList = items: "\n " + concatMapStringsSep "\n " escape items;
mkVal = value:
if isList value then mkList value
else " " + (if value == true then "yes"
else if value == false then "no"
else toString value);
mkEntry = name: value: "${escape name} =${mkVal value}";
in
concatStringsSep "\n" (mapAttrsToList mkEntry (recursiveUpdate defaultConf cfg.config))
+ "\n" + cfg.extraConfig;
mail_owner = ${user}
default_privs = nobody
defaultConf = {
compatibility_level = "9999";
mail_owner = user;
default_privs = "nobody";
# NixOS specific locations
data_directory = /var/lib/postfix/data
queue_directory = /var/lib/postfix/queue
# NixOS specific locations
data_directory = "/var/lib/postfix/data";
queue_directory = "/var/lib/postfix/queue";
# Default location of everything in package
meta_directory = ${pkgs.postfix}/etc/postfix
command_directory = ${pkgs.postfix}/bin
sample_directory = /etc/postfix
newaliases_path = ${pkgs.postfix}/bin/newaliases
mailq_path = ${pkgs.postfix}/bin/mailq
readme_directory = no
sendmail_path = ${pkgs.postfix}/bin/sendmail
daemon_directory = ${pkgs.postfix}/libexec/postfix
manpage_directory = ${pkgs.postfix}/share/man
html_directory = ${pkgs.postfix}/share/postfix/doc/html
shlib_directory = no
# Default location of everything in package
meta_directory = "${pkgs.postfix}/etc/postfix";
command_directory = "${pkgs.postfix}/bin";
sample_directory = "/etc/postfix";
newaliases_path = "${pkgs.postfix}/bin/newaliases";
mailq_path = "${pkgs.postfix}/bin/mailq";
readme_directory = false;
sendmail_path = "${pkgs.postfix}/bin/sendmail";
daemon_directory = "${pkgs.postfix}/libexec/postfix";
manpage_directory = "${pkgs.postfix}/share/man";
html_directory = "${pkgs.postfix}/share/postfix/doc/html";
shlib_directory = false;
relayhost = if cfg.lookupMX || cfg.relayHost == ""
then cfg.relayHost
else "[${cfg.relayHost}]";
mail_spool_directory = "/var/spool/mail/";
setgid_group = setgidGroup;
}
// optionalAttrs config.networking.enableIPv6 { inet_protocols = "all"; }
// optionalAttrs (cfg.networks != null) { mynetworks = cfg.networks; }
// optionalAttrs (cfg.networksStyle != "") { mynetworks_style = cfg.networksStyle; }
// optionalAttrs (cfg.hostname != "") { myhostname = cfg.hostname; }
// optionalAttrs (cfg.domain != "") { mydomain = cfg.domain; }
// optionalAttrs (cfg.origin != "") { myorigin = cfg.origin; }
// optionalAttrs (cfg.destination != null) { mydestination = cfg.destination; }
// optionalAttrs (cfg.relayDomains != null) { relay_domains = cfg.relayDomains; }
// optionalAttrs (cfg.recipientDelimiter != "") { recipient_delimiter = cfg.recipientDelimiter; }
// optionalAttrs haveAliases { alias_maps = "${cfg.aliasMapType}:/etc/postfix/aliases"; }
// optionalAttrs haveTransport { transport_maps = "hash:/etc/postfix/transport"; }
// optionalAttrs haveVirtual { virtual_alias_maps = "${cfg.virtualMapType}:/etc/postfix/virtual"; }
// optionalAttrs (cfg.dnsBlacklists != []) { smtpd_client_restrictions = clientRestrictions; }
// optionalAttrs cfg.enableHeaderChecks { header_checks = "regexp:/etc/postfix/header_checks"; }
// optionalAttrs (cfg.sslCert != "") {
smtp_tls_CAfile = cfg.sslCACert;
smtp_tls_cert_file = cfg.sslCert;
smtp_tls_key_file = cfg.sslKey;
''
+ optionalString config.networking.enableIPv6 ''
inet_protocols = all
''
+ (if cfg.networks != null then
''
mynetworks = ${concatStringsSep ", " cfg.networks}
''
else if cfg.networksStyle != "" then
''
mynetworks_style = ${cfg.networksStyle}
''
else
"")
+ optionalString (cfg.hostname != "") ''
myhostname = ${cfg.hostname}
''
+ optionalString (cfg.domain != "") ''
mydomain = ${cfg.domain}
''
+ optionalString (cfg.origin != "") ''
myorigin = ${cfg.origin}
''
+ optionalString (cfg.destination != null) ''
mydestination = ${concatStringsSep ", " cfg.destination}
''
+ optionalString (cfg.relayDomains != null) ''
relay_domains = ${concatStringsSep ", " cfg.relayDomains}
''
+ ''
relayhost = ${if cfg.lookupMX || cfg.relayHost == "" then
cfg.relayHost
else
"[" + cfg.relayHost + "]"}
smtp_use_tls = true;
mail_spool_directory = /var/spool/mail/
smtpd_tls_CAfile = cfg.sslCACert;
smtpd_tls_cert_file = cfg.sslCert;
smtpd_tls_key_file = cfg.sslKey;
setgid_group = ${setgidGroup}
''
+ optionalString (cfg.sslCert != "") ''
smtpd_use_tls = true;
};
smtp_tls_CAfile = ${cfg.sslCACert}
smtp_tls_cert_file = ${cfg.sslCert}
smtp_tls_key_file = ${cfg.sslKey}
masterCfOptions = { options, config, name, ... }: {
options = {
name = mkOption {
type = types.str;
default = name;
example = "smtp";
description = ''
The name of the service to run. Defaults to the attribute set key.
'';
};
smtp_use_tls = yes
type = mkOption {
type = types.enum [ "inet" "unix" "fifo" "pass" ];
default = "unix";
example = "inet";
description = "The type of the service";
};
smtpd_tls_CAfile = ${cfg.sslCACert}
smtpd_tls_cert_file = ${cfg.sslCert}
smtpd_tls_key_file = ${cfg.sslKey}
private = mkOption {
type = types.bool;
example = false;
description = ''
Whether the service's sockets and storage directory are restricted to
be available only via the mail system. If <literal>null</literal> is
given it uses the postfix default <literal>true</literal>.
'';
};
smtpd_use_tls = yes
''
+ optionalString (cfg.recipientDelimiter != "") ''
recipient_delimiter = ${cfg.recipientDelimiter}
''
+ optionalString haveAliases ''
alias_maps = hash:/etc/postfix/aliases
''
+ optionalString haveTransport ''
transport_maps = hash:/etc/postfix/transport
''
+ optionalString haveVirtual ''
virtual_alias_maps = hash:/etc/postfix/virtual
''
+ optionalString (cfg.dnsBlacklists != []) ''
smtpd_client_restrictions = ${clientRestrictions}
''
+ cfg.extraConfig;
privileged = mkOption {
type = types.bool;
example = true;
description = "";
};
masterCf = ''
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (no) (never) (100)
# ==========================================================================
smtp inet n - n - - smtpd
'' + optionalString cfg.enableSubmission ''
submission inet n - n - - smtpd
${concatStringsSep "\n " (mapAttrsToList (x: y: "-o " + x + "=" + y) cfg.submissionOptions)}
''
+ ''
pickup unix n - n 60 1 pickup
cleanup unix n - n - 0 cleanup
qmgr unix n - n 300 1 qmgr
tlsmgr unix - - n 1000? 1 tlsmgr
rewrite unix - - n - - trivial-rewrite
bounce unix - - n - 0 bounce
defer unix - - n - 0 bounce
trace unix - - n - 0 bounce
verify unix - - n - 1 verify
flush unix n - n 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
''
+ optionalString cfg.enableSmtp ''
smtp unix - - n - - smtp
relay unix - - n - - smtp
-o smtp_fallback_relay=
# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
''
+ ''
showq unix n - n - - showq
error unix - - n - - error
retry unix - - n - - error
discard unix - - n - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - n - - lmtp
anvil unix - - n - 1 anvil
scache unix - - n - 1 scache
${cfg.extraMasterConf}
'';
chroot = mkOption {
type = types.bool;
example = true;
description = ''
Whether the service is chrooted to have access only to the
<option>services.postfix.queueDir</option> and the closure of
store paths specified by the <option>command</option> option.
'';
};
aliases =
wakeup = mkOption {
type = types.int;
example = 60;
description = ''
Automatically wake up the service after the specified number of
seconds. If <literal>0</literal> is given, never wake the service
up.
'';
};
wakeupUnusedComponent = mkOption {
type = types.bool;
example = false;
description = ''
If set to <literal>false</literal> the component will only be woken
up if it is used. This is equivalent to Postfix's notation of adding a
question mark after the wakeup time in
<filename>master.cf</filename>.
'';
};
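# Illustration: a service entry that defines both options, e.g.
#   wakeup = 1000; wakeupUnusedComponent = false;
# ends up with "1000?" in the wakeup column of the generated master.cf,
# as used by the tlsmgr and flush entries further down in this module.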
maxproc = mkOption {
type = types.int;
example = 1;
description = ''
The maximum number of processes to spawn for this service. If the
value is <literal>0</literal> it doesn't have any limit. If
<literal>null</literal> is given it uses the postfix default of
<literal>100</literal>.
'';
};
command = mkOption {
type = types.str;
default = name;
example = "smtpd";
description = ''
A program name specifying a Postfix service/daemon process.
By default it's the attribute <option>name</option>.
'';
};
args = mkOption {
type = types.listOf types.str;
default = [];
example = [ "-o" "smtp_helo_timeout=5" ];
description = ''
Arguments to pass to the <option>command</option>. There is no shell
processing involved and shell syntax is passed verbatim to the
process.
'';
};
rawEntry = mkOption {
type = types.listOf types.str;
default = [];
internal = true;
description = ''
The raw configuration line for the <filename>master.cf</filename>.
'';
};
};
config.rawEntry = let
mkBool = bool: if bool then "y" else "n";
mkArg = arg: "${optionalString (hasPrefix "-" arg) "\n "}${arg}";
maybeOption = fun: option:
if options.${option}.isDefined then fun config.${option} else "-";
# This is special, because we have two options for this value.
wakeup = let
wakeupDefined = options.wakeup.isDefined;
wakeupUCDefined = options.wakeupUnusedComponent.isDefined;
finalValue = toString config.wakeup
+ optionalString (!config.wakeupUnusedComponent) "?";
in if wakeupDefined && wakeupUCDefined then finalValue else "-";
in [
config.name
config.type
(maybeOption mkBool "private")
(maybeOption (b: mkBool (!b)) "privileged")
(maybeOption mkBool "chroot")
wakeup
(maybeOption toString "maxproc")
(config.command + " " + concatMapStringsSep " " mkArg config.args)
];
};
masterCfContent = let
labels = [
"# service" "type" "private" "unpriv" "chroot" "wakeup" "maxproc"
"command + args"
];
labelDefaults = [
"# " "" "(yes)" "(yes)" "(no)" "(never)" "(100)" "" ""
];
masterCf = mapAttrsToList (const (getAttr "rawEntry")) cfg.masterConfig;
# A list of the maximum width of the columns across all lines and labels
maxWidths = let
foldLine = line: acc: let
columnLengths = map stringLength line;
in zipListsWith max acc columnLengths;
# We need to handle the last column specially here, because it's
# open-ended (command + args).
lines = [ labels labelDefaults ] ++ (map (l: init l ++ [""]) masterCf);
in fold foldLine (genList (const 0) (length labels)) lines;
# Pad a string with spaces from the right (opposite of fixedWidthString).
pad = width: str: let
padWidth = width - stringLength str;
padding = concatStrings (genList (const " ") padWidth);
in str + optionalString (padWidth > 0) padding;
# It's + 2 here, because that's the amount of spacing between columns.
fullWidth = fold (width: acc: acc + width + 2) 0 maxWidths;
formatLine = line: concatStringsSep " " (zipListsWith pad maxWidths line);
formattedLabels = let
sep = "# " + concatStrings (genList (const "=") (fullWidth + 5));
lines = [ sep (formatLine labels) (formatLine labelDefaults) sep ];
in concatStringsSep "\n" lines;
in formattedLabels + "\n" + concatMapStringsSep "\n" formatLine masterCf + "\n";
headerCheckOptions = { ... }:
{
options = {
pattern = mkOption {
type = types.str;
default = "/^.*/";
example = "/^X-Mailer:/";
description = "A regexp pattern matching the header";
};
action = mkOption {
type = types.str;
default = "DUNNO";
example = "BCC mail@example.com";
description = "The action to be executed when the pattern is matched";
};
};
};
headerChecks = concatStringsSep "\n" (map (x: "${x.pattern} ${x.action}") cfg.headerChecks) + cfg.extraHeaderChecks;
aliases = let separator = if cfg.aliasMapType == "hash" then ":" else ""; in
optionalString (cfg.postmasterAlias != "") ''
postmaster: ${cfg.postmasterAlias}
postmaster${separator} ${cfg.postmasterAlias}
''
+ optionalString (cfg.rootAlias != "") ''
root: ${cfg.rootAlias}
root${separator} ${cfg.rootAlias}
''
+ cfg.extraAliases
;
@ -176,8 +303,9 @@ let
virtualFile = pkgs.writeText "postfix-virtual" cfg.virtual;
checkClientAccessFile = pkgs.writeText "postfix-check-client-access" cfg.dnsBlacklistOverrides;
mainCfFile = pkgs.writeText "postfix-main.cf" mainCf;
masterCfFile = pkgs.writeText "postfix-master.cf" masterCf;
masterCfFile = pkgs.writeText "postfix-master.cf" masterCfContent;
transportFile = pkgs.writeText "postfix-transport" cfg.transport;
headerChecksFile = pkgs.writeText "postfix-header-checks" headerChecks;
in
@ -199,27 +327,29 @@ in
default = true;
description = "Whether to enable smtp in master.cf.";
};
enableSubmission = mkOption {
type = types.bool;
default = false;
description = "Whether to enable smtp submission";
description = "Whether to enable smtp submission.";
};
submissionOptions = mkOption {
type = types.attrs;
default = { "smtpd_tls_security_level" = "encrypt";
"smtpd_sasl_auth_enable" = "yes";
"smtpd_client_restrictions" = "permit_sasl_authenticated,reject";
"milter_macro_daemon_name" = "ORIGINATING";
};
default = {
smtpd_tls_security_level = "encrypt";
smtpd_sasl_auth_enable = "yes";
smtpd_client_restrictions = "permit_sasl_authenticated,reject";
milter_macro_daemon_name = "ORIGINATING";
};
example = {
smtpd_tls_security_level = "encrypt";
smtpd_sasl_auth_enable = "yes";
smtpd_sasl_type = "dovecot";
smtpd_client_restrictions = "permit_sasl_authenticated,reject";
milter_macro_daemon_name = "ORIGINATING";
};
description = "Options for the submission config in master.cf";
example = { "smtpd_tls_security_level" = "encrypt";
"smtpd_sasl_auth_enable" = "yes";
"smtpd_sasl_type" = "dovecot";
"smtpd_client_restrictions" = "permit_sasl_authenticated,reject";
"milter_macro_daemon_name" = "ORIGINATING";
};
};
setSendmail = mkOption {
@ -352,6 +482,25 @@ in
";
};
aliasMapType = mkOption {
type = with types; enum [ "hash" "regexp" "pcre" ];
default = "hash";
example = "regexp";
description = "The format the alias map should have. Use regexp if you want to use regular expressions.";
};
config = mkOption {
type = with types; attrsOf (either bool (either str (listOf str)));
default = defaultConf;
description = ''
The main.cf configuration file as key value set.
'';
example = {
mail_owner = "postfix";
smtp_use_tls = true;
};
};
extraConfig = mkOption {
type = types.lines;
default = "";
@ -395,6 +544,14 @@ in
";
};
virtualMapType = mkOption {
type = types.enum ["hash" "regexp" "pcre"];
default = "hash";
description = ''
What type of virtual alias map file to use. Use <literal>"regexp"</literal> for regular expressions.
'';
};
transport = mkOption {
default = "";
description = "
@ -413,6 +570,22 @@ in
description = "contents of check_client_access for overriding dnsBlacklists";
};
masterConfig = mkOption {
type = types.attrsOf (types.submodule masterCfOptions);
default = {};
example =
{ submission = {
type = "inet";
args = [ "-o" "smtpd_tls_security_level=encrypt" ];
};
};
description = ''
An attribute set of service options, which correspond to the service
definitions usually done within the Postfix
<filename>master.cf</filename> file.
'';
};
extraMasterConf = mkOption {
type = types.lines;
default = "";
@ -420,6 +593,27 @@ in
description = "Extra lines to append to the generated master.cf file.";
};
enableHeaderChecks = mkOption {
type = types.bool;
default = false;
example = true;
description = "Whether to enable postfix header checks";
};
headerChecks = mkOption {
type = types.listOf (types.submodule headerCheckOptions);
default = [];
example = [ { pattern = "/^X-Spam-Flag:/"; action = "REDIRECT spam@example.com"; } ];
description = "Postfix header checks.";
};
extraHeaderChecks = mkOption {
type = types.lines;
default = "";
example = "/^X-Spam-Flag:/ REDIRECT spam@example.com";
description = "Extra lines to /etc/postfix/header_checks file.";
};
aliasFiles = mkOption {
type = types.attrsOf types.path;
default = {};
@ -530,6 +724,101 @@ in
${pkgs.postfix}/bin/postfix set-permissions config_directory=/var/lib/postfix/conf
'';
};
services.postfix.masterConfig = {
smtp_inet = {
name = "smtp";
type = "inet";
private = false;
command = "smtpd";
};
pickup = {
private = false;
wakeup = 60;
maxproc = 1;
};
cleanup = {
private = false;
maxproc = 0;
};
qmgr = {
private = false;
wakeup = 300;
maxproc = 1;
};
tlsmgr = {
wakeup = 1000;
wakeupUnusedComponent = false;
maxproc = 1;
};
rewrite = {
command = "trivial-rewrite";
};
bounce = {
maxproc = 0;
};
defer = {
maxproc = 0;
command = "bounce";
};
trace = {
maxproc = 0;
command = "bounce";
};
verify = {
maxproc = 1;
};
flush = {
private = false;
wakeup = 1000;
wakeupUnusedComponent = false;
maxproc = 0;
};
proxymap = {
command = "proxymap";
};
proxywrite = {
maxproc = 1;
command = "proxymap";
};
showq = {
private = false;
};
error = {};
retry = {
command = "error";
};
discard = {};
local = {
privileged = true;
};
virtual = {
privileged = true;
};
lmtp = {
};
anvil = {
maxproc = 1;
};
scache = {
maxproc = 1;
};
} // optionalAttrs cfg.enableSubmission {
submission = {
type = "inet";
private = false;
command = "smtpd";
args = let
mkKeyVal = opt: val: [ "-o" (opt + "=" + val) ];
in concatLists (mapAttrsToList mkKeyVal cfg.submissionOptions);
};
} // optionalAttrs cfg.enableSmtp {
smtp = {};
relay = {
command = "smtp";
args = [ "-o" "smtp_fallback_relay=" ];
};
};
}
(mkIf haveAliases {
@ -541,9 +830,14 @@ in
(mkIf haveVirtual {
services.postfix.mapFiles."virtual" = virtualFile;
})
(mkIf cfg.enableHeaderChecks {
services.postfix.mapFiles."header_checks" = headerChecksFile;
})
(mkIf (cfg.dnsBlacklists != []) {
services.postfix.mapFiles."client_access" = checkClientAccessFile;
})
(mkIf (cfg.extraConfig != "") {
warnings = [ "The services.postfix.extraConfig option was deprecated. Please use services.postfix.config instead." ];
})
]);
}

View File

@ -439,6 +439,8 @@ in {
environment.GITLAB_SHELL_CONFIG_PATH = gitlabEnv.GITLAB_SHELL_CONFIG_PATH;
path = with pkgs; [
gitAndTools.git
gnutar
gzip
openssh
gitlab-workhorse
];

View File

@ -62,8 +62,7 @@ let
name = "nixos-manual";
desktopName = "NixOS Manual";
genericName = "View NixOS documentation in a web browser";
# TODO: find a better icon (Nix logo + help overlay?)
icon = "system-help";
icon = "nix-snowflake";
exec = "${helpScript}/bin/nixos-help";
categories = "System";
};
@ -115,7 +114,7 @@ in
environment.systemPackages =
[ manual.manual helpScript ]
++ optional config.services.xserver.enable desktopItem
++ optionals config.services.xserver.enable [desktopItem pkgs.nixos-icons]
++ optional config.programs.man.enable manual.manpages;
boot.extraTTYs = mkIf cfg.showManual ["tty${toString cfg.ttyNumber}"];

View File

@ -82,7 +82,7 @@ in
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
test -d "${cfg.dataDir}" || {
test -d "${cfg.dataDir}/Plex Media Server" || {
echo "Creating initial Plex data directory in \"${cfg.dataDir}\"."
mkdir -p "${cfg.dataDir}/Plex Media Server"
chown -R ${cfg.user}:${cfg.group} "${cfg.dataDir}"

View File

@ -0,0 +1,152 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.snapper;
in
{
options.services.snapper = {
snapshotInterval = mkOption {
type = types.str;
default = "hourly";
description = ''
Snapshot interval.
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
cleanupInterval = mkOption {
type = types.str;
default = "1d";
description = ''
Cleanup interval.
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
filters = mkOption {
type = types.nullOr types.lines;
default = null;
description = ''
Global display difference filter. See man:snapper(8) for more details.
'';
};
configs = mkOption {
default = { };
example = literalExample {
"home" = {
subvolume = "/home";
extraConfig = ''
ALLOW_USERS="alice"
'';
};
};
description = ''
Subvolume configuration
'';
type = types.attrsOf (types.submodule {
options = {
subvolume = mkOption {
type = types.path;
description = ''
Path of the subvolume or mount point.
This path is a subvolume and has to contain a subvolume named
.snapshots.
See also man:snapper(8) section PERMISSIONS.
'';
};
fstype = mkOption {
type = types.enum [ "btrfs" ];
default = "btrfs";
description = ''
Filesystem type. Only btrfs is stable and tested.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional configuration next to SUBVOLUME and FSTYPE.
See man:snapper-configs(5).
'';
};
};
});
};
};
config = mkIf (cfg.configs != {}) (let
documentation = [ "man:snapper(8)" "man:snapper-configs(5)" ];
in {
environment = {
systemPackages = [ pkgs.snapper ];
# Note: snapper/config-templates/default is only needed for create-config
# which is not the NixOS way to configure.
etc = {
"sysconfig/snapper".text = ''
SNAPPER_CONFIGS="${lib.concatStringsSep " " (builtins.attrNames cfg.configs)}"
'';
}
// (mapAttrs' (name: subvolume: nameValuePair "snapper/configs/${name}" ({
text = ''
${subvolume.extraConfig}
FSTYPE="${subvolume.fstype}"
SUBVOLUME="${subvolume.subvolume}"
'';
})) cfg.configs)
// (lib.optionalAttrs (cfg.filters != null) {
"snapper/filters/default.txt".text = cfg.filters;
});
};
services.dbus.packages = [ pkgs.snapper ];
systemd.services.snapper-timeline = {
description = "Timeline of Snapper Snapshots";
inherit documentation;
serviceConfig.ExecStart = "${pkgs.snapper}/lib/snapper/systemd-helper --timeline";
};
systemd.timers.snapper-timeline = {
description = "Timeline of Snapper Snapshots";
inherit documentation;
wantedBy = [ "basic.target" ];
timerConfig.OnCalendar = cfg.snapshotInterval;
};
systemd.services.snapper-cleanup = {
description = "Cleanup of Snapper Snapshots";
inherit documentation;
serviceConfig.ExecStart = "${pkgs.snapper}/lib/snapper/systemd-helper --cleanup";
};
systemd.timers.snapper-cleanup = {
description = "Cleanup of Snapper Snapshots";
inherit documentation;
wantedBy = [ "basic.target" ];
timerConfig.OnBootSec = "10m";
timerConfig.OnUnitActiveSec = cfg.cleanupInterval;
};
});
}

View File

@ -23,7 +23,7 @@ in
'';
serviceConfig = {
Type = "forking";
ExecStart = "/bin/sh -c '${pkgs.spice-vdagent}/bin/spice-vdagentd'";
ExecStart = "${pkgs.spice-vdagent}/bin/spice-vdagentd";
};
};
};

View File

@ -448,6 +448,8 @@ def cli(ctx):
"""
Manage Taskserver users and certificates
"""
if not IS_AUTO_CONFIG:
return
for path in (CA_KEY, CA_CERT, CRL_FILE):
if not os.path.exists(path):
msg = "CA setup not done or incomplete, missing file {}."

View File

@ -7,7 +7,6 @@ let
conf = pkgs.writeText "collectd.conf" ''
BaseDir "${cfg.dataDir}"
PIDFile "${cfg.pidFile}"
AutoLoadPlugin ${boolToString cfg.autoLoadPlugin}
Hostname "${config.networking.hostName}"
@ -26,13 +25,7 @@ let
in {
options.services.collectd = with types; {
enable = mkOption {
default = false;
description = ''
Whether to enable collectd agent.
'';
type = bool;
};
enable = mkEnableOption "collectd agent";
package = mkOption {
default = pkgs.collectd;
@ -59,14 +52,6 @@ in {
type = path;
};
pidFile = mkOption {
default = "/var/run/collectd.pid";
description = ''
Location of collectd pid file.
'';
type = path;
};
autoLoadPlugin = mkOption {
default = false;
description = ''
@ -100,27 +85,20 @@ in {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/sbin/collectd -C ${conf} -P ${cfg.pidFile}";
Type = "forking";
PIDFile = cfg.pidFile;
User = optional (cfg.user!="root") cfg.user;
ExecStart = "${cfg.package}/sbin/collectd -C ${conf} -f";
User = cfg.user;
PermissionsStartOnly = true;
};
preStart = ''
mkdir -p ${cfg.dataDir}
chmod 755 ${cfg.dataDir}
install -D /dev/null ${cfg.pidFile}
if [ "$(id -u)" = 0 ]; then
chown -R ${cfg.user} ${cfg.dataDir};
chown ${cfg.user} ${cfg.pidFile}
fi
mkdir -p "${cfg.dataDir}"
chmod 755 "${cfg.dataDir}"
chown -R ${cfg.user} "${cfg.dataDir}"
'';
};
};
users.extraUsers = optional (cfg.user == "collectd") {
name = "collectd";
uid = config.ids.uids.collectd;
};
};
}

View File

@ -66,15 +66,6 @@ let
How frequently to evaluate rules by default.
'';
};
labels = mkOption {
type = types.attrsOf types.str;
default = {};
description = ''
The labels to add to any timeseries that this Prometheus instance
scrapes.
'';
};
};
};

View File

@ -7,6 +7,10 @@ let
cfg = config.services.bitlbee;
bitlbeeUid = config.ids.uids.bitlbee;
bitlbeePkg = if cfg.libpurple_plugins == []
then pkgs.bitlbee
else pkgs.bitlbee.override { enableLibPurple = true; };
bitlbeeConfig = pkgs.writeText "bitlbee.conf"
''
[settings]
@ -25,6 +29,12 @@ let
${cfg.extraDefaults}
'';
purple_plugin_path =
lib.concatMapStringsSep ":"
(plugin: "${plugin}/lib/pidgin/")
cfg.libpurple_plugins
;
in
{
@ -90,6 +100,15 @@ in
'';
};
libpurple_plugins = mkOption {
type = types.listOf types.package;
default = [];
example = literalExample "[ pkgs.purple-matrix ]";
description = ''
The list of libpurple plugins to install.
'';
};
configDir = mkOption {
default = "/var/lib/bitlbee";
type = types.path;
@ -144,14 +163,16 @@ in
};
systemd.services.bitlbee =
{ description = "BitlBee IRC to other chat networks gateway";
{
environment.PURPLE_PLUGIN_PATH = purple_plugin_path;
description = "BitlBee IRC to other chat networks gateway";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig.User = "bitlbee";
serviceConfig.ExecStart = "${pkgs.bitlbee}/sbin/bitlbee -F -n -c ${bitlbeeConfig}";
serviceConfig.ExecStart = "${bitlbeePkg}/sbin/bitlbee -F -n -c ${bitlbeeConfig}";
};
environment.systemPackages = [ pkgs.bitlbee ];
environment.systemPackages = [ bitlbeePkg ];
};

View File

@ -11,7 +11,7 @@ let
trim = chars: str: let
nonchars = filter (x : !(elem x.value chars))
(imap (i: v: {ind = (sub i 1); value = v;}) (stringToCharacters str));
(imap0 (i: v: {ind = i; value = v;}) (stringToCharacters str));
in
if length nonchars == 0 then ""
else substring (head nonchars).ind (add 1 (sub (last nonchars).ind (head nonchars).ind)) str;

View File

@ -9,15 +9,18 @@ let
# /var/lib/misc is for dnsmasq.leases.
stateDirs = "/var/lib/NetworkManager /var/lib/dhclient /var/lib/misc";
dns =
if cfg.useDnsmasq then "dnsmasq"
else if config.services.resolved.enable then "systemd-resolved"
else "default";
configFile = writeText "NetworkManager.conf" ''
[main]
plugins=keyfile
dhcp=${cfg.dhcp}
dns=${if cfg.useDnsmasq then "dnsmasq" else "default"}
dns=${dns}
[keyfile]
${optionalString (config.networking.hostName != "")
''hostname=${config.networking.hostName}''}
${optionalString (cfg.unmanaged != [])
''unmanaged-devices=${lib.concatStringsSep ";" cfg.unmanaged}''}
@ -255,7 +258,7 @@ in {
{ source = overrideNameserversScript;
target = "NetworkManager/dispatcher.d/02overridedns";
}
++ lib.imap (i: s: {
++ lib.imap1 (i: s: {
inherit (s) source;
target = "NetworkManager/dispatcher.d/${dispatcherTypesSubdirMap.${s.type}}03userscript${lib.fixedWidthNumber 4 i}";
}) cfg.dispatcherScripts;

View File

@ -21,6 +21,8 @@ let
daemon reads in addition to the user's authorized_keys file.
You can combine the <literal>keys</literal> and
<literal>keyFiles</literal> options.
Warning: If you are using <literal>NixOps</literal> then don't use this
option since it will replace the key required for deployment via ssh.
'';
};

View File

@ -120,7 +120,7 @@ in
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ kmod iproute iptables utillinux ]; # XXX Linux
wants = [ "keys.target" ];
after = [ "network.target" "keys.target" ];
after = [ "network-online.target" "keys.target" ];
environment = {
STRONGSWAN_CONF = strongswanConf { inherit setup connections ca secrets; };
};

View File

@ -79,7 +79,15 @@ in
default = null;
type = types.nullOr types.str;
description = ''
The ip adress to bind to.
The ip address to listen on for incoming connections.
'';
};
bindToAddress = mkOption {
default = null;
type = types.nullOr types.str;
description = ''
The ip address to bind to (both listen on and send packets from).
'';
};
@ -131,7 +139,8 @@ in
Name = ${if data.name == null then "$HOST" else data.name}
DeviceType = ${data.interfaceType}
${optionalString (data.ed25519PrivateKeyFile != null) "Ed25519PrivateKeyFile = ${data.ed25519PrivateKeyFile}"}
${optionalString (data.listenAddress != null) "BindToAddress = ${data.listenAddress}"}
${optionalString (data.listenAddress != null) "ListenAddress = ${data.listenAddress}"}
${optionalString (data.bindToAddress != null) "BindToAddress = ${data.bindToAddress}"}
Device = /dev/net/tun
Interface = tinc.${network}
${data.extraConfig}

View File

@ -18,6 +18,13 @@ with lib;
default = 33445;
description = "udp port for toxcore, port-forward to help with connectivity if you run many nodes behind one NAT";
};
auto_add_peers = mkOption {
type = types.listOf types.string;
default = [];
example = ''[ "toxid1" "toxid2" ]'';
description = "peers to automacally connect to on startup";
};
};
};
@ -33,8 +40,13 @@ with lib;
chown toxvpn /run/toxvpn
'';
path = [ pkgs.toxvpn ];
script = ''
exec toxvpn -i ${config.services.toxvpn.localip} -l /run/toxvpn/control -u toxvpn -p ${toString config.services.toxvpn.port} ${lib.concatMapStringsSep " " (x: "-a ${x}") config.services.toxvpn.auto_add_peers}
'';
serviceConfig = {
ExecStart = "${pkgs.toxvpn}/bin/toxvpn -i ${config.services.toxvpn.localip} -l /run/toxvpn/control -u toxvpn -p ${toString config.services.toxvpn.port}";
KillMode = "process";
Restart = "on-success";
Type = "notify";
@ -43,6 +55,8 @@ with lib;
restartIfChanged = false; # Likely to be used for remote admin
};
environment.systemPackages = [ pkgs.toxvpn ];
users.extraUsers = {
toxvpn = {
uid = config.ids.uids.toxvpn;

View File

@ -23,8 +23,23 @@ let
privateKey = mkOption {
example = "yAnz5TF+lXXJte14tji3zlMNq+hd2rYUIgJBgB3fBmk=";
type = types.str;
description = "Base64 private key generated by wg genkey.";
type = with types; nullOr str;
default = null;
description = ''
Base64 private key generated by wg genkey.
Warning: Consider using privateKeyFile instead if you do not
want to store the key in the world-readable Nix store.
'';
};
privateKeyFile = mkOption {
example = "/private/wireguard_key";
type = with types; nullOr str;
default = null;
description = ''
Private key file as generated by wg genkey.
'';
};
listenPort = mkOption {
@ -91,7 +106,22 @@ let
example = "rVXs/Ni9tu3oDBLS4hOyAUAa1qTWVA3loR8eL20os3I=";
type = with types; nullOr str;
description = ''
base64 preshared key generated by wg genpsk. Optional,
Base64 preshared key generated by wg genpsk. Optional,
and may be omitted. This option adds an additional layer of
symmetric-key cryptography to be mixed into the already existing
public-key cryptography, for post-quantum resistance.
Warning: Consider using presharedKeyFile instead if you do not
want to store the key in the world-readable Nix store.
'';
};
presharedKeyFile = mkOption {
default = null;
example = "/private/wireguard_psk";
type = with types; nullOr str;
description = ''
File pointing to preshared key as generated by wg genpsk. Optional,
and may be omitted. This option adds an additional layer of
symmetric-key cryptography to be mixed into the already existing
public-key cryptography, for post-quantum resistance.
@ -134,54 +164,59 @@ let
};
generateConf = name: values: pkgs.writeText "wireguard-${name}.conf" ''
[Interface]
PrivateKey = ${values.privateKey}
${optionalString (values.listenPort != null) "ListenPort = ${toString values.listenPort}"}
${concatStringsSep "\n\n" (map (peer: ''
[Peer]
PublicKey = ${peer.publicKey}
${optionalString (peer.presharedKey != null) "PresharedKey = ${peer.presharedKey}"}
${optionalString (peer.allowedIPs != []) "AllowedIPs = ${concatStringsSep ", " peer.allowedIPs}"}
${optionalString (peer.endpoint != null) "Endpoint = ${peer.endpoint}"}
${optionalString (peer.persistentKeepalive != null) "PersistentKeepalive = ${toString peer.persistentKeepalive}"}
'') values.peers)}
'';
ipCommand = "${pkgs.iproute}/bin/ip";
wgCommand = "${pkgs.wireguard}/bin/wg";
generateUnit = name: values:
# exactly one way to specify the private key must be set
assert (values.privateKey != null) != (values.privateKeyFile != null);
let privKey = if values.privateKeyFile != null then values.privateKeyFile else pkgs.writeText "wg-key" values.privateKey;
in
nameValuePair "wireguard-${name}"
{
description = "WireGuard Tunnel - ${name}";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = lib.flatten([
ExecStart = flatten([
values.preSetup
"-${ipCommand} link del dev ${name}"
"${ipCommand} link add dev ${name} type wireguard"
"${wgCommand} setconf ${name} ${generateConf name values}"
(map (ip:
''${ipCommand} address add ${ip} dev ${name}''
"${ipCommand} address add ${ip} dev ${name}"
) values.ips)
("${wgCommand} set ${name} private-key ${privKey}" +
optionalString (values.listenPort != null) " listen-port ${toString values.listenPort}")
(map (peer:
assert (peer.presharedKeyFile == null) || (peer.presharedKey == null); # at most one of the two must be set
let psk = if peer.presharedKey != null then pkgs.writeText "wg-psk" peer.presharedKey else peer.presharedKeyFile;
in
"${wgCommand} set ${name} peer ${peer.publicKey}" +
optionalString (psk != null) " preshared-key ${psk}" +
optionalString (peer.endpoint != null) " endpoint ${peer.endpoint}" +
optionalString (peer.persistentKeepalive != null) " persistent-keepalive ${toString peer.persistentKeepalive}" +
optionalString (peer.allowedIPs != []) " allowed-ips ${concatStringsSep "," peer.allowedIPs}"
) values.peers)
"${ipCommand} link set up dev ${name}"
(flatten (map (peer: (map (ip:
(map (peer: (map (ip:
"${ipCommand} route add ${ip} dev ${name}"
) peer.allowedIPs)) values.peers))
) peer.allowedIPs)) values.peers)
values.postSetup
]);
ExecStop = [ ''${ipCommand} link del dev "${name}"'' ] ++ values.postShutdown;
ExecStop = flatten([
"${ipCommand} link del dev ${name}"
values.postShutdown
]);
};
};

View File

@ -37,7 +37,7 @@ let
[ cups.out additionalBackends cups-filters pkgs.ghostscript ]
++ optional cfg.gutenprint gutenprint
++ cfg.drivers;
pathsToLink = [ "/lib/cups" "/share/cups" "/bin" ];
pathsToLink = [ "/lib" "/share/cups" "/bin" ];
postBuild = cfg.bindirCmds;
ignoreCollisions = true;
};
@ -324,6 +324,8 @@ in
fi
''}
'';
serviceConfig.PrivateTmp = true;
};
systemd.services.cups-browsed = mkIf avahiEnabled

View File

@ -21,21 +21,20 @@ let
'';
github = cfg: ''
$(optionalString (!isNull cfg.github.org) "--github-org=${cfg.github.org}") \
$(optionalString (!isNull cfg.github.team) "--github-org=${cfg.github.team}") \
${optionalString (!isNull cfg.github.org) "--github-org=${cfg.github.org}"} \
${optionalString (!isNull cfg.github.team) "--github-org=${cfg.github.team}"} \
'';
google = cfg: ''
--google-admin-email=${cfg.google.adminEmail} \
--google-service-account=${cfg.google.serviceAccountJSON} \
$(repeatedArgs (group: "--google-group=${group}") cfg.google.groups) \
${repeatedArgs (group: "--google-group=${group}") cfg.google.groups} \
'';
};
authenticatedEmailsFile = pkgs.writeText "authenticated-emails" cfg.email.addresses;
getProviderOptions = cfg: provider:
if providerSpecificOptions ? provider then providerSpecificOptions.provider cfg else "";
getProviderOptions = cfg: provider: providerSpecificOptions.${provider} or (_: "") cfg;
mkCommandLine = cfg: ''
--provider='${cfg.provider}' \

View File

@ -0,0 +1,143 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.vault;
configFile = pkgs.writeText "vault.hcl" ''
listener "tcp" {
address = "${cfg.address}"
${if (cfg.tlsCertFile == null || cfg.tlsKeyFile == null) then ''
tls_disable = "true"
'' else ''
tls_cert_file = "${cfg.tlsCertFile}"
tls_key_file = "${cfg.tlsKeyFile}"
''}
${cfg.listenerExtraConfig}
}
storage "${cfg.storageBackend}" {
${optionalString (cfg.storagePath != null) ''path = "${cfg.storagePath}"''}
${optionalString (cfg.storageConfig != null) cfg.storageConfig}
}
${optionalString (cfg.telemetryConfig != "") ''
telemetry {
${cfg.telemetryConfig}
}
''}
'';
in
{
options = {
services.vault = {
enable = mkEnableOption "Vault daemon";
address = mkOption {
type = types.str;
default = "127.0.0.1:8200";
description = "The name of the ip interface to listen to";
};
tlsCertFile = mkOption {
type = types.nullOr types.str;
default = null;
example = "/path/to/your/cert.pem";
description = "TLS certificate file. TLS will be disabled unless this option is set";
};
tlsKeyFile = mkOption {
type = types.nullOr types.str;
default = null;
example = "/path/to/your/key.pem";
description = "TLS private key file. TLS will be disabled unless this option is set";
};
listenerExtraConfig = mkOption {
type = types.lines;
default = ''
tls_min_version = "tls12"
'';
description = "extra configuration";
};
storageBackend = mkOption {
type = types.enum [ "inmem" "file" "consul" "zookeeper" "s3" "azure" "dynamodb" "etcd" "mssql" "mysql" "postgresql" "swift" "gcs" ];
default = "inmem";
description = "The name of the type of storage backend";
};
storagePath = mkOption {
type = types.nullOr types.path;
default = if cfg.storageBackend == "file" then "/var/lib/vault" else null;
description = "Data directory for file backend";
};
storageConfig = mkOption {
type = types.nullOr types.lines;
default = null;
description = "Storage configuration";
};
telemetryConfig = mkOption {
type = types.lines;
default = "";
description = "Telemetry configuration";
};
};
};
config = mkIf cfg.enable {
assertions = [
{ assertion = cfg.storageBackend == "inmem" -> (cfg.storagePath == null && cfg.storageConfig == null);
message = ''The "inmem" storage expects no services.vault.storagePath nor services.vault.storageConfig'';
}
{ assertion = (cfg.storageBackend == "file" -> (cfg.storagePath != null && cfg.storageConfig == null)) && (cfg.storagePath != null -> cfg.storageBackend == "file");
message = ''You must set services.vault.storagePath only when using the "file" backend'';
}
];
users.extraUsers.vault = {
name = "vault";
group = "vault";
uid = config.ids.uids.vault;
description = "Vault daemon user";
};
users.extraGroups.vault.gid = config.ids.gids.vault;
systemd.services.vault = {
description = "Vault server daemon";
wantedBy = ["multi-user.target"];
after = [ "network.target" ]
++ optional (config.services.consul.enable && cfg.storageBackend == "consul") "consul.service";
restartIfChanged = false; # do not restart on "nixos-rebuild switch". It would seal the storage and disrupt the clients.
preStart = optionalString (cfg.storagePath != null) ''
install -d -m0700 -o vault -g vault "${cfg.storagePath}"
'';
serviceConfig = {
User = "vault";
Group = "vault";
PermissionsStartOnly = true;
ExecStart = "${pkgs.vault}/bin/vault server -config ${configFile}";
PrivateDevices = true;
PrivateTmp = true;
ProtectSystem = "full";
ProtectHome = "read-only";
AmbientCapabilities = "cap_ipc_lock";
NoNewPrivileges = true;
KillSignal = "SIGINT";
TimeoutStopSec = "30s";
Restart = "on-failure";
StartLimitInterval = "60s";
StartLimitBurst = 3;
};
unitConfig.RequiresMountsFor = optional (cfg.storagePath != null) cfg.storagePath;
};
};
}

View File

@ -0,0 +1,63 @@
{ config, lib, pkgs, ... }:
with lib;
let
nssModulesPath = config.system.nssModules.path;
cfg = config.services.saslauthd;
in
{
###### interface
options = {
services.saslauthd = {
enable = mkEnableOption "Whether to enable the Cyrus SASL authentication daemon.";
package = mkOption {
default = pkgs.cyrus_sasl.bin;
defaultText = "pkgs.cyrus_sasl.bin";
type = types.package;
description = "Cyrus SASL package to use.";
};
mechanism = mkOption {
type = types.str;
default = "pam";
description = "Auth mechanism to use";
};
config = mkOption {
type = types.lines;
default = "";
description = "Configuration to use for Cyrus SASL authentication daemon.";
};
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.saslauthd = {
description = "Cyrus SASL authentication daemon";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "@${cfg.package}/sbin/saslauthd saslauthd -a ${cfg.mechanism} -O ${pkgs.writeText "saslauthd.conf" cfg.config}";
Type = "forking";
PIDFile = "/run/saslauthd/saslauthd.pid";
Restart = "always";
};
};
};
}

View File

@ -0,0 +1,97 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-piwik">
<title>Piwik</title>
<para>
Piwik is a real-time web analytics application.
This module configures php-fpm as backend for piwik, optionally configuring an nginx vhost as well.
</para>
<para>
An automatic setup is not supported by piwik, so you need to configure piwik itself in the browser-based piwik setup.
</para>
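<para>
For orientation, a minimal configuration sketch might look like the following. The host name
<literal>stats.example.org</literal> is a placeholder, and whether <literal>services.nginx.enable</literal>
has to be set explicitly depends on the rest of your configuration; the remaining option names are the
ones declared by this module. Note that the nginx sub-options default to SSL with ACME enabled.
<programlisting>
services.piwik = {
  enable = true;
  # user that owns the php-fpm socket; "nginx" matches the nginx vhost below
  webServerUser = "nginx";
  # configure an nginx virtual host for piwik (SSL/ACME are on by default)
  nginx = {
    virtualHost = "stats.example.org";
  };
};
# the generated virtual host is only served if nginx itself is enabled
services.nginx.enable = true;
</programlisting>
</para>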
<section>
<title>Database Setup</title>
<para>
You also need to configure a MariaDB or MySQL database and database user for piwik yourself,
and enter those credentials in your browser.
You can use passwordless database authentication via the UNIX_SOCKET authentication plugin
with the following SQL commands:
<programlisting>
INSTALL PLUGIN unix_socket SONAME 'auth_socket';
ALTER USER root IDENTIFIED VIA unix_socket;
CREATE DATABASE piwik;
CREATE USER 'piwik'@'localhost' IDENTIFIED VIA unix_socket;
GRANT ALL PRIVILEGES ON piwik.* TO 'piwik'@'localhost';
</programlisting>
Then fill in <literal>piwik</literal> as database user and database name, and leave the password field blank.
This works with MariaDB and MySQL. This authentication works by allowing only the <literal>piwik</literal> unix
user to authenticate as the <literal>piwik</literal> database user (without needing a password), but no other users.
For more information on passwordless login, see
<link xlink:href="https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/" />.
</para>
<para>
Of course, you can use password based authentication as well, e.g. when the database is not on the same host.
</para>
</section>
<section>
<title>Backup</title>
<para>
You only need to take backups of your MySQL database and the
<filename>/var/lib/piwik/config/config.ini.php</filename> file.
Use a user in the <literal>piwik</literal> group or root to access the file.
For more information, see <link xlink:href="https://piwik.org/faq/how-to-install/faq_138/" />.
</para>
</section>
<section>
<title>Issues</title>
<itemizedlist>
<listitem>
<para>
Piwik's file integrity check will warn you.
This is due to the patches necessary for NixOS; you can safely ignore this.
</para>
</listitem>
<listitem>
<para>
Piwik will warn you that the JavaScript tracker is not writable.
This is because it's located in the read-only nix store.
You can safely ignore this, unless you need a plugin that needs JavaScript tracker access.
</para>
</listitem>
<listitem>
<para>
Sending mail from piwik, e.g. for the password reset function, might not work out of the box:
There's a problem with using <command>sendmail</command> from <literal>php-fpm</literal> that is
being investigated at <link xlink:href="https://github.com/NixOS/nixpkgs/issues/26611" />.
If you have (or don't have) this problem as well, please report it. You can enable SMTP as a method
to send mail in piwik's <quote>General Settings</quote> > <quote>Mail Server Settings</quote> instead.
</para>
</listitem>
</itemizedlist>
</section>
<section>
<title>Using other Web Servers than nginx</title>
<para>
You can use other web servers by forwarding calls for <filename>index.php</filename> and
<filename>piwik.php</filename> to the <literal>/run/phpfpm-piwik.sock</literal> fastcgi unix socket.
You can use the nginx configuration in the module code as a reference to what else should be configured.
</para>
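<para>
As an illustration, a sketch of such a forwarding setup with the NixOS lighttpd module could look like
this (the host name is a placeholder, and the <literal>lighttpd</literal> user name as well as the
document root are assumptions you should verify for your setup):
<programlisting>
services.piwik = {
  enable = true;
  # the fastcgi socket must be owned by the user the web server runs as
  webServerUser = "lighttpd";
};
services.lighttpd = {
  enable = true;
  enableModules = [ "mod_fastcgi" ];
  extraConfig = ''
    $HTTP["host"] == "stats.example.org" {
      # serve piwik's static files from the package
      server.document-root = "${pkgs.piwik}/share"
      # forward index.php and piwik.php to the php-fpm socket set up by this module
      fastcgi.server = (
        "/index.php" => (( "socket" => "/run/phpfpm-piwik.sock" )),
        "/piwik.php" => (( "socket" => "/run/phpfpm-piwik.sock" ))
      )
    }
  '';
};
</programlisting>
</para>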
</section>
</chapter>

View File

@ -0,0 +1,219 @@
{ config, lib, pkgs, services, ... }:
with lib;
let
cfg = config.services.piwik;
user = "piwik";
dataDir = "/var/lib/${user}";
pool = user;
# it's not possible to use /run/phpfpm/${pool}.sock because /run/phpfpm/ is root:root 0770,
# and therefore is not accessible by the web server.
phpSocket = "/run/phpfpm-${pool}.sock";
phpExecutionUnit = "phpfpm-${pool}";
databaseService = "mysql.service";
in {
options = {
services.piwik = {
# NixOS PR for database setup: https://github.com/NixOS/nixpkgs/pull/6963
# piwik issue for automatic piwik setup: https://github.com/piwik/piwik/issues/10257
# TODO: find a nice way to do this when more NixOS MySQL and / or piwik automatic setup stuff is implemented.
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable piwik web analytics with php-fpm backend.
'';
};
webServerUser = mkOption {
type = types.str;
example = "nginx";
description = ''
Name of the owner of the ${phpSocket} fastcgi socket for piwik.
If you want to use another webserver than nginx, you need to set this to that server's user
and pass fastcgi requests to `index.php` and `piwik.php` to this socket.
'';
};
phpfpmProcessManagerConfig = mkOption {
type = types.str;
default = ''
; default phpfpm process manager settings
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; log worker's stdout, but this has a performance hit
catch_workers_output = yes
'';
description = ''
Settings for phpfpm's process manager. You might need to change this depending on the load for piwik.
'';
};
nginx = mkOption {
# TODO: for maximum flexibility, it would be nice to use nginx's vhost_options module
# but this only makes sense if we can somehow specify defaults suitable for piwik.
# But users can always copy the piwik nginx config to their configuration.nix and customize it.
type = types.nullOr (types.submodule {
options = {
virtualHost = mkOption {
type = types.str;
default = "piwik.${config.networking.hostName}";
example = "piwik.$\{config.networking.hostName\}";
description = ''
Name of the nginx virtualhost to use and set up.
'';
};
enableSSL = mkOption {
type = types.bool;
default = true;
description = "Whether to enable https.";
};
forceSSL = mkOption {
type = types.bool;
default = true;
description = "Whether to always redirect to https.";
};
enableACME = mkOption {
type = types.bool;
default = true;
description = "Whether to ask Let's Encrypt to sign a certificate for this vhost.";
};
};
});
default = null;
example = { virtualHost = "stats.$\{config.networking.hostName\}"; };
description = ''
The options to use to configure an nginx virtualHost.
If null (the default), no nginx virtualHost will be configured.
'';
};
};
};
config = mkIf cfg.enable {
users.extraUsers.${user} = {
isSystemUser = true;
createHome = true;
home = dataDir;
group = user;
};
users.extraGroups.${user} = {};
systemd.services.piwik_setup_update = {
# everything needs to set up and up to date before piwik php files are executed
requiredBy = [ "${phpExecutionUnit}.service" ];
before = [ "${phpExecutionUnit}.service" ];
# the update part of the script can only work if the database is already up and running
requires = [ databaseService ];
after = [ databaseService ];
path = [ pkgs.piwik ];
serviceConfig = {
Type = "oneshot";
User = user;
# hide especially config.ini.php from other
UMask = "0007";
Environment = "PIWIK_USER_PATH=${dataDir}";
# chown + chmod in preStart needs root
PermissionsStartOnly = true;
};
# correct ownership and permissions in case they're not correct anymore,
# e.g. after restoring from backup or moving from another system.
# Note that ${dataDir}/config/config.ini.php might contain the MySQL password.
preStart = ''
chown -R ${user}:${user} ${dataDir}
chmod -R ug+rwX,o-rwx ${dataDir}
'';
script = ''
# Use User-Private Group scheme to protect piwik data, but allow administration / backup via piwik group
# Copy config folder
chmod g+s "${dataDir}"
cp -r "${pkgs.piwik}/config" "${dataDir}/"
chmod -R u+rwX,g+rwX,o-rwx "${dataDir}"
# check whether user setup has already been done
if test -f "${dataDir}/config/config.ini.php"; then
# then execute possibly pending database upgrade
piwik-console core:update --yes
fi
'';
};
systemd.services.${phpExecutionUnit} = {
# stop phpfpm on package upgrade, do database upgrade via piwik_setup_update, and then restart
restartTriggers = [ pkgs.piwik ];
# stop config.ini.php from getting written with read permission for others
serviceConfig.UMask = "0007";
};
services.phpfpm.poolConfigs = {
${pool} = ''
listen = "${phpSocket}"
listen.owner = ${cfg.webServerUser}
listen.group = root
listen.mode = 0600
user = ${user}
env[PIWIK_USER_PATH] = ${dataDir}
${cfg.phpfpmProcessManagerConfig}
'';
};
services.nginx.virtualHosts = mkIf (cfg.nginx != null) {
# References:
# https://fralef.me/piwik-hardening-with-nginx-and-php-fpm.html
# https://github.com/perusio/piwik-nginx
${cfg.nginx.virtualHost} = {
root = "${pkgs.piwik}/share";
enableSSL = cfg.nginx.enableSSL;
enableACME = cfg.nginx.enableACME;
forceSSL = cfg.nginx.forceSSL;
locations."/" = {
index = "index.php";
};
# allow index.php for webinterface
locations."= /index.php".extraConfig = ''
fastcgi_pass unix:${phpSocket};
'';
# allow piwik.php for tracking
locations."= /piwik.php".extraConfig = ''
fastcgi_pass unix:${phpSocket};
'';
# Any other attempt to access any php files is forbidden
locations."~* ^.+\.php$".extraConfig = ''
return 403;
'';
# Disallow access to unneeded directories
# config and tmp are already removed
locations."~ ^/(?:core|lang|misc)/".extraConfig = ''
return 403;
'';
# Disallow access to several helper files
locations."~* \.(?:bat|git|ini|sh|txt|tpl|xml|md)$".extraConfig = ''
return 403;
'';
# No crawling of this site for bots that obey robots.txt - no useful information here.
locations."= /robots.txt".extraConfig = ''
return 200 "User-agent: *\nDisallow: /\n";
'';
# let browsers cache piwik.js
locations."= /piwik.js".extraConfig = ''
expires 1M;
'';
};
};
};
meta = {
doc = ./piwik-doc.xml;
maintainers = with lib.maintainers; [ florianjacob ];
};
}

View File

@ -0,0 +1,58 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.lighttpd.collectd;
collectionConf = pkgs.writeText "collection.conf" ''
datadir: "${config.services.collectd.dataDir}"
libdir: "${config.services.collectd.package}/lib/collectd"
'';
defaultCollectionCgi = config.services.collectd.package.overrideDerivation(old: {
name = "collection.cgi";
configurePhase = "true";
buildPhase = "true";
installPhase = ''
substituteInPlace contrib/collection.cgi --replace '"/etc/collection.conf"' '$ENV{COLLECTION_CONF}'
cp contrib/collection.cgi $out
'';
});
in
{
options.services.lighttpd.collectd = {
enable = mkEnableOption "collectd subservice accessible at http://yourserver/collectd";
collectionCgi = mkOption {
type = types.path;
default = defaultCollectionCgi;
description = ''
Path to the collection.cgi script from (collectd sources)/contrib/collection.cgi.
This option allows using a customized version.
'';
};
};
config = mkIf cfg.enable {
services.lighttpd.enableModules = [ "mod_cgi" "mod_alias" "mod_setenv" ];
services.lighttpd.extraConfig = ''
$HTTP["url"] =~ "^/collectd" {
cgi.assign = (
".cgi" => "${pkgs.perl}/bin/perl"
)
alias.url = (
"/collectd" => "${cfg.collectionCgi}"
)
setenv.add-environment = (
"PERL5LIB" => "${with pkgs; lib.makePerlPath [ perlPackages.CGI perlPackages.HTMLParser perlPackages.URI rrdtool ]}",
"COLLECTION_CONF" => "${collectionConf}"
)
}
'';
};
}

View File

@ -177,7 +177,7 @@ in
configText = mkOption {
default = "";
type = types.lines;
example = ''...verbatim config file contents...'';
example = ''...verbatim config file contents...'';
description = ''
Overridable config file contents to use for lighttpd. By default, use
the contents automatically generated by NixOS.

View File

@ -0,0 +1,111 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.minio;
in
{
meta.maintainers = [ maintainers.bachp ];
options.services.minio = {
enable = mkEnableOption "Minio Object Storage";
listenAddress = mkOption {
default = ":9000";
type = types.str;
description = "Listen on a specific IP address and port.";
};
dataDir = mkOption {
default = "/var/lib/minio/data";
type = types.path;
description = "The data directory, for storing the objects.";
};
configDir = mkOption {
default = "/var/lib/minio/config";
type = types.path;
description = "The config directory, for the access keys and other settings.";
};
accessKey = mkOption {
default = "";
type = types.str;
description = ''
Access key of 5 to 20 characters in length that clients use to access the server.
This overrides the access key that is generated by minio on first startup and stored inside the
<literal>configDir</literal> directory.
'';
};
secretKey = mkOption {
default = "";
type = types.str;
description = ''
Specify the Secret key of 8 to 40 characters in length that clients use to access the server.
This overrides the secret key that is generated by minio on first startup and stored inside the
<literal>configDir</literal> directory.
'';
};
region = mkOption {
default = "us-east-1";
type = types.str;
description = ''
The physical location of the server. By default it is set to us-east-1, which is the same as AWS S3's and Minio's default region.
'';
};
browser = mkOption {
default = true;
type = types.bool;
description = "Enable or disable access to web UI.";
};
package = mkOption {
default = pkgs.minio;
defaultText = "pkgs.minio";
type = types.package;
description = "Minio package to use.";
};
};
config = mkIf cfg.enable {
systemd.services.minio = {
description = "Minio Object Storage";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
# Make sure directories exist with correct owner
mkdir -p ${cfg.configDir}
chown -R minio:minio ${cfg.configDir}
mkdir -p ${cfg.dataDir}
chown minio:minio ${cfg.dataDir}
'';
serviceConfig = {
PermissionsStartOnly = true;
ExecStart = "${cfg.package}/bin/minio server --address ${cfg.listenAddress} --config-dir=${cfg.configDir} ${cfg.dataDir}";
Type = "simple";
User = "minio";
Group = "minio";
LimitNOFILE = 65536;
};
environment = {
MINIO_REGION = "${cfg.region}";
MINIO_BROWSER = "${if cfg.browser then "on" else "off"}";
} // optionalAttrs (cfg.accessKey != "") {
MINIO_ACCESS_KEY = "${cfg.accessKey}";
} // optionalAttrs (cfg.secretKey != "") {
MINIO_SECRET_KEY = "${cfg.secretKey}";
};
};
users.extraUsers.minio = {
group = "minio";
uid = config.ids.uids.minio;
};
users.extraGroups.minio.gid = config.ids.uids.minio;
};
}

View File

@ -65,6 +65,7 @@ let
gzip_proxied any;
gzip_comp_level 9;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_vary on;
''}
${optionalString (cfg.recommendedProxySettings) ''
@ -123,45 +124,49 @@ let
vhosts = concatStringsSep "\n" (mapAttrsToList (vhostName: vhost:
let
serverName = vhost.serverName;
ssl = vhost.enableSSL || vhost.forceSSL;
port = if vhost.port != null then vhost.port else (if ssl then 443 else 80);
listenString = toString port + optionalString ssl " ssl http2"
+ optionalString vhost.default " default_server";
acmeLocation = optionalString vhost.enableACME (''
defaultPort = if ssl then 443 else 80;
listenString = { addr, port, ... }:
"listen ${addr}:${toString (if port != null then port else defaultPort)} "
+ optionalString ssl "ssl http2 "
+ optionalString vhost.default "default_server"
+ ";";
redirectListenString = { addr, ... }:
"listen ${addr}:80 ${optionalString vhost.default "default_server"};";
acmeLocation = ''
location /.well-known/acme-challenge {
${optionalString (vhost.acmeFallbackHost != null) "try_files $uri @acme-fallback;"}
root ${vhost.acmeRoot};
auth_basic off;
}
'' + (optionalString (vhost.acmeFallbackHost != null) ''
location @acme-fallback {
auth_basic off;
proxy_pass http://${vhost.acmeFallbackHost};
}
''));
${optionalString (vhost.acmeFallbackHost != null) ''
location @acme-fallback {
auth_basic off;
proxy_pass http://${vhost.acmeFallbackHost};
}
''}
'';
in ''
${optionalString vhost.forceSSL ''
server {
listen 80 ${optionalString vhost.default "default_server"};
${optionalString enableIPv6
''listen [::]:80 ${optionalString vhost.default "default_server"};''
}
${concatMapStringsSep "\n" redirectListenString vhost.listen}
server_name ${serverName} ${concatStringsSep " " vhost.serverAliases};
${acmeLocation}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation}
location / {
return 301 https://$host${optionalString (port != 443) ":${toString port}"}$request_uri;
return 301 https://$host$request_uri;
}
}
''}
server {
listen ${listenString};
${optionalString enableIPv6 "listen [::]:${listenString};"}
server_name ${serverName} ${concatStringsSep " " vhost.serverAliases};
${acmeLocation}
${concatMapStringsSep "\n" listenString vhost.listen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation}
${optionalString (vhost.root != null) "root ${vhost.root};"}
${optionalString (vhost.globalRedirect != null) ''
return 301 http${optionalString ssl "s"}://${vhost.globalRedirect}$request_uri;
@ -380,7 +385,7 @@ in
virtualHosts = mkOption {
type = types.attrsOf (types.submodule (import ./vhost-options.nix {
inherit lib;
inherit config lib;
}));
default = {
localhost = {};

View File

@ -3,7 +3,7 @@
# has additional options that affect the web server as a whole, like
# the user/group to run under.)
{ lib }:
{ config, lib }:
with lib;
{
@ -26,12 +26,26 @@ with lib;
'';
};
port = mkOption {
type = types.nullOr types.int;
default = null;
listen = mkOption {
type = with types; listOf (submodule {
options = {
addr = mkOption { type = str; description = "IP address."; };
port = mkOption { type = nullOr int; description = "Port number."; };
};
});
default =
[ { addr = "0.0.0.0"; port = null; } ]
++ optional config.networking.enableIPv6
{ addr = "[::]"; port = null; };
example = [
{ addr = "195.154.1.1"; port = 443; }
{ addr = "192.168.1.2"; port = 443; }
];
description = ''
Port for the server. Defaults to 80 for http
and 443 for https (i.e. when enableSSL is set).
Listen addresses and ports for this virtual host.
IPv6 addresses must be enclosed in square brackets.
Setting the port to <literal>null</literal> defaults
to 80 for http and 443 for https (i.e. when enableSSL is set).
'';
};

View File

@ -33,7 +33,6 @@ in
environment.systemPackages = [
pkgs.fluxbox
pkgs.libsForQt5.kwindowsystem
pkgs.kdeFrameworks.oxygen-icons5
pkgs.lumina
pkgs.numlockx
pkgs.qt5.qtsvg

View File

@ -71,7 +71,7 @@ let
name = "multihead${toString num}";
inherit config;
};
in imap mkHead cfg.xrandrHeads;
in imap1 mkHead cfg.xrandrHeads;
xrandrDeviceSection = let
monitors = flip map xrandrHeads (h: ''
@ -648,34 +648,53 @@ in
services.xserver.xkbDir = mkDefault "${pkgs.xkeyboard_config}/etc/X11/xkb";
system.extraDependencies = [
(pkgs.runCommand "xkb-layouts-exist" {
layouts=cfg.layout;
} ''
missing=()
while read -d , layout
do
[[ -f "${cfg.xkbDir}/symbols/$layout" ]] || missing+=($layout)
done <<< "$layouts,"
if [[ ''${#missing[@]} -eq 0 ]]
then
touch $out
exit 0
system.extraDependencies = singleton (pkgs.runCommand "xkb-layouts-exist" {
inherit (cfg) layout xkbDir;
} ''
# We can use the default IFS here, because the layouts won't contain
# spaces or tabs and are ruled out by the sed expression below.
availableLayouts="$(
sed -n -e ':i /^! \(layout\|variant\) *$/ {
# Loop through all of the layouts/variants until we hit another ! at
# the start of the line or the line is empty ('t' branches only if
# the last substitution was successful, so if the line is empty the
# substition will fail).
:l; n; /^!/bi; s/^ *\([^ ]\+\).*/\1/p; tl
}' "$xkbDir/rules/base.lst" | sort -u
)"
layoutNotFound() {
echo >&2
echo "The following layouts and variants are available:" >&2
echo >&2
# While an output width of 80 is more desirable for small terminals, we
# really don't know the amount of columns of the terminal from within
# the builder. The content in $availableLayouts however is pretty
# large, so let's opt for a larger width here, because it will print a
# smaller amount of lines on modern KMS/framebuffer terminals and won't
# lose information even in smaller terminals (it only will look a bit
# ugly).
echo "$availableLayouts" | ${pkgs.utillinux}/bin/column -c 150 >&2
echo >&2
echo "However, the keyboard layout definition in" \
"\`services.xserver.layout' contains the layout \`$1', which" \
"isn't a valid layout or variant." >&2
echo >&2
exit 1
}
# Again, we don't need to take care of IFS, see the comment for
# $availableLayouts.
for l in ''${layout//,/ }; do
if ! echo "$availableLayouts" | grep -qxF "$l"; then
layoutNotFound "$l"
fi
done
cat >&2 <<EOF
Some of the selected keyboard layouts do not exist:
''${missing[@]}
Set services.xserver.layout to the name of an existing keyboard
layout (check ${cfg.xkbDir}/symbols for options).
EOF
exit -1
'')
];
touch "$out"
'');
services.xserver.config =
''

View File

@ -11,35 +11,42 @@ import errno
import warnings
import ctypes
libc = ctypes.CDLL("libc.so.6")
import re
def copy_if_not_exists(source, dest):
if not os.path.exists(dest):
shutil.copyfile(source, dest)
def system_dir(generation):
return "/nix/var/nix/profiles/system-%d-link" % (generation)
def system_dir(profile, generation):
if profile:
return "/nix/var/nix/profiles/system-profiles/%s-%d-link" % (profile, generation)
else:
return "/nix/var/nix/profiles/system-%d-link" % (generation)
BOOT_ENTRY = """title NixOS
BOOT_ENTRY = """title NixOS{profile}
version Generation {generation}
linux {kernel}
initrd {initrd}
options {kernel_params}
"""
def write_loader_conf(generation):
def write_loader_conf(profile, generation):
with open("@efiSysMountPoint@/loader/loader.conf.tmp", 'w') as f:
if "@timeout@" != "":
f.write("timeout @timeout@\n")
f.write("default nixos-generation-%d\n" % generation)
if profile:
f.write("default nixos-%s-generation-%d\n" % (profile, generation))
else:
f.write("default nixos-generation-%d\n" % (generation))
if not @editor@:
f.write("editor 0");
os.rename("@efiSysMountPoint@/loader/loader.conf.tmp", "@efiSysMountPoint@/loader/loader.conf")
def profile_path(generation, name):
return os.readlink("%s/%s" % (system_dir(generation), name))
def profile_path(profile, generation, name):
return os.readlink("%s/%s" % (system_dir(profile, generation), name))
def copy_from_profile(generation, name, dry_run=False):
store_file_path = profile_path(generation, name)
def copy_from_profile(profile, generation, name, dry_run=False):
store_file_path = profile_path(profile, generation, name)
suffix = os.path.basename(store_file_path)
store_dir = os.path.basename(os.path.dirname(store_file_path))
efi_file_path = "/efi/nixos/%s-%s.efi" % (store_dir, suffix)
@ -47,22 +54,26 @@ def copy_from_profile(generation, name, dry_run=False):
copy_if_not_exists(store_file_path, "@efiSysMountPoint@%s" % (efi_file_path))
return efi_file_path
def write_entry(generation, machine_id):
kernel = copy_from_profile(generation, "kernel")
initrd = copy_from_profile(generation, "initrd")
def write_entry(profile, generation, machine_id):
kernel = copy_from_profile(profile, generation, "kernel")
initrd = copy_from_profile(profile, generation, "initrd")
try:
append_initrd_secrets = profile_path(generation, "append-initrd-secrets")
append_initrd_secrets = profile_path(profile, generation, "append-initrd-secrets")
subprocess.check_call([append_initrd_secrets, "@efiSysMountPoint@%s" % (initrd)])
except FileNotFoundError:
pass
entry_file = "@efiSysMountPoint@/loader/entries/nixos-generation-%d.conf" % (generation)
generation_dir = os.readlink(system_dir(generation))
if profile:
entry_file = "@efiSysMountPoint@/loader/entries/nixos-%s-generation-%d.conf" % (profile, generation)
else:
entry_file = "@efiSysMountPoint@/loader/entries/nixos-generation-%d.conf" % (generation)
generation_dir = os.readlink(system_dir(profile, generation))
tmp_path = "%s.tmp" % (entry_file)
kernel_params = "systemConfig=%s init=%s/init " % (generation_dir, generation_dir)
with open("%s/kernel-params" % (generation_dir)) as params_file:
kernel_params = kernel_params + params_file.read()
with open(tmp_path, 'w') as f:
f.write(BOOT_ENTRY.format(generation=generation,
f.write(BOOT_ENTRY.format(profile=" [" + profile + "]" if profile else "",
generation=generation,
kernel=kernel,
initrd=initrd,
kernel_params=kernel_params))
@ -77,29 +88,33 @@ def mkdir_p(path):
if e.errno != errno.EEXIST or not os.path.isdir(path):
raise
def get_generations(profile):
def get_generations(profile=None):
gen_list = subprocess.check_output([
"@nix@/bin/nix-env",
"--list-generations",
"-p",
"/nix/var/nix/profiles/%s" % (profile),
"/nix/var/nix/profiles/%s" % ("system-profiles/" + profile if profile else "system"),
"--option", "build-users-group", ""],
universal_newlines=True)
gen_lines = gen_list.split('\n')
gen_lines.pop()
return [ int(line.split()[0]) for line in gen_lines ]
return [ (profile, int(line.split()[0])) for line in gen_lines ]
def remove_old_entries(gens):
slice_start = len("@efiSysMountPoint@/loader/entries/nixos-generation-")
slice_end = -1 * len(".conf")
rex_profile = re.compile("^@efiSysMountPoint@/loader/entries/nixos-(.*)-generation-.*\.conf$")
rex_generation = re.compile("^@efiSysMountPoint@/loader/entries/nixos.*-generation-(.*)\.conf$")
known_paths = []
for gen in gens:
known_paths.append(copy_from_profile(gen, "kernel", True))
known_paths.append(copy_from_profile(gen, "initrd", True))
for path in glob.iglob("@efiSysMountPoint@/loader/entries/nixos-generation-[1-9]*.conf"):
known_paths.append(copy_from_profile(*gen, "kernel", True))
known_paths.append(copy_from_profile(*gen, "initrd", True))
for path in glob.iglob("@efiSysMountPoint@/loader/entries/nixos*-generation-[1-9]*.conf"):
try:
gen = int(path[slice_start:slice_end])
if not gen in gens:
if rex_profile.match(path):
prof = rex_profile.sub(r"\1", path)
else:
prof = "system"
gen = int(rex_generation.sub(r"\1", path))
if not (prof, gen) in gens:
os.unlink(path)
except ValueError:
pass
@ -107,6 +122,14 @@ def remove_old_entries(gens):
if not path in known_paths:
os.unlink(path)
def get_profiles():
if os.path.isdir("/nix/var/nix/profiles/system-profiles/"):
return [x
for x in os.listdir("/nix/var/nix/profiles/system-profiles/")
if not x.endswith("-link")]
else:
return []
def main():
parser = argparse.ArgumentParser(description='Update NixOS-related systemd-boot files')
parser.add_argument('default_config', metavar='DEFAULT-CONFIG', help='The default NixOS config to boot')
@ -141,12 +164,14 @@ def main():
mkdir_p("@efiSysMountPoint@/efi/nixos")
mkdir_p("@efiSysMountPoint@/loader/entries")
gens = get_generations("system")
gens = get_generations()
for profile in get_profiles():
gens += get_generations(profile)
remove_old_entries(gens)
for gen in gens:
write_entry(gen, machine_id)
if os.readlink(system_dir(gen)) == args.default_config:
write_loader_conf(gen)
write_entry(*gen, machine_id)
if os.readlink(system_dir(*gen)) == args.default_config:
write_loader_conf(*gen)
# Since fat32 provides little recovery facilities after a crash,
# it can leave the system in an unbootable state, when a crash/outage

View File

@ -241,7 +241,7 @@ in
description = ''
The encrypted disk that should be opened before the root
filesystem is mounted. Both LVM-over-LUKS and LUKS-over-LVM
setups are sypported. The unencrypted devices can be accessed as
setups are supported. The unencrypted devices can be accessed as
<filename>/dev/mapper/<replaceable>name</replaceable></filename>.
'';

Some files were not shown because too many files have changed in this diff