Merge pull request #1 from NixOS/master

Get Master
This commit is contained in:
sjau 2018-08-24 12:06:37 +02:00 committed by GitHub
commit 17f152a18b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1998 changed files with 55086 additions and 44737 deletions

.github/CODEOWNERS vendored
View File

@ -21,7 +21,8 @@
/pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314
/pkgs/stdenv
/pkgs/stdenv/generic @Ericson2314
/pkgs/stdenv/cross @Ericson2314
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314

View File

@ -43,7 +43,7 @@ See the nixpkgs manual for more details on [standard meta-attributes](https://ni
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list archives, pull request discussions or upstream changes, it may require a lot of work.
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient.

View File

@ -8,7 +8,7 @@ build daemon as so-called channels. To get channel information via git, add
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote:
```
% git remote add channels git://github.com/NixOS/nixpkgs-channels.git
% git remote add channels https://github.com/NixOS/nixpkgs-channels.git
```
For stability and maximum binary package support, it is recommended to maintain
@ -37,5 +37,5 @@ For pull-requests, please rebase onto nixpkgs `master`.
Communication:
* [Mailing list](https://groups.google.com/forum/#!forum/nix-devel)
* [Discourse Forum](https://discourse.nixos.org/)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)

View File

@ -1047,6 +1047,19 @@ As you can see, `packunused` finds out that although the testsuite component has
no redundant dependencies the library component of `scientific-0.3.5.1` depends
on `ghc-prim` which is unused in the library.
### Using hackage2nix with nixpkgs
Hackage package derivations are found in the
[`hackage-packages.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/hackage-packages.nix)
file within `nixpkgs` and are used as the initial package set for
`haskellPackages`. The `hackage-packages.nix` file is not meant to be edited
by hand, but rather autogenerated by [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix),
which by default uses the [`configuration-hackage2nix.yaml`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/configuration-hackage2nix.yaml)
file to generate all the derivations.
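The shape of an entry generated into `hackage-packages.nix` is roughly as
follows (abbreviated and purely illustrative; the hash is a placeholder):

```nix
"scientific" = callPackage
  ({ mkDerivation, base, binary, bytestring, deepseq }:
   mkDerivation {
     pname = "scientific";
     version = "0.3.5.1";
     sha256 = "...";  # placeholder
     libraryHaskellDepends = [ base binary bytestring deepseq ];
     description = "Numbers represented using scientific notation";
     license = stdenv.lib.licenses.bsd3;
   }) {};
```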
To modify the contents of `configuration-hackage2nix.yaml`, follow the
instructions on [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix).
## Other resources
- The Youtube video [Nix Loves Haskell](https://www.youtube.com/watch?v=BsBhi_r-OeE)

View File

@ -15,13 +15,17 @@ stdenv.mkDerivation {
buildPhase = "ant";
}
</programlisting>
Note that <varname>jdk</varname> is an alias for the OpenJDK.
</para>
Note that <varname>jdk</varname> is an alias for the OpenJDK (self-built
where available, or pre-built via Zulu).
Platforms with OpenJDK not (yet) in Nixpkgs (<literal>Aarch32</literal>,
<literal>Aarch64</literal>) point to the (unfree)
<literal>oraclejdk</literal>.
</para>
<para>
JAR files that are intended to be used by other packages should be installed
in <filename>$out/share/java</filename>. The OpenJDK has a stdenv setup hook
that adds any JARs in the <filename>share/java</filename> directories of the
in <filename>$out/share/java</filename>. JDKs have a stdenv setup hook
that adds any JARs in the <filename>share/java</filename> directories of the
build inputs to the <envar>CLASSPATH</envar> environment variable. For
instance, if the package <literal>libfoo</literal> installs a JAR named
<filename>foo.jar</filename> in its <filename>share/java</filename>
@ -57,7 +61,18 @@ installPhase =
<literal>${jre}/bin/java</literal> instead of
<literal>${jdk}/bin/java</literal>, you prevent your package from depending
on the JDK at runtime.
</para>
</para>
<para>
Note that all JDKs expose <literal>home</literal> via <literal>passthru</literal>, so if your application
requires an environment variable such as <envar>JAVA_HOME</envar> to be set, that
can be done in a generic fashion with the <literal>--set</literal> argument
of <literal>makeWrapper</literal>:
<programlisting>
--set JAVA_HOME ${jdk.home}
</programlisting>
</para>
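<para>
As a hypothetical sketch, assuming <literal>makeWrapper</literal> is among the
build inputs and the package installs a <filename>bin/foo</filename> launcher,
this could be wired up as:
<programlisting>
postFixup = ''
  wrapProgram $out/bin/foo --set JAVA_HOME ${jdk.home}
'';
</programlisting>
</para>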
<para>
It is possible to use a different Java compiler than <command>javac</command>

View File

@ -59,6 +59,11 @@ all crate sources of this package. Currently it is obtained by inserting a
fake checksum into the expression and building the package once. The correct
checksum can then be taken from the failed build.
When the `Cargo.lock`, provided by upstream, is not in sync with the
`Cargo.toml`, it is possible to use `cargoPatches` to update it. All patches
added in `cargoPatches` will also be prepended to the patches in `patches` at
build-time.
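As a hedged sketch (the patch file name and the checksum are placeholders, not
real values), this might look like:

```nix
rustPlatform.buildRustPackage rec {
  name = "example-0.1.0";
  src = ./.;

  # placeholder; replace with the checksum reported by the failed build
  cargoSha256 = "0000000000000000000000000000000000000000000000000000";

  # patches that regenerate Cargo.lock; they are prepended to `patches`
  cargoPatches = [ ./update-cargo-lock.patch ];
}
```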
To install crates with nix there is also an experimental project called
[nixcrates](https://github.com/fractalide/nixcrates).

View File

@ -64,7 +64,7 @@ stdenv.mkDerivation {
sha256 = "1ian3kwh2vg6hr3ymrv48s04gijs539vzrq62xr76bxbhbwnz2np";
};
inherit noSysDirs;
configureFlags = "--target=arm-linux";
configureFlags = [ "--target=arm-linux" ];
}
---

View File

@ -705,4 +705,52 @@ overrides = super: self: rec {
</programlisting>
</para>
</section>
<section xml:id="sec-citrix">
<title>Citrix Receiver</title>
<para>
The <link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> is a remote
desktop viewer which provides access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually, as the license agreements of the vendor
need to be accepted first. This is available at the
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">download page at citrix.com</link>.
Then run <literal>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</literal>.
With the archive available in the store, the package can be built and installed with Nix.
</para>
<para>
<emphasis>Note: it's recommended to install <literal>Citrix Receiver</literal> using
<literal>nix-env -i</literal> or globally to ensure that the <literal>.desktop</literal> files
are installed properly into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't
be possible to open <literal>.ica</literal> files
automatically from the browser to start a Citrix connection.</emphasis>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> in <literal>nixpkgs</literal> trusts several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default.
However, several companies using Citrix might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue, the package provides a simple
mechanism to add custom certificates without rebuilding the entire package using <literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
</chapter>

View File

@ -29,6 +29,7 @@
}
</programlisting>
</listitem>
<listitem>
<para>
On darwin libraries are linked using absolute paths, libraries are
@ -46,6 +47,37 @@
}
</programlisting>
</listitem>
<listitem>
<para>
Even if the libraries are linked using absolute paths and resolved via
their <literal>install_name</literal> correctly, tests can sometimes fail
to run binaries. This happens because the <varname>checkPhase</varname>
runs before the libraries are installed.
</para>
<para>
This can usually be solved by running the tests after the
<varname>installPhase</varname> or alternatively by using
<varname>DYLD_LIBRARY_PATH</varname>. More information about this variable
can be found in the <citerefentry><refentrytitle>dyld</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> manpage.
</para>
<programlisting>
dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
Reason: image not found
./tests/jqtest: line 5: 75779 Abort trap: 6
</programlisting>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
doInstallCheck = true;
installCheckTarget = "check";
}
</programlisting>
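<para>
A sketch of the <varname>DYLD_LIBRARY_PATH</varname> alternative mentioned
above, assuming the library is built into <filename>$PWD/lib</filename> and
the tests are run via <command>make check</command>:
<programlisting>
stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  # ...
  checkPhase = ''
    # point the dynamic loader at the not-yet-installed library
    export DYLD_LIBRARY_PATH=$PWD/lib
    make check
  '';
}
</programlisting>
</para>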
</listitem>
<listitem>
<para>
Some packages assume xcode is available and use <command>xcrun</command>

View File

@ -9,7 +9,7 @@
<para>
Checkout the Nixpkgs source tree:
<screen>
$ git clone git://github.com/NixOS/nixpkgs.git
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs</screen>
</para>
</listitem>

View File

@ -103,8 +103,9 @@
<itemizedlist>
<listitem>
<para>
mention-bot usually notifies GitHub users based on the submitted changes,
but it can happen that it misses some of the package maintainers.
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
@ -376,8 +377,9 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<itemizedlist>
<listitem>
<para>
Mention-bot notify GitHub users based on the submitted changes, but it
can happen that it miss some of the package maintainers.
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
@ -603,10 +605,11 @@ policy.
-->
<para>
In a case a contributor leaves definitively the Nix community, he should
create an issue or notify the mailing list with references of packages and
modules he maintains so the maintainership can be taken over by other
contributors.
In case a contributor leaves the Nix community for good, they
should create an issue or post on <link
xlink:href="https://discourse.nixos.org">Discourse</link> with
references to the packages and modules they maintain so the
maintainership can be taken over by other contributors.
</para>
</section>
</chapter>

View File

@ -836,9 +836,10 @@ passthru = {
These can optionally be compressed using <command>gzip</command>
(<filename>.tar.gz</filename>, <filename>.tgz</filename> or
<filename>.tar.Z</filename>), <command>bzip2</command>
(<filename>.tar.bz2</filename> or <filename>.tbz2</filename>) or
<command>xz</command> (<filename>.tar.xz</filename> or
<filename>.tar.lzma</filename>).
(<filename>.tar.bz2</filename>, <filename>.tbz2</filename> or
<filename>.tbz</filename>) or <command>xz</command>
(<filename>.tar.xz</filename>, <filename>.tar.lzma</filename> or
<filename>.txz</filename>).
</para>
</listitem>
</varlistentry>

View File

@ -384,11 +384,12 @@ rec {
recursiveUpdateUntil = pred: lhs: rhs:
let f = attrPath:
zipAttrsWith (n: values:
let here = attrPath ++ [n]; in
if tail values == []
|| pred attrPath (head (tail values)) (head values) then
|| pred here (head (tail values)) (head values) then
head values
else
f (attrPath ++ [n]) values
f here values
);
in f [] [rhs lhs];

View File

@ -195,9 +195,10 @@ rec {
let self = f self // {
newScope = scope: newScope (self // scope);
callPackage = self.newScope {};
# TODO(@Ericson2314): Haromonize argument order of `g` with everything else
overrideScope = g:
makeScope newScope
(self_: let super = f self_; in super // g super self_);
(lib.fixedPoints.extends (lib.flip g) f);
packages = f;
};
in self;

View File

@ -80,7 +80,7 @@ let
inherit (strings) concatStrings concatMapStrings concatImapStrings
intersperse concatStringsSep concatMapStringsSep
concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath makePerlPath optionalString
makeLibraryPath makeBinPath makePerlPath makeFullPerlPath optionalString
hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString

View File

@ -210,6 +210,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Common Public License 1.0";
};
curl = {
fullName = "MIT/X11 derivate";
url = "https://curl.haxx.se/docs/copyright.html";
};
doc = spdx {
spdxId = "DOC";
fullName = "DOC License";
@ -613,6 +618,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Vim License";
};
virtualbox-puel = {
fullName = "Oracle VM VirtualBox Extension Pack Personal Use and Evaluation License (PUEL)";
url = "https://www.virtualbox.org/wiki/VirtualBox_PUEL";
free = false;
};
vsl10 = spdx {
spdxId = "VSL-1.0";
fullName = "Vovida Software License v1.0";
@ -643,6 +654,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "wxWindows Library Licence, Version 3.1";
};
xfig = {
fullName = "xfig";
url = "http://mcj.sourceforge.net/authors.html#xfig";
};
zlib = spdx {
spdxId = "Zlib";
fullName = "zlib License";

View File

@ -126,6 +126,15 @@ rec {
*/
makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl";
/* Construct a perl search path recursively including all dependencies (such as $PERL5LIB)
Example:
pkgs = import <nixpkgs> { }
makeFullPerlPath [ pkgs.perlPackages.CGI ]
=> "/nix/store/fddivfrdc1xql02h9q500fpnqy12c74n-perl-CGI-4.38/lib/perl5/site_perl:/nix/store/8hsvdalmsxqkjg0c5ifigpf31vc4vsy2-perl-HTML-Parser-3.72/lib/perl5/site_perl:/nix/store/zhc7wh0xl8hz3y3f71nhlw1559iyvzld-perl-HTML-Tagset-3.20/lib/perl5/site_perl"
*/
makeFullPerlPath = deps: makePerlPath (lib.misc.closePropagation deps);
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.

View File

@ -44,5 +44,5 @@ in rec {
openbsd = filterDoubles predicates.isOpenBSD;
unix = filterDoubles predicates.isUnix;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux"];
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "powerpc64le-linux"];
}

View File

@ -8,6 +8,14 @@ rec {
#
# Linux
#
powernv = {
config = "powerpc64le-unknown-linux-gnu";
platform = platforms.powernv;
};
musl-power = {
config = "powerpc64le-unknown-linux-musl";
platform = platforms.powernv;
};
sheevaplug = rec {
config = "armv5tel-unknown-linux-gnueabi";

View File

@ -11,6 +11,7 @@ rec {
isi686 = { cpu = cpuTypes.i686; };
isx86_64 = { cpu = cpuTypes.x86_64; };
isPowerPC = { cpu = cpuTypes.powerpc; };
isPower = { cpu = { family = "power"; }; };
isx86 = { cpu = { family = "x86"; }; };
isAarch32 = { cpu = { family = "arm"; bits = 32; }; };
isAarch64 = { cpu = { family = "arm"; bits = 64; }; };

View File

@ -90,6 +90,8 @@ rec {
mips64el = { bits = 64; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "power"; };
powerpc64 = { bits = 64; significantByte = bigEndian; family = "power"; };
powerpc64le = { bits = 64; significantByte = littleEndian; family = "power"; };
riscv32 = { bits = 32; significantByte = littleEndian; family = "riscv"; };
riscv64 = { bits = 64; significantByte = littleEndian; family = "riscv"; };

View File

@ -20,6 +20,22 @@ rec {
kernelAutoModules = false;
};
powernv = {
name = "PowerNV";
kernelArch = "powerpc";
kernelBaseConfig = "powernv_defconfig";
kernelTarget = "zImage";
kernelInstallTarget = "install";
kernelFile = "vmlinux";
kernelAutoModules = true;
# avoid driver/FS trouble arising from unusual page size
kernelExtraConfig = ''
PPC_64K_PAGES n
PPC_4K_PAGES y
IPV6 y
'';
};
##
## ARM
##
@ -458,5 +474,6 @@ rec {
"armv7l-linux" = armv7l-hf-multiplatform;
"aarch64-linux" = aarch64-multiplatform;
"mipsel-linux" = fuloong2f_n32;
"powerpc64le-linux" = powernv;
}.${system} or pcBase;
}

View File

@ -213,6 +213,30 @@ runTests {
};
# ATTRSETS
# code from the example
testRecursiveUpdateUntil = {
expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) {
# first attribute set
foo.bar = 1;
foo.baz = 2;
bar = 3;
} {
#second attribute set
foo.bar = 1;
foo.quz = 2;
baz = 4;
};
expected = {
foo.bar = 1; # 'foo.*' from the second set
foo.quz = 2; #
bar = 3; # 'bar' from the first set
baz = 4; # 'baz' from the second set
};
};
# GENERATORS
# these tests assume attributes are converted to lists
# in alphabetical order

View File

@ -534,11 +534,21 @@
github = "bodil";
name = "Bodil Stokke";
};
boj = {
email = "brian@uncannyworks.com";
github = "boj";
name = "Brian Jones";
};
boothead = {
email = "ben@perurbis.com";
github = "boothead";
name = "Ben Ford";
};
borisbabic = {
email = "boris.ivan.babic@gmail.com";
github = "borisbabic";
name = "Boris Babić";
};
bosu = {
email = "boriss@gmail.com";
github = "bosu";
@ -663,6 +673,11 @@
github = "changlinli";
name = "Changlin Li";
};
CharlesHD = {
email = "charleshdespointes@gmail.com";
github = "CharlesHD";
name = "Charles Huyghues-Despointes";
};
chaoflow = {
email = "flo@chaoflow.net";
github = "chaoflow";
@ -802,6 +817,11 @@
github = "coroa";
name = "Jonas Hörsch";
};
costrouc = {
email = "chris.ostrouchov@gmail.com";
github = "costrouc";
name = "Chris Ostrouchov";
};
couchemar = {
email = "couchemar@yandex.ru";
github = "couchemar";
@ -921,11 +941,21 @@
github = "deepfire";
name = "Kosyrev Serge";
};
deltaevo = {
email = "deltaduartedavid@gmail.com";
github = "DeltaEvo";
name = "Duarte David";
};
demin-dmitriy = {
email = "demindf@gmail.com";
github = "demin-dmitriy";
name = "Dmitriy Demin";
};
demize = {
email = "johannes@kyriasis.com";
github = "kyrias";
name = "Johannes Löthberg";
};
demyanrogozhin = {
email = "demyan.rogozhin@gmail.com";
github = "demyanrogozhin";
@ -1362,6 +1392,11 @@
github = "fps";
name = "Florian Paul Schmidt";
};
freepotion = {
email = "freepotion@protonmail.com";
github = "freepotion";
name = "Free Potion";
};
Fresheyeball = {
email = "fresheyeball@gmail.com";
github = "fresheyeball";
@ -1560,6 +1595,11 @@
github = "havvy";
name = "Ryan Scheel";
};
hax404 = {
email = "hax404foogit@hax404.de";
github = "hax404";
name = "Georg Haas";
};
hbunke = {
email = "bunke.hendrik@gmail.com";
github = "hbunke";
@ -1659,6 +1699,11 @@
github = "ikervagyok";
name = "Balázs Lengyel";
};
illegalprime = {
email = "themichaeleden@gmail.com";
github = "illegalprime";
name = "Michael Eden";
};
ilya-kolpakov = {
email = "ilya.kolpakov@gmail.com";
github = "ilya-kolpakov";
@ -1674,6 +1719,11 @@
github = "imalsogreg";
name = "Greg Hale";
};
imuli = {
email = "i@imu.li";
github = "imuli";
name = "Imuli";
};
infinisil = {
email = "infinisil@icloud.com";
github = "infinisil";
@ -1822,6 +1872,11 @@
github = "jluttine";
name = "Jaakko Luttinen";
};
jmettes = {
email = "jonathan@jmettes.com";
github = "jmettes";
name = "Jonathan Mettes";
};
Jo = {
email = "0x4A6F@shackspace.de";
name = "Joachim Ernst";
@ -1890,6 +1945,11 @@
github = "jonafato";
name = "Jon Banafato";
};
jonathanreeve = {
email = "jon.reeve@gmail.com";
github = "JonathanReeve";
name = "Jonathan Reeve";
};
joncojonathan = {
email = "joncojonathan@gmail.com";
github = "joncojonathan";
@ -1910,6 +1970,11 @@
github = "jpotier";
name = "Martin Potier";
};
jqueiroz = {
email = "nixos@johnjq.com";
github = "jqueiroz";
name = "Jonathan Queiroz";
};
jraygauthier = {
email = "jraygauthier@gmail.com";
github = "jraygauthier";
@ -2079,6 +2144,11 @@
github = "kuznero";
name = "Roman Kuznetsov";
};
kylewlacy = {
email = "kylelacy+nix@pm.me";
github = "kylewlacy";
name = "Kyle Lacy";
};
lasandell = {
email = "lasandell@gmail.com";
github = "lasandell";
@ -2164,6 +2234,11 @@
github = "nathanielbaxter";
name = "Nathaniel Baxter";
};
lightdiscord = {
email = "root@arnaud.sh";
github = "lightdiscord";
name = "Arnaud Pascal";
};
lihop = {
email = "nixos@leroy.geek.nz";
github = "lihop";
@ -2259,6 +2334,11 @@
github = "luispedro";
name = "Luis Pedro Coelho";
};
lukeadams = {
email = "luke.adams@belljar.io";
github = "lukeadams";
name = "Luke Adams";
};
lukego = {
email = "luke@snabb.co";
github = "lukego";
@ -2822,10 +2902,10 @@
github = "nocoolnametom";
name = "Tom Doggett";
};
nonfreeblob = {
email = "nonfreeblob@yandex.com";
github = "nonfreeblob";
name = "nonfreeblob";
noneucat = {
email = "andy@lolc.at";
github = "noneucat";
name = "Andy Chun";
};
notthemessiah = {
email = "brian.cohen.88@gmail.com";
@ -3182,6 +3262,11 @@
email = "patrick.callahan@latitudeengineering.com";
name = "Patrick Callahan";
};
q3k = {
email = "q3k@q3k.org";
github = "q3k";
name = "Serge Bazanski";
};
qknight = {
email = "js@lastlog.de";
github = "qknight";
@ -3192,6 +3277,11 @@
github = "qoelet";
name = "Kenny Shen";
};
qyliss = {
email = "hi@alyssa.is";
github = "alyssais";
name = "Alyssa Ross";
};
ragge = {
email = "r.dahlen@gmail.com";
github = "ragnard";
@ -3226,6 +3316,11 @@
email = "ravloony@gmail.com";
name = "Tom Macdonald";
};
rawkode = {
email = "david.andrew.mckay@gmail.com";
github = "rawkode";
name = "David McKay";
};
razvan = {
email = "razvan.panda@gmail.com";
github = "razvan-panda";
@ -3663,6 +3758,11 @@
github = "s-na";
name = "S. Nordin Abouzahra";
};
snaar = {
email = "snaar@snaar.net";
github = "snaar";
name = "Serguei Narojnyi";
};
snyh = {
email = "snyh@snyh.org";
github = "snyh";
@ -3783,6 +3883,11 @@
github = "swarren83";
name = "Shawn Warren";
};
swdunlop = {
email = "swdunlop@gmail.com";
github = "swdunlop";
name = "Scott W. Dunlop";
};
swflint = {
email = "swflint@flintfam.org";
github = "swflint";

View File

@ -14,7 +14,7 @@
xlink:href="http://nixos.org/nixpkgs/manual">Nixpkgs
manual</link>. In short, you clone Nixpkgs:
<screen>
$ git clone git://github.com/NixOS/nixpkgs.git
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs
</screen>
Then you write and test the package as described in the Nixpkgs manual.

View File

@ -26,6 +26,7 @@
<xref linkend="opt-services.xserver.desktopManager.plasma5.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.xfce.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.gnome3.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.mate.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.xmonad.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.twm.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.icewm.enable"/> = true;

View File

@ -11,9 +11,9 @@
modify NixOS, however, you should check out the latest sources from Git. This
is as follows:
<screen>
$ git clone git://github.com/NixOS/nixpkgs.git
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs
$ git remote add channels git://github.com/NixOS/nixpkgs-channels.git
$ git remote add channels https://github.com/NixOS/nixpkgs-channels
$ git remote update channels
</screen>
This will check out the latest Nixpkgs sources to

View File

@ -326,10 +326,9 @@ Retype new UNIX password: ***
</screen>
<note>
<para>
To prevent the password prompt, set
<code><xref linkend="opt-users.mutableUsers"/> = false;</code> in
<filename>configuration.nix</filename>, which allows unattended
installation necessary in automation.
For unattended installations, it is possible to use
<command>nixos-install --no-root-passwd</command>
in order to disable the password prompt entirely.
</para>
</note>
</para>

View File

@ -17,8 +17,8 @@
<para>
If you encounter problems, please report them on the
<literal
xlink:href="https://groups.google.com/forum/#!forum/nix-devel">nix-devel</literal>
mailing list or on the <link
xlink:href="https://discourse.nixos.org">Discourse</literal>
or on the <link
xlink:href="irc://irc.freenode.net/#nixos">
<literal>#nixos</literal> channel on Freenode</link>. Bugs should be
reported in

View File

@ -73,6 +73,20 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
</para>
<itemizedlist>
<listitem>
<para>
The <varname>services.cassandra</varname> module has been rewritten from
scratch. The service has passing tests for versions 2.1, 2.2, 3.0 and 3.11 of <link
xlink:href="https://cassandra.apache.org/">Apache Cassandra</link>.
</para>
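<para>
As an illustrative sketch only (option names are taken from the new module;
the values shown are examples, not recommendations):
<programlisting>
services.cassandra = {
  enable = true;
  package = pkgs.cassandra_3_11;
  listenAddress = "127.0.0.1";
  # merged into the generated cassandra.yaml
  extraConfig = {
    commitlog_sync_batch_window_in_ms = 3;
  };
};
</programlisting>
</para>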
</listitem>
<listitem>
<para>
There is a new <varname>services.foundationdb</varname> module for deploying
<link xlink:href="https://www.foundationdb.org">FoundationDB</link> clusters.
</para>
</listitem>
<listitem>
<para>
When enabled, <literal>iproute2</literal> will copy the files expected
@ -81,6 +95,22 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
routing tables for instance.
</para>
</listitem>
<listitem>
<para>
<varname>services.strongswan-swanctl</varname>
is a modern replacement for <varname>services.strongswan</varname>.
You can use either one of them to set up IPsec VPNs, but not both at the same time.
</para>
<para>
<varname>services.strongswan-swanctl</varname> uses the
<link xlink:href="https://wiki.strongswan.org/projects/strongswan/wiki/swanctl">swanctl</link>
command which uses the modern
<link xlink:href="https://github.com/strongswan/strongswan/blob/master/src/libcharon/plugins/vici/README.md">vici</link>
<emphasis>Versatile IKE Configuration Interface</emphasis>.
The deprecated <literal>ipsec</literal> command used in <varname>services.strongswan</varname> uses the legacy
<link xlink:href="https://github.com/strongswan/strongswan/blob/master/README_LEGACY.md">stroke configuration interface</link>.
</para>
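<para>
A minimal, hedged sketch of opting into the new module (connections, pools
and secrets are declared through the module's own swanctl options, which are
not shown here):
<programlisting>
services.strongswan-swanctl.enable = true;
</programlisting>
</para>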
</listitem>
</itemizedlist>
</section>
@ -97,6 +127,12 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
</para>
<itemizedlist>
<listitem>
<para>
The deprecated <varname>services.cassandra</varname> module has
seen a complete rewrite. (See above.)
</para>
</listitem>
<listitem>
<para>
<literal>lib.strict</literal> is removed. Use
@ -154,6 +190,16 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
which indicates that the nix output hash will be used as tag.
</para>
</listitem>
<listitem>
<para>
The options
<literal>boot.initrd.luks.devices.<replaceable>name</replaceable>.yubikey.ramfsMountPoint</literal> and
<literal>boot.initrd.luks.devices.<replaceable>name</replaceable>.yubikey.storage.mountPoint</literal>
were removed. The <literal>luksroot.nix</literal> module never supported more than one YubiKey at
a time anyway, so those options never had any effect. You should be able to remove them
from your config without any issues.
</para>
</listitem>
</itemizedlist>
</section>
@ -232,6 +278,8 @@ inherit (pkgs.nixos {
<literal>lib.traceCallXml</literal> has been deprecated. Please complain
if you use the function regularly.
</para>
</listitem>
<listitem>
<para>
The attribute <literal>lib.nixpkgsVersion</literal> has been deprecated in
favor of <literal>lib.version</literal>. Please refer to the discussion in
@ -239,6 +287,13 @@ inherit (pkgs.nixos {
for further reference.
</para>
</listitem>
<listitem>
<para>
<literal>lib.recursiveUpdateUntil</literal> was not acting according to its
specification. It has been fixed to act according to the docstring, and a
test has been added.
</para>
</listitem>
<listitem>
<para>
The module for <option>security.dhparams</option> has two new options now:
@ -370,7 +425,12 @@ inherit (pkgs.nixos {
<varname>s6-dns</varname>, <varname>s6-networking</varname>,
<varname>s6-linux-utils</varname> and <varname>s6-portable-utils</varname> respectively.
</para>
</listitem>
</listitem>
<listitem>
<para>
The module option <option>nix.useSandbox</option> now defaults to <literal>true</literal>.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -6,16 +6,19 @@
, storePaths
, volumeLabel
, uuid ? "44444444-4444-4444-8888-888888888888"
, e2fsprogs
, libfaketime
, perl
}:
let
sdClosureInfo = pkgs.closureInfo { rootPaths = storePaths; };
sdClosureInfo = pkgs.buildPackages.closureInfo { rootPaths = storePaths; };
in
pkgs.stdenv.mkDerivation {
name = "ext4-fs.img";
nativeBuildInputs = with pkgs; [e2fsprogs.bin libfaketime perl];
nativeBuildInputs = [e2fsprogs.bin libfaketime perl];
buildCommand =
''

View File

@ -70,7 +70,7 @@ in
description = ''
Shell script code called during global environment initialisation
after all variables and profileVariables have been set.
This code is asumed to be shell-independent, which means you should
This code is assumed to be shell-independent, which means you should
stick to pure sh without sh word split.
'';
type = types.lines;

View File

@ -29,8 +29,5 @@ with lib;
# Add Memtest86+ to the CD.
boot.loader.grub.memtest86.enable = true;
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
system.stateVersion = mkDefault "18.03";
}

View File

@ -318,7 +318,7 @@ in
options = [ "allow_other" "cow" "nonempty" "chroot=/mnt-root" "max_files=32768" "hide_meta_files" "dirs=/nix/.rw-store=rw:/nix/.ro-store=ro" ];
};
boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "usb-storage" "uas" ];
boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "uas" ];
boot.blacklistedKernelModules = [ "nouveau" ];

View File

@ -33,9 +33,6 @@ in
# Also increase the amount of CMA to ensure the virtual console on the RPi3 works.
boot.kernelParams = ["cma=32M" "console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"];
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = {
populateBootCommands = let
configTxt = pkgs.writeText "config.txt" ''

View File

@ -34,9 +34,6 @@ in
# - ttySAC2: for Exynos (ODROID-XU3)
boot.kernelParams = ["console=ttyS0,115200n8" "console=ttymxc0,115200n8" "console=ttyAMA0,115200n8" "console=ttyO0,115200n8" "console=ttySAC2,115200n8" "console=tty0"];
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = {
populateBootCommands = let
configTxt = pkgs.writeText "config.txt" ''

View File

@ -27,9 +27,6 @@ in
boot.consoleLogLevel = lib.mkDefault 7;
boot.kernelPackages = pkgs.linuxPackages_rpi;
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = {
populateBootCommands = let
configTxt = pkgs.writeText "config.txt" ''

View File

@ -12,13 +12,12 @@
with lib;
let
rootfsImage = import ../../../lib/make-ext4-fs.nix {
inherit pkgs;
rootfsImage = pkgs.callPackage ../../../lib/make-ext4-fs.nix ({
inherit (config.sdImage) storePaths;
volumeLabel = "NIXOS_SD";
} // optionalAttrs (config.sdImage.rootPartitionUUID != null) {
uuid = config.sdImage.rootPartitionUUID;
};
});
in
{
options.sdImage = {
@ -94,10 +93,10 @@ in
sdImage.storePaths = [ config.system.build.toplevel ];
system.build.sdImage = pkgs.stdenv.mkDerivation {
system.build.sdImage = pkgs.callPackage ({ stdenv, dosfstools, e2fsprogs, mtools, libfaketime, utillinux }: stdenv.mkDerivation {
name = config.sdImage.imageName;
buildInputs = with pkgs; [ dosfstools e2fsprogs mtools libfaketime utillinux ];
nativeBuildInputs = [ dosfstools e2fsprogs mtools libfaketime utillinux ];
buildCommand = ''
mkdir -p $out/nix-support $out/sd-image
@ -138,7 +137,7 @@ in
(cd boot; mcopy -bpsvm -i ../bootpart.img ./* ::)
dd conv=notrunc if=bootpart.img of=$img seek=$START count=$SECTORS
'';
};
}) {};
boot.postBootCommands = ''
# On the first boot do some maintenance tasks

View File

@ -14,7 +14,4 @@ with lib;
../../profiles/base.nix
../../profiles/installation-device.nix
];
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
}

View File

@ -28,7 +28,6 @@ with lib;
++ (if pkgs.stdenv.system == "aarch64-linux"
then []
else [ pkgs.grub2 pkgs.syslinux ]);
system.boot.loader.kernelFile = pkgs.stdenv.platform.kernelTarget;
fileSystems."/" =
{ fsType = "tmpfs";
@ -86,7 +85,7 @@ with lib;
system.build.netbootIpxeScript = pkgs.writeTextDir "netboot.ipxe" ''
#!ipxe
kernel ${pkgs.stdenv.platform.kernelTarget} init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}
kernel ${pkgs.stdenv.hostPlatform.platform.kernelTarget} init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}
initrd initrd
boot
'';

View File

@ -536,6 +536,13 @@ if ($showHardwareConfig) {
# Use the systemd-boot EFI boot loader.
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
EOF
} elsif (-e "/boot/extlinux") {
$bootLoaderConfig = <<EOF;
# Use the extlinux boot loader. (NixOS wants to enable GRUB by default)
boot.loader.grub.enable = false;
# Enables the generation of /boot/extlinux/extlinux.conf
boot.loader.generic-extlinux-compatible.enable = true;
EOF
} elsif ($virt ne "systemd-nspawn") {
$bootLoaderConfig = <<EOF;

View File

@ -323,6 +323,9 @@
mapred = 296;
hadoop = 297;
hydron = 298;
cfssl = 299;
cassandra = 300;
qemu-libvirtd = 301;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -606,6 +609,9 @@
mapred = 296;
hadoop = 297;
hydron = 298;
cfssl = 299;
cassandra = 300;
qemu-libvirtd = 301;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View File

@ -76,9 +76,6 @@ in
config = {
warnings = lib.optional (options.system.stateVersion.highestPrio > 1000)
"You don't have `system.stateVersion` explicitly set. Expect things to break.";
system.nixos = {
# These defaults are set here rather than up there so that
# changing them would not rebuild the manual

View File

@ -201,6 +201,7 @@
./services/databases/4store-endpoint.nix
./services/databases/4store.nix
./services/databases/aerospike.nix
./services/databases/cassandra.nix
./services/databases/clickhouse.nix
./services/databases/couchdb.nix
./services/databases/firebird.nix
@ -246,6 +247,7 @@
./services/desktops/gnome3/tracker-miners.nix
./services/desktops/profile-sync-daemon.nix
./services/desktops/telepathy.nix
./services/desktops/zeitgeist.nix
./services/development/bloop.nix
./services/development/hoogle.nix
./services/editors/emacs.nix
@ -279,6 +281,7 @@
./services/hardware/upower.nix
./services/hardware/usbmuxd.nix
./services/hardware/thermald.nix
./services/hardware/undervolt.nix
./services/logging/SystemdJournal2Gelf.nix
./services/logging/awstats.nix
./services/logging/fluentd.nix
@ -406,6 +409,7 @@
./services/monitoring/cadvisor.nix
./services/monitoring/collectd.nix
./services/monitoring/das_watchdog.nix
./services/monitoring/datadog-agent.nix
./services/monitoring/dd-agent/dd-agent.nix
./services/monitoring/fusion-inventory.nix
./services/monitoring/grafana.nix
@ -543,6 +547,7 @@
./services/networking/ntopng.nix
./services/networking/ntpd.nix
./services/networking/nylon.nix
./services/networking/ocserv.nix
./services/networking/oidentd.nix
./services/networking/openfire.nix
./services/networking/openntpd.nix
@ -621,6 +626,8 @@
./services/search/hound.nix
./services/search/kibana.nix
./services/search/solr.nix
./services/security/certmgr.nix
./services/security/cfssl.nix
./services/security/clamav.nix
./services/security/fail2ban.nix
./services/security/fprintd.nix

View File

@ -33,7 +33,7 @@
# USB support, especially for booting from USB CD-ROM
# drives.
"usb_storage"
"uas"
# Firewire support. Not tested.
"ohci1394" "sbp2"

View File

@ -31,7 +31,8 @@ with lib;
#services.rogue.enable = true;
# Disable some other stuff we don't need.
security.sudo.enable = false;
security.sudo.enable = mkDefault false;
services.udisks2.enable = mkDefault false;
# Automatically log in at the virtual consoles.
services.mingetty.autologinUser = "root";
@ -86,5 +87,9 @@ with lib;
networking.firewall.logRefusedConnections = mkDefault false;
environment.systemPackages = [ pkgs.vim ];
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
};
}

View File

@ -3,7 +3,30 @@
with lib;
let
cfg = config.programs.zsh.ohMyZsh;
mkLinkFarmEntry = name: dir:
let
env = pkgs.buildEnv {
name = "zsh-${name}-env";
paths = cfg.customPkgs;
pathsToLink = "/share/zsh/${dir}";
};
in
{ inherit name; path = "${env}/share/zsh/${dir}"; };
mkLinkFarmEntry' = name: mkLinkFarmEntry name name;
custom =
if cfg.custom != null then cfg.custom
else if length cfg.customPkgs == 0 then null
else pkgs.linkFarm "oh-my-zsh-custom" [
(mkLinkFarmEntry' "themes")
(mkLinkFarmEntry "completions" "site-functions")
(mkLinkFarmEntry' "plugins")
];
in
{
options = {
@ -34,10 +57,19 @@ in
};
custom = mkOption {
default = "";
type = types.str;
default = null;
type = with types; nullOr str;
description = ''
Path to a custom oh-my-zsh package to override config of oh-my-zsh.
(Can't be used along with `customPkgs`).
'';
};
customPkgs = mkOption {
default = [];
type = types.listOf types.package;
description = ''
List of custom packages that should be loaded into `oh-my-zsh`.
'';
};
@ -67,7 +99,7 @@ in
environment.systemPackages = [ cfg.package ];
programs.zsh.interactiveShellInit = with builtins; ''
programs.zsh.interactiveShellInit = ''
# oh-my-zsh configuration generated by NixOS
export ZSH=${cfg.package}/share/oh-my-zsh
@ -75,8 +107,8 @@ in
"plugins=(${concatStringsSep " " cfg.plugins})"
}
${optionalString (stringLength(cfg.custom) > 0)
"ZSH_CUSTOM=\"${cfg.custom}\""
${optionalString (custom != null)
"ZSH_CUSTOM=\"${custom}\""
}
${optionalString (stringLength(cfg.theme) > 0)
@ -92,5 +124,15 @@ in
source $ZSH/oh-my-zsh.sh
'';
assertions = [
{
assertion = cfg.custom != null -> cfg.customPkgs == [];
message = "If `cfg.custom` is set for `ZSH_CUSTOM`, `customPkgs` can't be used!";
}
];
};
meta.doc = ./oh-my-zsh.xml;
}

View File

@ -0,0 +1,125 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-programs-zsh-ohmyzsh">
<title>Oh my ZSH</title>
<para><literal><link xlink:href="https://ohmyz.sh/">oh-my-zsh</link></literal> is a framework
to manage your <link xlink:href="https://www.zsh.org/">ZSH</link> configuration
including completion scripts for several CLI tools and custom prompt themes.</para>
<section><title>Basic usage</title>
<para>The module uses the <literal>oh-my-zsh</literal> package with all available features. The
initial setup using Nix expressions is fairly similar to the configuration format
of <literal>oh-my-zsh</literal>.
<programlisting>
{
programs.zsh.ohMyZsh = {
enable = true;
plugins = [ "git" "python" "man" ];
theme = "agnoster";
};
}
</programlisting>
For a detailed explanation of these arguments please refer to the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki"><literal>oh-my-zsh</literal> docs</link>.
</para>
<para>The expression generates the needed
configuration and writes it into your <literal>/etc/zshrc</literal>.
</para></section>
<section><title>Custom additions</title>
<para>Sometimes third-party or custom scripts such as a modified theme may be needed.
<literal>oh-my-zsh</literal> provides the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-internals"><literal>ZSH_CUSTOM</literal></link>
environment variable for this which points to a directory with additional scripts.</para>
<para>The module can do this as well:
<programlisting>
{
programs.zsh.ohMyZsh.custom = "~/path/to/custom/scripts";
}
</programlisting>
</para></section>
<section><title>Custom environments</title>
<para>There are several extensions for <literal>oh-my-zsh</literal> packaged in <literal>nixpkgs</literal>.
One of them is <link xlink:href="https://github.com/spwhitt/nix-zsh-completions">nix-zsh-completions</link>
which bundles completion scripts and a plugin for <literal>oh-my-zsh</literal>.</para>
<para>Rather than using a single mutable path for <literal>ZSH_CUSTOM</literal>, it's also possible to
generate this path from a list of Nix packages:
<programlisting>
{ pkgs, ... }:
{
programs.zsh.ohMyZsh.customPkgs = with pkgs; [
  nix-zsh-completions
# and even more...
];
}
</programlisting>
Internally a single store path will be created using <literal>buildEnv</literal>.
Please refer to the docs of
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-building-environment"><literal>buildEnv</literal></link>
for further reference.</para>
<para><emphasis>Please keep in mind that this is not compatible with <literal>programs.zsh.ohMyZsh.custom</literal>
as it requires an immutable store path while <literal>custom</literal> shall remain mutable! An evaluation failure
will be thrown if both <literal>custom</literal> and <literal>customPkgs</literal> are set.</emphasis>
</para></section>
<section><title>Package your own customizations</title>
<para>If third-party customizations (e.g. new themes) are supposed to be added to <literal>oh-my-zsh</literal>
there are several pitfalls to keep in mind:</para>
<itemizedlist>
<listitem>
<para>To comply with the default structure of <literal>ZSH</literal> the entire output needs to be written to
<literal>$out/share/zsh</literal>.</para>
</listitem>
<listitem>
<para>Completion scripts are supposed to be stored at <literal>$out/share/zsh/site-functions</literal>. This directory
is part of the <literal><link xlink:href="http://zsh.sourceforge.net/Doc/Release/Functions.html">fpath</link></literal>
and the package should be compatible with pure <literal>ZSH</literal> setups. The module will automatically link
the contents of <literal>site-functions</literal> to the completions directory in the proper store path.</para>
</listitem>
<listitem>
<para>The <literal>plugins</literal> directory needs the structure <literal>pluginname/pluginname.plugin.zsh</literal>
as structured in the <link xlink:href="https://github.com/robbyrussell/oh-my-zsh/tree/91b771914bc7c43dd7c7a43b586c5de2c225ceb7/plugins">upstream repo.</link>
</para>
</listitem>
</itemizedlist>
<para>
A derivation for <literal>oh-my-zsh</literal> may look like this:
<programlisting>
{ stdenv, fetchFromGitHub }:
stdenv.mkDerivation rec {
name = "exemplary-zsh-customization-${version}";
version = "1.0.0";
src = fetchFromGitHub {
# path to the upstream repository
};
dontBuild = true;
installPhase = ''
mkdir -p $out/share/zsh/site-functions
cp -r themes plugins $out/share/zsh
cp -r completions/. $out/share/zsh/site-functions
'';
}
</programlisting>
</para>
</section>
</chapter>

View File

@ -9,7 +9,6 @@ with lib;
(mkRenamedOptionModule [ "system" "nixos" "stateVersion" ] [ "system" "stateVersion" ])
(mkRenamedOptionModule [ "system" "nixos" "defaultChannel" ] [ "system" "defaultChannel" ])
(mkRenamedOptionModule [ "dysnomia" ] [ "services" "dysnomia" ])
(mkRenamedOptionModule [ "environment" "x11Packages" ] [ "environment" "systemPackages" ])
(mkRenamedOptionModule [ "environment" "enableBashCompletion" ] [ "programs" "bash" "enableCompletion" ])
(mkRenamedOptionModule [ "environment" "nix" ] [ "nix" "package" ])
@ -257,6 +256,7 @@ with lib;
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
(mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.")
(mkRemovedOptionModule [ "boot" "zfs" "enableLegacyCrypto" ] "The corresponding package was removed from nixpkgs.")
# ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])

View File

@ -55,11 +55,11 @@ in {
};
musicDirectory = mkOption {
type = types.path;
type = with types; either path (strMatching "(http|https|nfs|smb)://.+");
default = "${cfg.dataDir}/music";
defaultText = ''''${dataDir}/music'';
description = ''
The directory where mpd reads music from.
The directory or NFS/SMB network share where mpd reads music from.
'';
};

View File

@ -18,6 +18,7 @@ with lib;
s3CredentialsFile = mkOption {
type = with types; nullOr str;
default = null;
description = ''
file containing the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
for an S3-hosted repository, in the format of an EnvironmentFile

View File

@ -838,6 +838,8 @@ in {
path = with pkgs; [ gitMinimal openssh docker utillinux iproute ethtool thin-provisioning-tools iptables socat ] ++ cfg.path;
serviceConfig = {
Slice = "kubernetes.slice";
CPUAccounting = true;
MemoryAccounting = true;
ExecStart = ''${cfg.package}/bin/kubelet \
${optionalString (taints != "")
"--register-with-taints=${taints}"} \

View File

@ -4,445 +4,288 @@ with lib;
let
cfg = config.services.cassandra;
cassandraPackage = cfg.package.override {
jre = cfg.jre;
};
cassandraUser = {
name = cfg.user;
home = "/var/lib/cassandra";
description = "Cassandra role user";
};
cassandraRackDcProperties = ''
dc=${cfg.dc}
rack=${cfg.rack}
'';
cassandraConf = ''
cluster_name: ${cfg.clusterName}
num_tokens: 256
auto_bootstrap: ${boolToString cfg.autoBootstrap}
hinted_handoff_enabled: ${boolToString cfg.hintedHandOff}
hinted_handoff_throttle_in_kb: ${builtins.toString cfg.hintedHandOffThrottle}
max_hints_delivery_threads: 2
max_hint_window_in_ms: 10800000 # 3 hours
authenticator: ${cfg.authenticator}
authorizer: ${cfg.authorizer}
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
${builtins.concatStringsSep "\n" (map (v: " - "+v) cfg.dataDirs)}
commitlog_directory: ${cfg.commitLogDirectory}
disk_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
saved_caches_directory: ${cfg.savedCachesDirectory}
commitlog_sync: ${cfg.commitLogSync}
commitlog_sync_period_in_ms: ${builtins.toString cfg.commitLogSyncPeriod}
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "${builtins.concatStringsSep "," cfg.seeds}"
concurrent_reads: ${builtins.toString cfg.concurrentReads}
concurrent_writes: ${builtins.toString cfg.concurrentWrites}
memtable_flush_queue_size: 4
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: ${cfg.listenAddress}
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: ${cfg.rpcAddress}
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: ${boolToString cfg.incrementalBackups}
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 10000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: ${cfg.snitch}
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
internode_encryption: ${cfg.internodeEncryption}
keystore: ${cfg.keyStorePath}
keystore_password: ${cfg.keyStorePassword}
truststore: ${cfg.trustStorePath}
truststore_password: ${cfg.trustStorePassword}
client_encryption_options:
enabled: ${boolToString cfg.clientEncryption}
keystore: ${cfg.keyStorePath}
keystore_password: ${cfg.keyStorePassword}
internode_compression: all
inter_dc_tcp_nodelay: false
preheat_kernel_page_cache: false
streaming_socket_timeout_in_ms: ${toString cfg.streamingSocketTimoutInMS}
'';
cassandraLog = ''
log4j.rootLogger=${cfg.logLevel},stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] %d{HH:mm:ss,SSS} %m%n
'';
cassandraConfFile = pkgs.writeText "cassandra.yaml" cassandraConf;
cassandraLogFile = pkgs.writeText "log4j-server.properties" cassandraLog;
cassandraRackFile = pkgs.writeText "cassandra-rackdc.properties" cassandraRackDcProperties;
cassandraEnvironment = {
CASSANDRA_HOME = cassandraPackage;
JAVA_HOME = cfg.jre;
CASSANDRA_CONF = "/etc/cassandra";
};
defaultUser = "cassandra";
cassandraConfig = flip recursiveUpdate cfg.extraConfig
({ commitlog_sync = "batch";
commitlog_sync_batch_window_in_ms = 2;
partitioner = "org.apache.cassandra.dht.Murmur3Partitioner";
endpoint_snitch = "SimpleSnitch";
seed_provider =
[{ class_name = "org.apache.cassandra.locator.SimpleSeedProvider";
parameters = [ { seeds = "127.0.0.1"; } ];
}];
data_file_directories = [ "${cfg.homeDir}/data" ];
commitlog_directory = "${cfg.homeDir}/commitlog";
saved_caches_directory = "${cfg.homeDir}/saved_caches";
} // (if builtins.compareVersions cfg.package.version "3" >= 0
then { hints_directory = "${cfg.homeDir}/hints"; }
else {})
);
cassandraConfigWithAddresses = cassandraConfig //
( if isNull cfg.listenAddress
then { listen_interface = cfg.listenInterface; }
else { listen_address = cfg.listenAddress; }
) // (
if isNull cfg.rpcAddress
then { rpc_interface = cfg.rpcInterface; }
else { rpc_address = cfg.rpcAddress; }
);
cassandraEtc = pkgs.stdenv.mkDerivation
{ name = "cassandra-etc";
cassandraYaml = builtins.toJSON cassandraConfigWithAddresses;
cassandraEnvPkg = "${cfg.package}/conf/cassandra-env.sh";
buildCommand = ''
mkdir -p "$out"
echo "$cassandraYaml" > "$out/cassandra.yaml"
ln -s "$cassandraEnvPkg" "$out/cassandra-env.sh"
'';
};
in {
###### interface
options.services.cassandra = {
enable = mkOption {
description = "Whether to enable cassandra.";
default = false;
type = types.bool;
};
package = mkOption {
description = "Cassandra package to use.";
default = pkgs.cassandra;
defaultText = "pkgs.cassandra";
type = types.package;
};
jre = mkOption {
description = "JRE package to run cassandra service.";
default = pkgs.jre;
defaultText = "pkgs.jre";
type = types.package;
};
enable = mkEnableOption ''
Apache Cassandra, a scalable and highly available database.
'';
user = mkOption {
description = "User that runs cassandra service.";
default = "cassandra";
type = types.string;
type = types.str;
default = defaultUser;
description = "Run Apache Cassandra under this user.";
};
group = mkOption {
description = "Group that runs cassandra service.";
default = "cassandra";
type = types.string;
};
envFile = mkOption {
description = "path to cassandra-env.sh";
default = "${cassandraPackage}/conf/cassandra-env.sh";
defaultText = "\${cassandraPackage}/conf/cassandra-env.sh";
type = types.path;
};
clusterName = mkOption {
description = "set cluster name";
default = "cassandra";
example = "prod-cluster0";
type = types.string;
};
commitLogDirectory = mkOption {
description = "directory for commit logs";
default = "/var/lib/cassandra/commit_log";
type = types.string;
};
savedCachesDirectory = mkOption {
description = "directory for saved caches";
default = "/var/lib/cassandra/saved_caches";
type = types.string;
};
hintedHandOff = mkOption {
description = "enable hinted handoff";
default = true;
type = types.bool;
};
hintedHandOffThrottle = mkOption {
description = "hinted hand off throttle rate in kb";
default = 1024;
type = types.int;
};
commitLogSync = mkOption {
description = "commitlog sync method";
default = "periodic";
type = types.str;
example = "batch";
default = defaultUser;
description = "Run Apache Cassandra under this group.";
};
commitLogSyncPeriod = mkOption {
description = "commitlog sync period in ms ";
default = 10000;
type = types.int;
};
envScript = mkOption {
default = "${cassandraPackage}/conf/cassandra-env.sh";
defaultText = "\${cassandraPackage}/conf/cassandra-env.sh";
homeDir = mkOption {
type = types.path;
description = "Supply your own cassandra-env.sh rather than using the default";
default = "/var/lib/cassandra";
description = ''
Home directory for Apache Cassandra.
'';
};
extraParams = mkOption {
description = "add additional lines to cassandra-env.sh";
package = mkOption {
type = types.package;
default = pkgs.cassandra;
defaultText = "pkgs.cassandra";
example = literalExample "pkgs.cassandra_3_11";
description = ''
The Apache Cassandra package to use.
'';
};
jvmOpts = mkOption {
type = types.listOf types.str;
default = [];
example = [''JVM_OPTS="$JVM_OPTS -Dcassandra.available_processors=1"''];
type = types.listOf types.str;
};
dataDirs = mkOption {
type = types.listOf types.path;
default = [ "/var/lib/cassandra/data" ];
description = "Data directories for cassandra";
};
logLevel = mkOption {
type = types.str;
default = "INFO";
description = "default logging level for log4j";
};
internodeEncryption = mkOption {
description = "enable internode encryption";
default = "none";
example = "all";
type = types.str;
};
clientEncryption = mkOption {
description = "enable client encryption";
default = false;
type = types.bool;
};
trustStorePath = mkOption {
description = "path to truststore";
default = ".conf/truststore";
type = types.str;
};
keyStorePath = mkOption {
description = "path to keystore";
default = ".conf/keystore";
type = types.str;
};
keyStorePassword = mkOption {
description = "password to keystore";
default = "cassandra";
type = types.str;
};
trustStorePassword = mkOption {
description = "password to truststore";
default = "cassandra";
type = types.str;
};
seeds = mkOption {
description = "password to truststore";
default = [ "127.0.0.1" ];
type = types.listOf types.str;
};
concurrentWrites = mkOption {
description = "number of concurrent writes allowed";
default = 32;
type = types.int;
};
concurrentReads = mkOption {
description = "number of concurrent reads allowed";
default = 32;
type = types.int;
description = ''
Populate the JVM_OPTS environment variable.
'';
};
listenAddress = mkOption {
description = "listen address";
default = "localhost";
type = types.str;
type = types.nullOr types.str;
default = "127.0.0.1";
example = literalExample "null";
description = ''
Address or interface to bind to and tell other Cassandra nodes
to connect to. You _must_ change this if you want multiple
nodes to be able to communicate!
Set listenAddress OR listenInterface, not both.
Leaving it blank leaves it up to
InetAddress.getLocalHost(). This will always do the Right
Thing _if_ the node is properly configured (hostname, name
resolution, etc), and the Right Thing is to use the address
associated with the hostname (it might not be).
Setting listen_address to 0.0.0.0 is always wrong.
'';
};
listenInterface = mkOption {
type = types.nullOr types.str;
default = null;
example = "eth1";
description = ''
Set listenAddress OR listenInterface, not both. Interfaces
must correspond to a single address, IP aliasing is not
supported.
'';
};
rpcAddress = mkOption {
description = "rpc listener address";
default = "localhost";
type = types.str;
};
incrementalBackups = mkOption {
description = "enable incremental backups";
default = false;
type = types.bool;
};
snitch = mkOption {
description = "snitch to use for topology discovery";
default = "GossipingPropertyFileSnitch";
example = "Ec2Snitch";
type = types.str;
};
dc = mkOption {
description = "datacenter for use in topology configuration";
default = "DC1";
example = "DC1";
type = types.str;
};
rack = mkOption {
description = "rack for use in topology configuration";
default = "RAC1";
example = "RAC1";
type = types.str;
};
authorizer = mkOption {
description = "
Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
";
default = "AllowAllAuthorizer";
example = "CassandraAuthorizer";
type = types.str;
};
authenticator = mkOption {
description = "
Authentication backend, implementing IAuthenticator; used to identify users
";
default = "AllowAllAuthenticator";
example = "PasswordAuthenticator";
type = types.str;
};
autoBootstrap = mkOption {
description = "It makes new (non-seed) nodes automatically migrate the right data to themselves.";
default = true;
type = types.bool;
};
streamingSocketTimoutInMS = mkOption {
description = "Enable or disable socket timeout for streaming operations";
default = 3600000; #CASSANDRA-8611
example = 120;
type = types.int;
};
repairStartAt = mkOption {
default = "Sun";
type = types.string;
type = types.nullOr types.str;
default = "127.0.0.1";
example = literalExample "null";
description = ''
Defines realtime (i.e. wallclock) timers with calendar event
expressions. For more details re: systemd OnCalendar at
https://www.freedesktop.org/software/systemd/man/systemd.time.html#Displaying%20Time%20Spans
'';
example = ["weekly" "daily" "08:05:40" "mon,fri *-1/2-1,3 *:30:45"];
};
repairRandomizedDelayInSec = mkOption {
default = 0;
type = types.int;
description = ''Delay the timer by a randomly selected, evenly distributed
amount of time between 0 and the specified time value. See the systemd timer
option RandomizedDelaySec for more details.
The address or interface to bind the native transport server to.
Set rpcAddress OR rpcInterface, not both.
Leaving rpcAddress blank has the same effect as on
listenAddress (i.e. it will be based on the configured hostname
of the node).
Note that unlike listenAddress, you can specify 0.0.0.0, but you
must also set extraConfig.broadcast_rpc_address to a value other
than 0.0.0.0.
For security reasons, you should not expose this port to the
internet. Firewall it if needed.
'';
};
repairPostStop = mkOption {
rpcInterface = mkOption {
type = types.nullOr types.str;
default = null;
type = types.nullOr types.string;
example = "eth1";
description = ''
Run a script when repair is over. One can use it to send statsd events, email, etc.
Set rpcAddress OR rpcInterface, not both. Interfaces must
correspond to a single address, IP aliasing is not supported.
'';
};
repairPostStart = mkOption {
default = null;
type = types.nullOr types.string;
extraConfig = mkOption {
type = types.attrs;
default = {};
example =
{ commitlog_sync_batch_window_in_ms = 3;
};
description = ''
Run a script when repair starts. One can use it to send statsd events, email, etc.
It has the same semantics as systemd ExecStopPost; so, if it fails, the unit is considered
failed.
Extra options to be merged into cassandra.yaml as nix attribute set.
'';
};
fullRepairInterval = mkOption {
type = types.nullOr types.str;
default = "3w";
example = literalExample "null";
description = ''
Set the interval at which full repairs are run, i.e.
`nodetool repair --full` is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
for more information.
Set to `null` to disable full repairs.
'';
};
fullRepairOptions = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--partitioner-range" ];
description = ''
Options passed through to the full repair command.
'';
};
incrementalRepairInterval = mkOption {
type = types.nullOr types.str;
default = "3d";
example = literalExample "null";
description = ''
Set the interval at which incremental repairs are run, i.e.
`nodetool repair` is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
for more information.
Set to `null` to disable incremental repairs.
'';
};
incrementalRepairOptions = mkOption {
type = types.listOf types.string;
default = [];
example = [ "--partitioner-range" ];
description = ''
Options passed through to the incremental repair command.
'';
};
};
###### implementation
config = mkIf cfg.enable {
environment.etc."cassandra/cassandra-rackdc.properties" = {
source = cassandraRackFile;
};
environment.etc."cassandra/cassandra.yaml" = {
source = cassandraConfFile;
};
environment.etc."cassandra/log4j-server.properties" = {
source = cassandraLogFile;
};
environment.etc."cassandra/cassandra-env.sh" = {
text = ''
${builtins.readFile cfg.envFile}
${concatStringsSep "\n" cfg.extraParams}
'';
};
systemd.services.cassandra = {
description = "Cassandra Daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = cassandraEnvironment;
restartTriggers = [ cassandraConfFile cassandraLogFile cassandraRackFile ];
serviceConfig = {
User = cfg.user;
PermissionsStartOnly = true;
LimitAS = "infinity";
LimitNOFILE = "100000";
LimitNPROC = "32768";
LimitMEMLOCK = "infinity";
};
script = ''
${cassandraPackage}/bin/cassandra -f
'';
path = [
cfg.jre
cassandraPackage
pkgs.coreutils
assertions =
[ { assertion =
((isNull cfg.listenAddress)
|| (isNull cfg.listenInterface)
) && !((isNull cfg.listenAddress)
&& (isNull cfg.listenInterface)
);
message = "You have to set either listenAddress or listenInterface";
}
{ assertion =
((isNull cfg.rpcAddress)
|| (isNull cfg.rpcInterface)
) && !((isNull cfg.rpcAddress)
&& (isNull cfg.rpcInterface)
);
message = "You have to set either rpcAddress or rpcInterface";
}
];
preStart = ''
mkdir -m 0700 -p /etc/cassandra/triggers
mkdir -m 0700 -p /var/lib/cassandra /var/log/cassandra
chown ${cfg.user} /var/lib/cassandra /var/log/cassandra /etc/cassandra/triggers
'';
postStart = ''
sleep 2
while ! nodetool status >/dev/null 2>&1; do
sleep 2
done
nodetool status
'';
users = mkIf (cfg.user == defaultUser) {
extraUsers."${defaultUser}" =
{ group = cfg.group;
home = cfg.homeDir;
createHome = true;
uid = config.ids.uids.cassandra;
description = "Cassandra service user";
};
extraGroups."${defaultUser}".gid = config.ids.gids.cassandra;
};
environment.systemPackages = [ cassandraPackage ];
networking.firewall.allowedTCPPorts = [
7000
7001
9042
9160
];
users.users.cassandra =
if config.ids.uids ? "cassandra"
then { uid = config.ids.uids.cassandra; } // cassandraUser
else cassandraUser ;
boot.kernel.sysctl."vm.swappiness" = pkgs.lib.mkOptionDefault 0;
systemd.timers."cassandra-repair" = {
timerConfig = {
OnCalendar = "${toString cfg.repairStartAt}";
RandomizedDelaySec = cfg.repairRandomizedDelayInSec;
systemd.services.cassandra =
{ description = "Apache Cassandra service";
after = [ "network.target" ];
environment =
{ CASSANDRA_CONF = "${cassandraEtc}";
JVM_OPTS = builtins.concatStringsSep " " cfg.jvmOpts;
};
wantedBy = [ "multi-user.target" ];
serviceConfig =
{ User = cfg.user;
Group = cfg.group;
ExecStart = "${cfg.package}/bin/cassandra -f";
SuccessExitStatus = 143;
};
};
};
systemd.services."cassandra-repair" = {
description = "Cassandra repair daemon";
environment = cassandraEnvironment;
script = "${cassandraPackage}/bin/nodetool repair -pr";
postStop = mkIf (cfg.repairPostStop != null) cfg.repairPostStop;
postStart = mkIf (cfg.repairPostStart != null) cfg.repairPostStart;
serviceConfig = {
User = cfg.user;
systemd.services.cassandra-full-repair =
{ description = "Perform a full repair on this Cassandra node";
after = [ "cassandra.service" ];
requires = [ "cassandra.service" ];
serviceConfig =
{ User = cfg.user;
Group = cfg.group;
ExecStart =
lib.concatStringsSep " "
([ "${cfg.package}/bin/nodetool" "repair" "--full"
] ++ cfg.fullRepairOptions);
};
};
systemd.timers.cassandra-full-repair =
mkIf (!isNull cfg.fullRepairInterval) {
description = "Schedule full repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =
{ OnBootSec = cfg.fullRepairInterval;
OnUnitActiveSec = cfg.fullRepairInterval;
Persistent = true;
};
};
systemd.services.cassandra-incremental-repair =
{ description = "Perform an incremental repair on this cassandra node.";
after = [ "cassandra.service" ];
requires = [ "cassandra.service" ];
serviceConfig =
{ User = cfg.user;
Group = cfg.group;
ExecStart =
lib.concatStringsSep " "
([ "${cfg.package}/bin/nodetool" "repair"
] ++ cfg.incrementalRepairOptions);
};
};
systemd.timers.cassandra-incremental-repair =
mkIf (!isNull cfg.incrementalRepairInterval) {
description = "Schedule incremental repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =
{ OnBootSec = cfg.incrementalRepairInterval;
OnUnitActiveSec = cfg.incrementalRepairInterval;
Persistent = true;
};
};
};
};
}
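For reference, a minimal sketch of how the options above might be used. It assumes the module is exposed under `services.cassandra` with an `enable` option, which this hunk references via `cfg.enable` but does not show directly.

```
{
  # Illustrative only; values are placeholders, not recommendations.
  services.cassandra = {
    enable = true;
    listenAddress = "192.168.1.10";      # set listenAddress OR listenInterface, not both
    seeds = [ "192.168.1.10" ];
    fullRepairInterval = "2w";           # runs `nodetool repair --full` on a timer
    incrementalRepairInterval = null;    # null disables incremental repairs
  };
}
```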

View File

@ -320,9 +320,6 @@ in
};
config = mkIf cfg.enable {
meta.doc = ./foundationdb.xml;
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
environment.systemPackages = [ pkg ];
users.users = optionalAttrs (cfg.user == "foundationdb") (singleton
@ -413,4 +410,7 @@ in
'';
};
};
meta.doc = ./foundationdb.xml;
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
}

View File

@ -2,7 +2,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-foundationdb">
xml:id="module-services-foundationdb">
<title>FoundationDB</title>
@ -12,12 +12,10 @@
<para><emphasis>Maintainer:</emphasis> Austin Seipp</para>
<para><emphasis>Available version(s):</emphasis> 5.1.x</para>
<para><emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x</para>
<para>FoundationDB (or "FDB") is a distributed, open source, high performance,
transactional key-value store. It can store petabytes of data and deliver
exceptional performance while maintaining consistency and ACID semantics
(serializable transactions) over a large cluster.</para>
<para>FoundationDB (or "FDB") is an open source, distributed, transactional
key-value store.</para>
<section><title>Configuring and basic setup</title>
@ -26,12 +24,12 @@ exceptional performance while maintaining consistency and ACID semantics
<programlisting>
services.foundationdb.enable = true;
services.foundationdb.package = pkgs.foundationdb51; # FoundationDB 5.1.x
services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
</programlisting>
</para>
<para>The <option>services.foundationdb.package</option> option is required,
and must always be specified. Because FoundationDB network protocols and
and must always be specified. Due to the fact that FoundationDB network protocols and
on-disk storage formats may change between (major) versions, and upgrades must
be explicitly handled by the user, you must always manually specify this
yourself so that the NixOS module will use the proper version. Note that minor,
@ -70,6 +68,40 @@ fdb>
</programlisting>
</para>
<para>You can also write programs using the available client libraries.
For example, the following Python program can be run to grab the
cluster status. (This example uses
<command>nix-shell</command> shebang support to automatically supply the
necessary Python modules.)
<programlisting>
a@link> cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb52
import fdb
import json
def main():
fdb.api_version(520)
db = fdb.open()
@fdb.transactional
def get_status(tr):
return str(tr['\xff\xff/status/json'])
obj = json.loads(get_status(db))
print('FoundationDB available: %s' % obj['client']['database_status']['available'])
if __name__ == "__main__":
main()
a@link> chmod +x fdb-status.py
a@link> ./fdb-status.py
FoundationDB available: True
a@link>
</programlisting>
</para>
<para>FoundationDB is run under the <command>foundationdb</command> user and
group by default, but this may be changed in the NixOS configuration. The
systemd unit <command>foundationdb.service</command> controls the
@ -295,7 +327,6 @@ only undergone fairly basic testing of all the available functionality.</para>
individual <command>fdbserver</command> processes. Currently, all server
processes inherit all the global <command>fdbmonitor</command> settings.
</para></listitem>
<listitem><para>Python bindings are not currently installed.</para></listitem>
<listitem><para>Ruby bindings are not currently installed.</para></listitem>
<listitem><para>Go bindings are not currently installed.</para></listitem>
</itemizedlist>
@ -306,8 +337,9 @@ only undergone fairly basic testing of all the available functionality.</para>
<para>NixOS's FoundationDB module allows you to configure all of the most
relevant configuration options for <command>fdbmonitor</command>, matching it
quite closely. For a complete list of all options, check <command>man
configuration.nix</command>.</para>
quite closely. A complete list of options for the FoundationDB module may be
found <link linkend="opt-services.foundationdb.enable">here</link>. You should
also read the FoundationDB documentation.</para>
</section>

View File

@ -32,15 +32,21 @@ with lib;
environment.systemPackages = [ pkgs.accountsservice ];
# Accounts daemon looks for dbus interfaces in $XDG_DATA_DIRS/accountsservice
environment.pathsToLink = [ "/share/accountsservice" ];
services.dbus.packages = [ pkgs.accountsservice ];
systemd.packages = [ pkgs.accountsservice ];
systemd.services.accounts-daemon= {
systemd.services.accounts-daemon = {
wantedBy = [ "graphical.target" ];
} // (mkIf (!config.users.mutableUsers) {
# Accounts daemon looks for dbus interfaces in $XDG_DATA_DIRS/accountsservice
environment.XDG_DATA_DIRS = "${config.system.path}/share";
} // (optionalAttrs (!config.users.mutableUsers) {
environment.NIXOS_USERS_PURE = "true";
});
};

View File

@ -4,6 +4,10 @@
with lib;
let
# the demo agent isn't built by default, but we need it here
package = pkgs.geoclue2.override { withDemoAgent = config.services.geoclue2.enableDemoAgent; };
in
{
###### interface
@ -21,21 +25,42 @@ with lib;
'';
};
enableDemoAgent = mkOption {
type = types.bool;
default = true;
description = ''
Whether to use the GeoClue demo agent. This should be
overridden by desktop environments that provide their own
agent.
'';
};
};
};
###### implementation
config = mkIf config.services.geoclue2.enable {
environment.systemPackages = [ pkgs.geoclue2 ];
environment.systemPackages = [ package ];
services.dbus.packages = [ pkgs.geoclue2 ];
systemd.packages = [ pkgs.geoclue2 ];
services.dbus.packages = [ package ];
systemd.packages = [ package ];
# this needs to run as a user service, since it's associated with the
# user who is making the requests
systemd.user.services = mkIf config.services.geoclue2.enableDemoAgent {
"geoclue-agent" = {
description = "Geoclue agent";
script = "${package}/libexec/geoclue-2.0/demos/agent";
# this should really be `partOf = [ "geoclue.service" ]`, but
# we can't be part of a system service, and the agent should
# be okay with the main service coming and going
wantedBy = [ "default.target" ];
};
};
};
}

View File

@ -0,0 +1,26 @@
# Zeitgeist
{ config, lib, pkgs, ... }:
with lib;
{
###### interface
options = {
services.zeitgeist = {
enable = mkEnableOption "zeitgeist";
};
};
###### implementation
config = mkIf config.services.zeitgeist.enable {
environment.systemPackages = [ pkgs.zeitgeist ];
services.dbus.packages = [ pkgs.zeitgeist ];
systemd.packages = [ pkgs.zeitgeist ];
};
}

View File

@ -43,6 +43,12 @@ in {
defaultText = "pkgs.haskellPackages";
};
home = mkOption {
type = types.str;
description = "Url for hoogle logo";
default = "https://hoogle.haskell.org";
};
};
config = mkIf cfg.enable {
@ -53,7 +59,7 @@ in {
serviceConfig = {
Restart = "always";
ExecStart = ''${hoogleEnv}/bin/hoogle server --local -p ${toString cfg.port}'';
ExecStart = ''${hoogleEnv}/bin/hoogle server --local --port ${toString cfg.port} --home ${cfg.home}'';
User = "nobody";
Group = "nogroup";

View File

@ -10,8 +10,8 @@ in {
package = mkOption {
type = types.package;
default = pkgs.libinfinity.override { daemon = true; };
defaultText = "pkgs.libinfinity.override { daemon = true; }";
default = pkgs.libinfinity;
defaultText = "pkgs.libinfinity";
description = ''
Package providing infinoted
'';
@ -119,7 +119,7 @@ in {
users.groups = optional (cfg.group == "infinoted")
{ name = "infinoted";
};
systemd.services.infinoted =
{ description = "Gobby Dedicated Server";
@ -129,7 +129,7 @@ in {
serviceConfig = {
Type = "simple";
Restart = "always";
ExecStart = "${cfg.package}/bin/infinoted-${versions.majorMinor cfg.package.version} --config-file=/var/lib/infinoted/infinoted.conf";
ExecStart = "${cfg.package.infinoted} --config-file=/var/lib/infinoted/infinoted.conf";
User = cfg.user;
Group = cfg.group;
PermissionsStartOnly = true;

View File

@ -18,6 +18,16 @@ let
(boolFlag "secure" cfg.secure)
(boolFlag "noupnp" cfg.noUPnP)
];
stopScript = pkgs.writeScript "terraria-stop" ''
#!${pkgs.runtimeShell}
if ! [ -d "/proc/$1" ]; then
exit 0
fi
${getBin pkgs.tmux}/bin/tmux -S /var/lib/terraria/terraria.sock send-keys Enter exit Enter
${getBin pkgs.coreutils}/bin/tail --pid="$1" -f /dev/null
'';
in
{
options = {
@ -124,10 +134,10 @@ in
serviceConfig = {
User = "terraria";
Type = "oneshot";
RemainAfterExit = true;
Type = "forking";
GuessMainPID = true;
ExecStart = "${getBin pkgs.tmux}/bin/tmux -S /var/lib/terraria/terraria.sock new -d ${pkgs.terraria-server}/bin/TerrariaServer ${concatStringsSep " " flags}";
ExecStop = "${getBin pkgs.tmux}/bin/tmux -S /var/lib/terraria/terraria.sock send-keys Enter \"exit\" Enter";
ExecStop = "${stopScript} $MAINPID";
};
postStart = ''

View File

@ -71,6 +71,13 @@ in {
BlacklistPlugins=${lib.concatStringsSep ";" cfg.blacklistPlugins}
'';
};
"fwupd/uefi.conf" = {
source = pkgs.writeText "uefi.conf" ''
[uefi]
OverrideESPMountPoint=${config.boot.loader.efi.efiSysMountPoint}
'';
};
} // originalEtc // extraTrustedKeys;
services.dbus.packages = [ pkgs.fwupd ];

View File

@ -6,16 +6,30 @@ let
cfg = config.services.thermald;
in {
###### interface
options = {
services.thermald = {
options = {
services.thermald = {
enable = mkOption {
default = false;
description = ''
Whether to enable thermald, the temperature management daemon.
'';
};
};
};
'';
};
debug = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable debug logging.
'';
};
configFile = mkOption {
type = types.nullOr types.path;
default = null;
description = "the thermald manual configuration file.";
};
};
};
###### implementation
config = mkIf cfg.enable {
@ -24,7 +38,15 @@ in {
systemd.services.thermald = {
description = "Thermal Daemon Service";
wantedBy = [ "multi-user.target" ];
script = "exec ${pkgs.thermald}/sbin/thermald --no-daemon --dbus-enable";
serviceConfig = {
ExecStart = ''
${pkgs.thermald}/sbin/thermald \
--no-daemon \
${optionalString cfg.debug "--loglevel=debug"} \
${optionalString (cfg.configFile != null) "--config-file ${cfg.configFile}"} \
--dbus-enable
'';
};
};
};
}
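A brief sketch of the new thermald options in use; the configuration file path is a hypothetical example.

```
{
  services.thermald = {
    enable = true;
    debug = true;                                    # passes --loglevel=debug
    configFile = "/etc/thermald/thermal-conf.xml";   # hypothetical manual config path
  };
}
```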

View File

@ -0,0 +1,134 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.undervolt;
in {
options.services.undervolt = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to undervolt Intel CPUs.
'';
};
verbose = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable verbose logging.
'';
};
package = mkOption {
type = types.package;
default = pkgs.undervolt;
defaultText = "pkgs.undervolt";
description = ''
undervolt derivation to use.
'';
};
coreOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset the CPU cores by. Accepts a floating point number.
'';
};
gpuOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset the GPU by. Accepts a floating point number.
'';
};
uncoreOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset uncore by. Accepts a floating point number.
'';
};
analogioOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset analogio by. Accepts a floating point number.
'';
};
temp = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target. Accepts a floating point number.
'';
};
tempAc = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target on AC power. Accepts a floating point number.
'';
};
tempBat = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target on battery power. Accepts a floating point number.
'';
};
};
config = mkIf cfg.enable {
boot.kernelModules = [ "msr" ];
environment.systemPackages = [ cfg.package ];
systemd.services.undervolt = {
path = [ pkgs.undervolt ];
description = "Intel Undervolting Service";
serviceConfig = {
Type = "oneshot";
Restart = "no";
# `core` and `cache` are both intentionally set to `cfg.coreOffset` as according to the undervolt docs:
#
# Core or Cache offsets have no effect. It is not possible to set different offsets for
# CPU Core and Cache. The CPU will take the smaller of the two offsets, and apply that to
# both CPU and Cache. A warning message will be displayed if you attempt to set different offsets.
ExecStart = ''
${pkgs.undervolt}/bin/undervolt \
${optionalString cfg.verbose "--verbose"} \
${optionalString (cfg.coreOffset != null) "--core ${cfg.coreOffset}"} \
${optionalString (cfg.coreOffset != null) "--cache ${cfg.coreOffset}"} \
${optionalString (cfg.gpuOffset != null) "--gpu ${cfg.gpuOffset}"} \
${optionalString (cfg.uncoreOffset != null) "--uncore ${cfg.uncoreOffset}"} \
${optionalString (cfg.analogioOffset != null) "--analogio ${cfg.analogioOffset}"} \
${optionalString (cfg.temp != null) "--temp ${cfg.temp}"} \
${optionalString (cfg.tempAc != null) "--temp-ac ${cfg.tempAc}"} \
${optionalString (cfg.tempBat != null) "--temp-bat ${cfg.tempBat}"}
'';
};
};
systemd.timers.undervolt = {
description = "Undervolt timer to ensure voltage settings are always applied";
partOf = [ "undervolt.service" ];
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnBootSec = "2min";
OnUnitActiveSec = "30";
};
};
};
}
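An illustrative configuration for the undervolt module; the offsets below are placeholders and must be tuned per machine.

```
{
  services.undervolt = {
    enable = true;
    coreOffset = "-100";   # applied to both --core and --cache, per the comment above
    gpuOffset = "-50";
    tempBat = "85";        # temperature target on battery power
  };
}
```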

View File

@ -85,9 +85,11 @@ in {
after = [ "multi-user.target" ]; # makes sure hostname etc is set
serviceConfig = {
Type = "notify";
PIDFile = pidFile;
StandardOutput = "null";
Restart = "on-failure";
ExecStart = "${cfg.package}/sbin/syslog-ng ${concatStringsSep " " syslogngOptions}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
};
};

View File

@ -47,7 +47,7 @@ in
###### implementation
config = mkIf cfg.enable {
services.dysnomia.enable = true;
dysnomia.enable = true;
environment.systemPackages = [ pkgs.disnix ] ++ optional cfg.useWebServiceInterface pkgs.DisnixWebService;

View File

@ -5,6 +5,43 @@ with lib;
let
cfg = config.services.dockerRegistry;
blobCache = if cfg.enableRedisCache
then "redis"
else "inmemory";
registryConfig = {
version = "0.1";
log.fields.service = "registry";
storage = {
cache.blobdescriptor = blobCache;
filesystem.rootdirectory = cfg.storagePath;
delete.enabled = cfg.enableDelete;
};
http = {
addr = ":${builtins.toString cfg.port}";
headers.X-Content-Type-Options = ["nosniff"];
};
health.storagedriver = {
enabled = true;
interval = "10s";
threshold = 3;
};
};
registryConfig.redis = mkIf cfg.enableRedisCache {
addr = "${cfg.redisUrl}";
password = "${cfg.redisPassword}";
db = 0;
dialtimeout = "10ms";
readtimeout = "10ms";
writetimeout = "10ms";
pool = {
maxidle = 16;
maxactive = 64;
idletimeout = "300s";
};
};
configFile = pkgs.writeText "docker-registry-config.yml" (builtins.toJSON (recursiveUpdate registryConfig cfg.extraConfig));
in {
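As a sketch, the Redis-backed blob cache defined above could be enabled as follows. Option names are taken from the `cfg` references in this hunk; the `enable` option is assumed and not shown here.

```
{
  services.dockerRegistry = {
    enable = true;                              # assumed enable option
    port = 5000;
    storagePath = "/var/lib/docker-registry";
    enableDelete = true;
    enableRedisCache = true;
    redisUrl = "localhost:6379";
    redisPassword = "changeme";                 # placeholder
  };
}
```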

View File

@ -3,7 +3,7 @@
with lib;
let
cfg = config.services.dysnomia;
cfg = config.dysnomia;
printProperties = properties:
concatMapStrings (propertyName:
@ -69,7 +69,7 @@ let
in
{
options = {
services.dysnomia = {
dysnomia = {
enable = mkOption {
type = types.bool;
@ -142,7 +142,7 @@ in
environment.systemPackages = [ cfg.package ];
services.dysnomia.package = pkgs.dysnomia.override (origArgs: {
dysnomia.package = pkgs.dysnomia.override (origArgs: {
enableApacheWebApplication = config.services.httpd.enable;
enableAxis2WebService = config.services.tomcat.axis2.enable;
enableEjabberdDump = config.services.ejabberd.enable;
@ -153,7 +153,7 @@ in
enableMongoDatabase = config.services.mongodb.enable;
});
services.dysnomia.properties = {
dysnomia.properties = {
hostname = config.networking.hostName;
inherit (config.nixpkgs.localSystem) system;
@ -171,7 +171,7 @@ in
}}");
};
services.dysnomia.containers = lib.recursiveUpdate ({
dysnomia.containers = lib.recursiveUpdate ({
process = {};
wrapper = {};
}

View File

@ -560,6 +560,7 @@ in {
mkdir -p ${cfg.statePath}/tmp/sockets
mkdir -p ${cfg.statePath}/shell
mkdir -p ${cfg.statePath}/db
mkdir -p ${cfg.statePath}/uploads
rm -rf ${cfg.statePath}/config ${cfg.statePath}/shell/hooks
mkdir -p ${cfg.statePath}/config
@ -570,6 +571,7 @@ in {
mkdir -p ${cfg.statePath}/log
ln -sf ${cfg.statePath}/log /run/gitlab/log
ln -sf ${cfg.statePath}/tmp /run/gitlab/tmp
ln -sf ${cfg.statePath}/uploads /run/gitlab/uploads
ln -sf $GITLAB_SHELL_CONFIG_PATH /run/gitlab/shell-config.yml
chown -R ${cfg.user}:${cfg.group} /run/gitlab
@ -584,7 +586,9 @@ in {
ln -sf ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
''}
ln -sf ${cfg.statePath}/config /run/gitlab/config
rm ${cfg.statePath}/lib
if [ -e ${cfg.statePath}/lib ]; then
rm ${cfg.statePath}/lib
fi
ln -sf ${pkgs.gitlab}/share/gitlab/lib ${cfg.statePath}/lib
cp ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
@ -608,10 +612,11 @@ in {
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} ${config.services.postgresql.package}/bin/createdb --owner ${cfg.databaseUsername} ${cfg.databaseName}
touch "${cfg.statePath}/db-created"
fi
# enable required pg_trgm extension for gitlab
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} psql ${cfg.databaseName} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"
fi
# enable required pg_trgm extension for gitlab
${pkgs.sudo}/bin/sudo -u ${pgSuperUser} psql ${cfg.databaseName} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"
# Always do the db migrations just to be sure the database is up-to-date
${gitlab-rake}/bin/gitlab-rake db:migrate RAILS_ENV=production

View File

@ -88,7 +88,7 @@ in
};
maxJobs = mkOption {
type = types.int;
type = types.either types.int (types.enum ["auto"]);
default = 1;
example = 64;
description = ''
@ -127,16 +127,16 @@ in
useSandbox = mkOption {
type = types.either types.bool (types.enum ["relaxed"]);
default = false;
default = true;
description = "
If set, Nix will perform builds in a sandboxed environment that it
will set up automatically for each build. This prevents impurities
in builds by disallowing access to dependencies outside of the Nix
store by using network and mount namespaces in a chroot environment.
This isn't enabled by default for possible performance impacts due to
the initial setup time of a sandbox for each build. It doesn't affect
derivation hashes, so changing this option will not trigger a rebuild
of packages.
This is enabled by default even though it has a possible performance
impact due to the initial setup time of a sandbox for each build. It
doesn't affect derivation hashes, so changing this option will not
trigger a rebuild of packages.
";
};
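With this change, sandboxed builds become opt-out rather than opt-in. A sketch of how these two options are typically set, assuming they live under the `nix.` namespace (not shown in this hunk):

```
{
  nix.maxJobs = "auto";    # now accepts the string "auto" in addition to an integer
  nix.useSandbox = true;   # the new default; false or "relaxed" loosens it
}
```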

View File

@ -1,121 +1,124 @@
{ config, lib, pkgs, ... }:
# TODO: support non-postgresql
with lib;
let
cfg = config.services.redmine;
ruby = pkgs.ruby;
bundle = "${pkgs.redmine}/share/redmine/bin/bundle";
databaseYml = ''
databaseYml = pkgs.writeText "database.yml" ''
production:
adapter: postgresql
database: ${cfg.databaseName}
host: ${cfg.databaseHost}
password: ${cfg.databasePassword}
username: ${cfg.databaseUsername}
encoding: utf8
adapter: ${cfg.database.type}
database: ${cfg.database.name}
host: ${cfg.database.host}
port: ${toString cfg.database.port}
username: ${cfg.database.user}
password: #dbpass#
'';
configurationYml = ''
configurationYml = pkgs.writeText "configuration.yml" ''
default:
# Absolute path to the directory where attachments are stored.
# The default is the 'files' directory in your Redmine instance.
# Your Redmine instance needs to have write permission on this
# directory.
# Examples:
# attachments_storage_path: /var/redmine/files
# attachments_storage_path: D:/redmine/files
attachments_storage_path: ${cfg.stateDir}/files
scm_subversion_command: ${pkgs.subversion}/bin/svn
scm_mercurial_command: ${pkgs.mercurial}/bin/hg
scm_git_command: ${pkgs.gitAndTools.git}/bin/git
scm_cvs_command: ${pkgs.cvs}/bin/cvs
scm_bazaar_command: ${pkgs.bazaar}/bin/bzr
scm_darcs_command: ${pkgs.darcs}/bin/darcs
# Absolute path to the SCM commands errors (stderr) log file.
# The default is to log in the 'log' directory of your Redmine instance.
# Example:
# scm_stderr_log_file: /var/log/redmine_scm_stderr.log
scm_stderr_log_file: ${cfg.stateDir}/redmine_scm_stderr.log
${cfg.extraConfig}
${cfg.extraConfig}
'';
unpackTheme = unpack "theme";
unpackPlugin = unpack "plugin";
unpack = id: (name: source:
pkgs.stdenv.mkDerivation {
name = "redmine-${id}-${name}";
buildInputs = [ pkgs.unzip ];
buildCommand = ''
mkdir -p $out
cd $out
unpackFile ${source}
'';
});
in {
in
{
options = {
services.redmine = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable the redmine service.
'';
description = "Enable the Redmine service.";
};
user = mkOption {
type = types.str;
default = "redmine";
description = "User under which Redmine is ran.";
};
group = mkOption {
type = types.str;
default = "redmine";
description = "Group under which Redmine is ran.";
};
stateDir = mkOption {
type = types.str;
default = "/var/redmine";
description = "The state directory, logs and plugins are stored here";
default = "/var/lib/redmine";
description = "The state directory, logs and plugins are stored here.";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Extra configuration in configuration.yml";
description = ''
Extra configuration in configuration.yml.
See https://guides.rubyonrails.org/action_mailer_basics.html#action-mailer-configuration
'';
};
themes = mkOption {
type = types.attrsOf types.path;
default = {};
description = "Set of themes";
};
database = {
type = mkOption {
type = types.enum [ "mysql2" "postgresql" ];
example = "postgresql";
default = "mysql2";
description = "Database engine to use.";
};
plugins = mkOption {
type = types.attrsOf types.path;
default = {};
description = "Set of plugins";
};
host = mkOption {
type = types.str;
default = "127.0.0.1";
description = "Database host address.";
};
#databaseType = mkOption {
# type = types.str;
# default = "postgresql";
# description = "Type of database";
#};
port = mkOption {
type = types.int;
default = 3306;
description = "Database host port.";
};
databaseHost = mkOption {
type = types.str;
default = "127.0.0.1";
description = "Database hostname";
};
name = mkOption {
type = types.str;
default = "redmine";
description = "Database name.";
};
databasePassword = mkOption {
type = types.str;
default = "";
description = "Database user password";
};
user = mkOption {
type = types.str;
default = "redmine";
description = "Database user.";
};
databaseName = mkOption {
type = types.str;
default = "redmine";
description = "Database name";
};
password = mkOption {
type = types.str;
default = "";
description = ''
The password corresponding to <option>database.user</option>.
Warning: this is stored in cleartext in the Nix store!
Use <option>database.passwordFile</option> instead.
'';
};
databaseUsername = mkOption {
type = types.str;
default = "redmine";
description = "Database user";
passwordFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/run/keys/redmine-dbpassword";
description = ''
A file containing the password corresponding to
<option>database.user</option>.
'';
};
};
};
};
@ -123,99 +126,106 @@ in {
config = mkIf cfg.enable {
assertions = [
{ assertion = cfg.databasePassword != "";
message = "services.redmine.databasePassword must be set";
{ assertion = cfg.database.passwordFile != null || cfg.database.password != "";
message = "either services.redmine.database.passwordFile or services.redmine.database.password must be set";
}
];
users.users = [
{ name = "redmine";
group = "redmine";
uid = config.ids.uids.redmine;
} ];
users.groups = [
{ name = "redmine";
gid = config.ids.gids.redmine;
} ];
environment.systemPackages = [ pkgs.redmine ];
systemd.services.redmine = {
after = [ "network.target" "postgresql.service" ];
after = [ "network.target" (if cfg.database.type == "mysql2" then "mysql.service" else "postgresql.service") ];
wantedBy = [ "multi-user.target" ];
environment.RAILS_ENV = "production";
environment.RAILS_ETC = "${cfg.stateDir}/config";
environment.RAILS_LOG = "${cfg.stateDir}/log";
environment.RAILS_VAR = "${cfg.stateDir}/var";
environment.RAILS_CACHE = "${cfg.stateDir}/cache";
environment.RAILS_PLUGINS = "${cfg.stateDir}/plugins";
environment.RAILS_PUBLIC = "${cfg.stateDir}/public";
environment.RAILS_TMP = "${cfg.stateDir}/tmp";
environment.SCHEMA = "${cfg.stateDir}/cache/schema.db";
environment.HOME = "${pkgs.redmine}/share/redmine";
environment.RAILS_ENV = "production";
environment.RAILS_CACHE = "${cfg.stateDir}/cache";
environment.REDMINE_LANG = "en";
environment.GEM_HOME = "${pkgs.redmine}/share/redmine/vendor/bundle/ruby/1.9.1";
environment.GEM_PATH = "${pkgs.bundler}/${pkgs.bundler.ruby.gemPath}";
environment.SCHEMA = "${cfg.stateDir}/cache/schema.db";
path = with pkgs; [
imagemagickBig
subversion
mercurial
cvs
config.services.postgresql.package
bazaar
cvs
darcs
gitAndTools.git
# once we build binaries for darc enable it
#darcs
mercurial
subversion
];
preStart = ''
# TODO: use env vars
for i in plugins public/plugin_assets db files log config cache var/files tmp; do
# start with a fresh config directory every time
rm -rf ${cfg.stateDir}/config
cp -r ${pkgs.redmine}/share/redmine/config.dist ${cfg.stateDir}/config
# create the basic state directory layout pkgs.redmine expects
mkdir -p /run/redmine
for i in config files log plugins tmp; do
mkdir -p ${cfg.stateDir}/$i
ln -fs ${cfg.stateDir}/$i /run/redmine/$i
done
chown -R redmine:redmine ${cfg.stateDir}
chmod -R 755 ${cfg.stateDir}
# ensure cache directory exists for db:migrate command
mkdir -p ${cfg.stateDir}/cache
rm -rf ${cfg.stateDir}/public/*
cp -R ${pkgs.redmine}/share/redmine/public/* ${cfg.stateDir}/public/
for theme in ${concatStringsSep " " (mapAttrsToList unpackTheme cfg.themes)}; do
ln -fs $theme/* ${cfg.stateDir}/public/themes/
done
# link in the application configuration
ln -fs ${configurationYml} ${cfg.stateDir}/config/configuration.yml
rm -rf ${cfg.stateDir}/plugins/*
for plugin in ${concatStringsSep " " (mapAttrsToList unpackPlugin cfg.plugins)}; do
ln -fs $plugin/* ${cfg.stateDir}/plugins/''${plugin##*-redmine-plugin-}
done
chmod -R ug+rwX,o-rwx+x ${cfg.stateDir}/
ln -fs ${pkgs.writeText "database.yml" databaseYml} ${cfg.stateDir}/config/database.yml
ln -fs ${pkgs.writeText "configuration.yml" configurationYml} ${cfg.stateDir}/config/configuration.yml
# handle database.passwordFile
DBPASS=$(head -n1 ${cfg.database.passwordFile})
cp -f ${databaseYml} ${cfg.stateDir}/config/database.yml
sed -e "s,#dbpass#,$DBPASS,g" -i ${cfg.stateDir}/config/database.yml
chmod 440 ${cfg.stateDir}/config/database.yml
if [ "${cfg.databaseHost}" = "127.0.0.1" ]; then
if ! test -e "${cfg.stateDir}/db-created"; then
psql postgres -c "CREATE ROLE redmine WITH LOGIN NOCREATEDB NOCREATEROLE ENCRYPTED PASSWORD '${cfg.databasePassword}'"
${config.services.postgresql.package}/bin/createdb --owner redmine redmine || true
touch "${cfg.stateDir}/db-created"
fi
# generate a secret token if required
if ! test -e "${cfg.stateDir}/config/initializers/secret_token.rb"; then
${bundle} exec rake generate_secret_token
chmod 440 ${cfg.stateDir}/config/initializers/secret_token.rb
fi
cd ${pkgs.redmine}/share/redmine/
${ruby}/bin/rake db:migrate
${ruby}/bin/rake redmine:plugins:migrate
${ruby}/bin/rake redmine:load_default_data
${ruby}/bin/rake generate_secret_token
# ensure everything is owned by ${cfg.user}
chown -R ${cfg.user}:${cfg.group} ${cfg.stateDir}
${bundle} exec rake db:migrate
${bundle} exec rake redmine:load_default_data
'';
serviceConfig = {
PermissionsStartOnly = true; # preStart must be run as root
Type = "simple";
User = "redmine";
Group = "redmine";
User = cfg.user;
Group = cfg.group;
TimeoutSec = "300";
WorkingDirectory = "${pkgs.redmine}/share/redmine";
ExecStart="${ruby}/bin/ruby ${pkgs.redmine}/share/redmine/script/rails server webrick -e production -P ${cfg.stateDir}/redmine.pid";
ExecStart="${bundle} exec rails server webrick -e production -P ${cfg.stateDir}/redmine.pid";
};
};
users.extraUsers = optionalAttrs (cfg.user == "redmine") (singleton
{ name = "redmine";
group = cfg.group;
home = cfg.stateDir;
createHome = true;
uid = config.ids.uids.redmine;
});
users.extraGroups = optionalAttrs (cfg.group == "redmine") (singleton
{ name = "redmine";
gid = config.ids.gids.redmine;
});
warnings = optional (cfg.database.password != "")
''config.services.redmine.database.password will be stored as plaintext
in the Nix store. Use database.passwordFile instead.'';
# Create database passwordFile default when password is configured.
services.redmine.database.passwordFile =
(mkDefault (toString (pkgs.writeTextFile {
name = "redmine-database-password";
text = cfg.database.password;
})));
};
}
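A minimal sketch of the reworked Redmine options, using the new `database` submodule and a password file instead of a cleartext password:

```
{
  services.redmine = {
    enable = true;
    stateDir = "/var/lib/redmine";
    database = {
      type = "postgresql";
      host = "127.0.0.1";
      port = 5432;                                    # module default is 3306 (MySQL)
      name = "redmine";
      user = "redmine";
      passwordFile = "/run/keys/redmine-dbpassword";  # kept out of the Nix store
    };
  };
}
```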

View File

@ -83,20 +83,20 @@ in
config = mkMerge [
(mkIf cfgC.enable {
systemd.services."synergy-client" = {
after = [ "network.target" ];
systemd.user.services."synergy-client" = {
after = [ "network.target" "graphical-session.target" ];
description = "Synergy client";
wantedBy = optional cfgC.autoStart "multi-user.target";
wantedBy = optional cfgC.autoStart "graphical-session.target";
path = [ pkgs.synergy ];
serviceConfig.ExecStart = ''${pkgs.synergy}/bin/synergyc -f ${optionalString (cfgC.screenName != "") "-n ${cfgC.screenName}"} ${cfgC.serverAddress}'';
serviceConfig.Restart = "on-failure";
};
})
(mkIf cfgS.enable {
systemd.services."synergy-server" = {
after = [ "network.target" ];
systemd.user.services."synergy-server" = {
after = [ "network.target" "graphical-session.target" ];
description = "Synergy server";
wantedBy = optional cfgS.autoStart "multi-user.target";
wantedBy = optional cfgS.autoStart "graphical-session.target";
path = [ pkgs.synergy ];
serviceConfig.ExecStart = ''${pkgs.synergy}/bin/synergys -c ${cfgS.configFile} -f ${optionalString (cfgS.address != "") "-a ${cfgS.address}"} ${optionalString (cfgS.screenName != "") "-n ${cfgS.screenName}" }'';
serviceConfig.Restart = "on-failure";

View File

@ -0,0 +1,236 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.datadog-agent;
ddConf = {
dd_url = "https://app.datadoghq.com";
skip_ssl_validation = "no";
api_key = "";
confd_path = "/etc/datadog-agent/conf.d";
additional_checksd = "/etc/datadog-agent/checks.d";
use_dogstatsd = "yes";
}
// optionalAttrs (cfg.logLevel != null) { log_level = cfg.logLevel; }
// optionalAttrs (cfg.hostname != null) { inherit (cfg) hostname; }
// optionalAttrs (cfg.tags != null ) { tags = concatStringsSep ", " cfg.tags; }
// cfg.extraConfig;
# Generate Datadog configuration files for each configured checks.
# This works because check configurations have predictable paths,
# and because JSON is a valid subset of YAML.
makeCheckConfigs = entries: mapAttrsToList (name: conf: {
source = pkgs.writeText "${name}-check-conf.yaml" (builtins.toJSON conf);
target = "datadog-agent/conf.d/${name}.d/conf.yaml";
}) entries;
defaultChecks = {
disk = cfg.diskCheck;
network = cfg.networkCheck;
};
# Assemble all check configurations and the top-level agent
# configuration.
etcfiles = with pkgs; with builtins; [{
source = writeText "datadog.yaml" (toJSON ddConf);
target = "datadog-agent/datadog.yaml";
}] ++ makeCheckConfigs (cfg.checks // defaultChecks);
# Apply the configured extraIntegrations to the provided agent
# package. See the documentation of `dd-agent/integrations-core.nix`
# for detailed information on this.
datadogPkg = cfg.package.overrideAttrs(_: {
python = (pkgs.datadog-integrations-core cfg.extraIntegrations).python;
});
in {
options.services.datadog-agent = {
enable = mkOption {
description = ''
Whether to enable the datadog-agent v6 monitoring service
'';
default = false;
type = types.bool;
};
package = mkOption {
default = pkgs.datadog-agent;
defaultText = "pkgs.datadog-agent";
description = ''
Which DataDog v6 agent package to use. Note that the provided
package is expected to have an overridable `python`-attribute
which configures the Python environment with the Datadog
checks.
'';
type = types.package;
};
apiKeyFile = mkOption {
description = ''
Path to a file containing the Datadog API key to associate the
agent with your account.
'';
example = "/run/keys/datadog_api_key";
type = types.path;
};
tags = mkOption {
description = "The tags to mark this Datadog agent";
example = [ "test" "service" ];
default = null;
type = types.nullOr (types.listOf types.str);
};
hostname = mkOption {
description = "The hostname to show in the Datadog dashboard (optional)";
default = null;
example = "mymachine.mydomain";
type = types.uniq (types.nullOr types.string);
};
logLevel = mkOption {
description = "Logging verbosity.";
default = null;
type = types.nullOr (types.enum ["DEBUG" "INFO" "WARN" "ERROR"]);
};
extraIntegrations = mkOption {
default = {};
type = types.attrs;
description = ''
Extra integrations from the Datadog core-integrations
repository that should be built and included.
By default the included integrations are disk, mongo, network,
nginx and postgres.
To include additional integrations the name of the derivation
and a function to filter its dependencies from the Python
package set must be provided.
'';
example = {
ntp = (pythonPackages: [ pythonPackages.ntplib ]);
};
};
extraConfig = mkOption {
default = {};
type = types.attrs;
description = ''
Extra configuration options that will be merged into the
main config file <filename>datadog.yaml</filename>.
'';
};
checks = mkOption {
description = ''
Configuration for all Datadog checks. Keys of this attribute
set will be used as the name of the check to create the
appropriate configuration in `conf.d/$check.d/conf.yaml`.
The configuration is converted into JSON from the plain Nix
language configuration, meaning that you should write
configuration adhering to Datadog's documentation - but in Nix
language.
Refer to the implementation of this module (specifically the
definition of `defaultChecks`) for an example.
Note: The 'disk' and 'network' check are configured in
separate options because they exist by default. Attempting to
override their configuration here will have no effect.
'';
example = {
http_check = {
init_config = null; # sic!
instances = [
{
name = "some-service";
url = "http://localhost:1337/healthz";
tags = [ "some-service" ];
}
];
};
};
default = {};
# sic! The structure of the values is up to the check, so we can
# not usefully constrain the type further.
type = with types; attrsOf attrs;
};
diskCheck = mkOption {
description = "Disk check config";
type = types.attrs;
default = {
init_config = {};
instances = [ { use-mount = "no"; } ];
};
};
networkCheck = mkOption {
description = "Network check config";
type = types.attrs;
default = {
init_config = {};
# Network check only supports one configured instance
instances = [ { collect_connection_state = false;
excluded_interfaces = [ "lo" "lo0" ]; } ];
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ datadogPkg pkgs.sysstat pkgs.procps ];
users.extraUsers.datadog = {
description = "Datadog Agent User";
uid = config.ids.uids.datadog;
group = "datadog";
home = "/var/log/datadog/";
createHome = true;
};
users.extraGroups.datadog.gid = config.ids.gids.datadog;
systemd.services = let
makeService = attrs: recursiveUpdate {
path = [ datadogPkg pkgs.python pkgs.sysstat pkgs.procps ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
PrivateTmp = true;
};
restartTriggers = [ datadogPkg ] ++ map (etc: etc.source) etcfiles;
} attrs;
in {
datadog-agent = makeService {
description = "Datadog agent monitor";
preStart = ''
chown -R datadog: /etc/datadog-agent
rm -f /etc/datadog-agent/auth_token
'';
script = ''
export DD_API_KEY=$(head -n 1 ${cfg.apiKeyFile})
exec ${datadogPkg}/bin/agent start -c /etc/datadog-agent/datadog.yaml
'';
serviceConfig.PermissionsStartOnly = true;
};
dd-jmxfetch = lib.mkIf (lib.hasAttr "jmx" cfg.checks) (makeService {
description = "Datadog JMX Fetcher";
path = [ datadogPkg pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
serviceConfig.ExecStart = "${datadogPkg}/bin/dd-jmxfetch";
});
};
environment.etc = etcfiles;
};
}

View File

@ -114,13 +114,22 @@ let
in {
options.services.dd-agent = {
enable = mkOption {
description = "Whether to enable the dd-agent montioring service";
description = ''
Whether to enable the dd-agent v5 monitoring service.
For datadog-agent v6, see <option>services.datadog-agent.enable</option>.
'';
default = false;
type = types.bool;
};
api_key = mkOption {
description = "The Datadog API key to associate the agent with your account";
description = ''
The Datadog API key to associate the agent with your account.
Warning: this key is stored in cleartext within the world-readable
Nix store! Consider using the new v6
<option>services.datadog-agent</option> module instead.
'';
example = "ae0aa6a8f08efa988ba0a17578f009ab";
type = types.str;
};
@ -188,48 +197,41 @@ in {
users.groups.datadog.gid = config.ids.gids.datadog;
systemd.services.dd-agent = {
description = "Datadog agent monitor";
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps pkgs.gohai ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dd-agent foreground";
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
systemd.services = let
makeService = attrs: recursiveUpdate {
path = [ pkgs.dd-agent pkgs.python pkgs.sysstat pkgs.procps pkgs.gohai ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
PrivateTmp = true;
};
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
} attrs;
in {
dd-agent = makeService {
description = "Datadog agent monitor";
serviceConfig.ExecStart = "${pkgs.dd-agent}/bin/dd-agent foreground";
};
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
};
systemd.services.dogstatsd = {
description = "Datadog statsd";
path = [ pkgs."dd-agent" pkgs.python pkgs.procps ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dogstatsd start";
User = "datadog";
Group = "datadog";
Type = "forking";
PIDFile = "/tmp/dogstatsd.pid";
Restart = "always";
RestartSec = 2;
dogstatsd = makeService {
description = "Datadog statsd";
environment.TMPDIR = "/run/dogstatsd";
serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dogstatsd start";
Type = "forking";
PIDFile = "/run/dogstatsd/dogstatsd.pid";
RuntimeDirectory = "dogstatsd";
};
};
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
};
systemd.services.dd-jmxfetch = lib.mkIf (cfg.jmxConfig != null) {
description = "Datadog JMX Fetcher";
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dd-jmxfetch";
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
dd-jmxfetch = lib.mkIf (cfg.jmxConfig != null) {
description = "Datadog JMX Fetcher";
path = [ pkgs.dd-agent pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
serviceConfig.ExecStart = "${pkgs.dd-agent}/bin/dd-jmxfetch";
};
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig ];
};
environment.etc = etcfiles;

View File

@ -57,12 +57,6 @@ let
--nodaemon --syslog --prefix=${name} --pidfile /run/${name}/${name}.pid ${name}
'';
mkPidFileDir = name: ''
mkdir -p /run/${name}
chmod 0700 /run/${name}
chown -R graphite:graphite /run/${name}
'';
carbonEnv = {
PYTHONPATH = let
cenv = pkgs.python.buildEnv.override {
@ -136,7 +130,7 @@ in {
finders = mkOption {
description = "List of finder plugins to load.";
default = [];
example = literalExample "[ pkgs.python27Packages.graphite_influxdb ]";
example = literalExample "[ pkgs.python27Packages.influxgraph ]";
type = types.listOf types.package;
};
@ -412,18 +406,16 @@ in {
after = [ "network.target" ];
environment = carbonEnv;
serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite";
Group = "graphite";
PermissionsStartOnly = true;
PIDFile="/run/${name}/${name}.pid";
};
preStart = mkPidFileDir name + ''
mkdir -p ${cfg.dataDir}/whisper
chmod 0700 ${cfg.dataDir}/whisper
chown graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}/whisper
preStart = ''
install -dm0700 -o graphite -g graphite ${cfg.dataDir}
install -dm0700 -o graphite -g graphite ${cfg.dataDir}/whisper
'';
};
})
@ -436,12 +428,12 @@ in {
after = [ "network.target" ];
environment = carbonEnv;
serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite";
Group = "graphite";
PIDFile="/run/${name}/${name}.pid";
};
preStart = mkPidFileDir name;
};
})
@ -452,12 +444,12 @@ in {
after = [ "network.target" ];
environment = carbonEnv;
serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite";
Group = "graphite";
PIDFile="/run/${name}/${name}.pid";
};
preStart = mkPidFileDir name;
};
})
@ -485,7 +477,7 @@ in {
PYTHONPATH = let
penv = pkgs.python.buildEnv.override {
extraLibs = [
pythonPackages.graphite_web
pythonPackages.graphite-web
pythonPackages.pysqlite
];
};
@ -524,16 +516,16 @@ in {
fi
# Only collect static files when graphite_web changes.
if ! [ "${dataDir}/current_graphite_web" -ef "${pythonPackages.graphite_web}" ]; then
if ! [ "${dataDir}/current_graphite_web" -ef "${pythonPackages.graphite-web}" ]; then
mkdir -p ${staticDir}
${pkgs.pythonPackages.django_1_8}/bin/django-admin.py collectstatic --noinput --clear
chown -R graphite:graphite ${staticDir}
ln -sfT "${pythonPackages.graphite_web}" "${dataDir}/current_graphite_web"
ln -sfT "${pythonPackages.graphite-web}" "${dataDir}/current_graphite_web"
fi
'';
};
environment.systemPackages = [ pythonPackages.graphite_web ];
environment.systemPackages = [ pythonPackages.graphite-web ];
}))
(mkIf cfg.api.enable {
@ -607,7 +599,7 @@ in {
GRAPHITE_URL = cfg.pager.graphiteUrl;
};
serviceConfig = {
ExecStart = "${pkgs.pythonPackages.graphite_pager}/bin/graphite-pager --config ${pagerConfig}";
ExecStart = "${pkgs.pythonPackages.graphitepager}/bin/graphite-pager --config ${pagerConfig}";
User = "graphite";
Group = "graphite";
};
@ -615,7 +607,7 @@ in {
services.redis.enable = mkDefault true;
environment.systemPackages = [ pkgs.pythonPackages.graphite_pager ];
environment.systemPackages = [ pkgs.pythonPackages.graphitepager ];
})
(mkIf cfg.beacon.enable {

View File

@ -14,6 +14,10 @@ let
global = {
"plugins directory" = "${wrappedPlugins}/libexec/netdata/plugins.d ${pkgs.netdata}/libexec/netdata/plugins.d";
};
web = {
"web files owner" = "root";
"web files group" = "root";
};
};
mkConfig = generators.toINI {} (recursiveUpdate localConfig cfg.config);
configFile = pkgs.writeText "netdata.conf" (if cfg.configText != null then cfg.configText else mkConfig);

View File

@ -73,7 +73,7 @@ let
description = ''
Specify a filter for iptables to use when
<option>services.prometheus.exporters.${name}.openFirewall</option>
is true. It is used as `ip46tables -I INPUT <option>firewallFilter</option> -j ACCEPT`.
is true. It is used as `ip46tables -I nixos-fw <option>firewallFilter</option> -j nixos-fw-accept`.
'';
};
user = mkOption {
@ -116,9 +116,10 @@ let
mkExporterConf = { name, conf, serviceOpts }:
mkIf conf.enable {
networking.firewall.extraCommands = mkIf conf.openFirewall ''
ip46tables -I INPUT ${conf.firewallFilter} -j ACCEPT
'';
networking.firewall.extraCommands = mkIf conf.openFirewall (concatStrings [
"ip46tables -I nixos-fw ${conf.firewallFilter} "
"-m comment --comment ${name}-exporter -j nixos-fw-accept"
]);
systemd.services."prometheus-${name}-exporter" = mkMerge ([{
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
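For illustration, the new firewall handling could be exercised like this. The exporter name and port are placeholders; the hunk only defines the generic openFirewall/firewallFilter machinery.

```
{
  services.prometheus.exporters.node = {    # "node" is a hypothetical exporter name
    enable = true;
    openFirewall = true;
    # Inserted into the nixos-fw chain and tagged with an identifying comment,
    # as the rule above shows.
    firewallFilter = "-p tcp -m tcp --dport 9100";
  };
}
```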

View File

@ -195,6 +195,17 @@ in
};
helperd = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Enable the BeeGFS helperd.
The helperd is needed for logging purposes on the client.
Disabling <literal>helperd</literal> allows running the client
with <literal>allowUnfree = false</literal>.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";

View File

@ -214,12 +214,10 @@ in
}
];
# Always provide a smb.conf to shut up programs like smbclient and smbspool.
environment.etc = singleton
{ source =
if cfg.enable then configFile
else pkgs.writeText "smb-dummy.conf" "# Samba is disabled.";
target = "samba/smb.conf";
};
environment.etc."samba/smb.conf".source = mkOptionDefault (
if cfg.enable then configFile
else pkgs.writeText "smb-dummy.conf" "# Samba is disabled."
);
}
(mkIf cfg.enable {

View File

@ -161,8 +161,9 @@ in
{ description = "DHCP Client";
wantedBy = [ "multi-user.target" ] ++ optional (!hasDefaultGatewaySet) "network-online.target";
after = [ "network.target" ];
wants = [ "network.target" ];
wants = [ "network.target" "systemd-udev-settle.service" ];
before = [ "network.target" ];
after = [ "systemd-udev-settle.service" ];
# Stopping dhcpcd during a reconfiguration is undesirable
# because it brings down the network interfaces configured by

View File

@ -0,0 +1,99 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.ocserv;
in
{
options.services.ocserv = {
enable = mkEnableOption "ocserv";
config = mkOption {
type = types.lines;
description = ''
Configuration content to start an OCServ server.
For a full configuration reference, please refer to the online documentation
(https://ocserv.gitlab.io/www/manual.html), the openconnect
recipes (https://github.com/openconnect/recipes) or `man ocserv`.
'';
example = ''
# configuration examples from $out/doc without explanatory comments.
# for a full reference please look at the installed man pages.
auth = "plain[passwd=./sample.passwd]"
tcp-port = 443
udp-port = 443
run-as-user = nobody
run-as-group = nogroup
socket-file = /var/run/ocserv-socket
server-cert = certs/server-cert.pem
server-key = certs/server-key.pem
keepalive = 32400
dpd = 90
mobile-dpd = 1800
switch-to-tcp-timeout = 25
try-mtu-discovery = false
cert-user-oid = 0.9.2342.19200300.100.1.1
tls-priorities = "NORMAL:%SERVER_PRECEDENCE:%COMPAT:-VERS-SSL3.0"
auth-timeout = 240
min-reauth-time = 300
max-ban-score = 80
ban-reset-time = 1200
cookie-timeout = 300
deny-roaming = false
rekey-time = 172800
rekey-method = ssl
use-occtl = true
pid-file = /var/run/ocserv.pid
device = vpns
predictable-ips = true
default-domain = example.com
ipv4-network = 192.168.1.0
ipv4-netmask = 255.255.255.0
dns = 192.168.1.2
ping-leases = false
route = 10.10.10.0/255.255.255.0
route = 192.168.0.0/255.255.0.0
no-route = 192.168.5.0/255.255.255.0
cisco-client-compat = true
dtls-legacy = true
[vhost:www.example.com]
auth = "certificate"
ca-cert = certs/ca.pem
server-cert = certs/server-cert-secp521r1.pem
server-key = certs/server-key-secp521r1.pem
ipv4-network = 192.168.2.0
ipv4-netmask = 255.255.255.0
cert-user-oid = 0.9.2342.19200300.100.1.1
'';
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.ocserv ];
environment.etc."ocserv/ocserv.conf".text = cfg.config;
security.pam.services.ocserv = {};
systemd.services.ocserv = {
description = "OpenConnect SSL VPN server";
documentation = [ "man:ocserv(8)" ];
after = [ "dbus.service" "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
PrivateTmp = true;
PIDFile = "/var/run/ocserv.pid";
ExecStart = "${pkgs.ocserv}/bin/ocserv --foreground --pid-file /var/run/ocesrv.pid --config /etc/ocserv/ocserv.conf";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
};
};
}

View File

@ -8,6 +8,7 @@ let
${optionalString cfg.userControlled.enable ''
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=${cfg.userControlled.group}
update_config=1''}
${cfg.extraConfig}
${concatStringsSep "\n" (mapAttrsToList (ssid: config: with config; let
key = if psk != null
then ''"${psk}"''
@ -165,6 +166,17 @@ in {
description = "Members of this group can control wpa_supplicant.";
};
};
extraConfig = mkOption {
type = types.str;
default = "";
example = ''
p2p_disabled=1
'';
description = ''
Extra lines appended to the configuration file.
See wpa_supplicant.conf(5) for available options.
'';
};
};
};

View File

@ -17,6 +17,15 @@ in
'';
};
options.services.zerotierone.port = mkOption {
default = 9993;
example = 9993;
type = types.int;
description = ''
Network port used by ZeroTier.
'';
};
options.services.zerotierone.package = mkOption {
default = pkgs.zerotierone;
defaultText = "pkgs.zerotierone";
@ -40,7 +49,7 @@ in
touch "/var/lib/zerotier-one/networks.d/${netId}.conf"
'') cfg.joinNetworks);
serviceConfig = {
ExecStart = "${cfg.package}/bin/zerotier-one";
ExecStart = "${cfg.package}/bin/zerotier-one -p${toString cfg.port}";
Restart = "always";
KillMode = "process";
};
@ -49,8 +58,8 @@ in
# ZeroTier does not issue DHCP leases, but some strangers might...
networking.dhcpcd.denyInterfaces = [ "zt*" ];
# ZeroTier receives UDP transmissions on port 9993 by default
networking.firewall.allowedUDPPorts = [ 9993 ];
# ZeroTier receives UDP transmissions
networking.firewall.allowedUDPPorts = [ cfg.port ];
environment.systemPackages = [ cfg.package ];
};
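
A sketch of the new port option in use; the network ID is a placeholder. As shown above, the module opens the chosen UDP port in the firewall:

```
{ ... }:
{
  services.zerotierone = {
    enable = true;
    port = 9994;                            # instead of the default 9993
    joinNetworks = [ "0000000000000000" ];  # placeholder network ID
  };
}
```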

View File

@ -0,0 +1,194 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.certmgr;
specs = mapAttrsToList (n: v: rec {
name = n + ".json";
path = if isAttrs v then pkgs.writeText name (builtins.toJSON v) else v;
}) cfg.specs;
allSpecs = pkgs.linkFarm "certmgr.d" specs;
certmgrYaml = pkgs.writeText "certmgr.yaml" (builtins.toJSON {
dir = allSpecs;
default_remote = cfg.defaultRemote;
svcmgr = cfg.svcManager;
before = cfg.validMin;
interval = cfg.renewInterval;
inherit (cfg) metricsPort metricsAddress;
});
specPaths = map dirOf (concatMap (spec:
if isAttrs spec then
collect isString (filterAttrsRecursive (n: v: isAttrs v || n == "path") spec)
else
[ spec ]
) (attrValues cfg.specs));
preStart = ''
${concatStringsSep " \\\n" (["mkdir -p"] ++ map escapeShellArg specPaths)}
${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml} check
'';
in
{
options.services.certmgr = {
enable = mkEnableOption "certmgr";
defaultRemote = mkOption {
type = types.str;
default = "127.0.0.1:8888";
description = "The default CA host:port to use.";
};
validMin = mkOption {
default = "72h";
type = types.str;
description = "The interval before a certificate expires to start attempting to renew it.";
};
renewInterval = mkOption {
default = "30m";
type = types.str;
description = "How often to check certificate expirations and how often to update the cert_next_expires metric.";
};
metricsAddress = mkOption {
default = "127.0.0.1";
type = types.str;
description = "The address for the Prometheus HTTP endpoint.";
};
metricsPort = mkOption {
default = 9488;
type = types.ints.u16;
description = "The port for the Prometheus HTTP endpoint.";
};
specs = mkOption {
default = {};
example = literalExample ''
{
exampleCert =
let
domain = "example.com";
secret = name: "/var/lib/secrets/''${name}.pem";
in {
service = "nginx";
action = "reload";
authority = {
file.path = secret "ca";
};
certificate = {
path = secret domain;
};
private_key = {
owner = "root";
group = "root";
mode = "0600";
path = secret "''${domain}-key";
};
request = {
CN = domain;
hosts = [ "mail.''${domain}" "www.''${domain}" ];
key = {
algo = "rsa";
size = 2048;
};
names = {
O = "Example Organization";
C = "USA";
};
};
};
otherCert = "/var/certmgr/specs/other-cert.json";
}
'';
type = with types; attrsOf (either (submodule {
options = {
service = mkOption {
type = nullOr str;
default = null;
description = "The service on which to perform &lt;action&gt; after fetching.";
};
action = mkOption {
type = addCheck str (x: cfg.svcManager == "command" || elem x ["restart" "reload" "nop"]);
default = "nop";
description = "The action to take after fetching.";
};
# These ought all to be specified according to certmgr spec def.
authority = mkOption {
type = attrs;
description = "certmgr spec authority object.";
};
certificate = mkOption {
type = nullOr attrs;
description = "certmgr spec certificate object.";
};
private_key = mkOption {
type = nullOr attrs;
description = "certmgr spec private_key object.";
};
request = mkOption {
type = nullOr attrs;
description = "certmgr spec request object.";
};
};
}) path);
description = ''
Certificate specs as described by:
<link xlink:href="https://github.com/cloudflare/certmgr#certificate-specs" />
These will be added to the Nix store, so they will be world readable.
'';
};
svcManager = mkOption {
default = "systemd";
type = types.enum [ "circus" "command" "dummy" "openrc" "systemd" "sysv" ];
description = ''
This specifies the service manager to use for restarting or reloading services.
See: <link xlink:href="https://github.com/cloudflare/certmgr#certmgryaml" />.
For how to use the "command" service manager in particular,
see: <link xlink:href="https://github.com/cloudflare/certmgr#command-svcmgr-and-how-to-use-it" />.
'';
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.specs != {};
message = "Certmgr specs cannot be empty.";
}
{
assertion = !any (hasAttrByPath [ "authority" "auth_key" ]) (attrValues cfg.specs);
message = ''
Inline services.certmgr.specs are added to the Nix store, rendering them world readable.
Specify paths as specs if you want to include an auth_key, or use the auth_key_file option.
'';
}
];
systemd.services.certmgr = {
description = "certmgr";
path = mkIf (cfg.svcManager == "command") [ pkgs.bash ];
after = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
inherit preStart;
serviceConfig = {
Restart = "always";
RestartSec = "10s";
ExecStart = "${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml}";
};
};
};
}
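
A minimal sketch complementing the literalExample above: the spec is supplied as a path so that any auth_key stays out of the world-readable Nix store, as the assertion recommends. The path itself is a placeholder:

```
{ ... }:
{
  services.certmgr = {
    enable = true;
    defaultRemote = "127.0.0.1:8888";
    svcManager = "systemd";
    # Path-form spec, kept outside the store.
    specs.exampleCert = "/var/certmgr/specs/example-cert.json";
  };
}
```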

View File

@ -0,0 +1,209 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.cfssl;
in {
options.services.cfssl = {
enable = mkEnableOption "the CFSSL CA api-server";
dataDir = mkOption {
default = "/var/lib/cfssl";
type = types.path;
description = "Cfssl work directory.";
};
address = mkOption {
default = "127.0.0.1";
type = types.str;
description = "Address to bind.";
};
port = mkOption {
default = 8888;
type = types.ints.u16;
description = "Port to bind.";
};
ca = mkOption {
defaultText = "\${cfg.dataDir}/ca.pem";
type = types.str;
description = "CA used to sign the new certificate -- accepts '[file:]fname' or 'env:varname'.";
};
caKey = mkOption {
defaultText = "file:\${cfg.dataDir}/ca-key.pem";
type = types.str;
description = "CA private key -- accepts '[file:]fname' or 'env:varname'.";
};
caBundle = mkOption {
default = null;
type = types.nullOr types.path;
description = "Path to root certificate store.";
};
intBundle = mkOption {
default = null;
type = types.nullOr types.path;
description = "Path to intermediate certificate store.";
};
intDir = mkOption {
default = null;
type = types.nullOr types.path;
description = "Intermediates directory.";
};
metadata = mkOption {
default = null;
type = types.nullOr types.path;
description = ''
Metadata file for root certificate presence.
The content of the file is a json dictionary (k,v): each key k is
a SHA-1 digest of a root certificate while value v is a list of key
store filenames.
'';
};
remote = mkOption {
default = null;
type = types.nullOr types.str;
description = "Remote CFSSL server.";
};
configFile = mkOption {
default = null;
type = types.nullOr types.str;
description = "Path to configuration file. Do not put this in nix-store as it might contain secrets.";
};
responder = mkOption {
default = null;
type = types.nullOr types.path;
description = "Certificate for OCSP responder.";
};
responderKey = mkOption {
default = null;
type = types.nullOr types.str;
description = "Private key for OCSP responder certificate. Do not put this in nix-store.";
};
tlsKey = mkOption {
default = null;
type = types.nullOr types.str;
description = "Other endpoint's CA private key. Do not put this in nix-store.";
};
tlsCert = mkOption {
default = null;
type = types.nullOr types.path;
description = "Other endpoint's CA to set up TLS protocol.";
};
mutualTlsCa = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - require clients be signed by this CA.";
};
mutualTlsCn = mkOption {
default = null;
type = types.nullOr types.str;
description = "Mutual TLS - regex for whitelist of allowed client CNs.";
};
tlsRemoteCa = mkOption {
default = null;
type = types.nullOr types.path;
description = "CAs to trust for remote TLS requests.";
};
mutualTlsClientCert = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - client certificate to call remote instance requiring client certs.";
};
mutualTlsClientKey = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - client key to call remote instance requiring client certs. Do not put this in nix-store.";
};
dbConfig = mkOption {
default = null;
type = types.nullOr types.path;
description = "Certificate db configuration file. Path must be writeable.";
};
logLevel = mkOption {
default = 1;
type = types.enum [ 0 1 2 3 4 5 ];
description = "Log level (0 = DEBUG, 5 = FATAL).";
};
};
config = mkIf cfg.enable {
users.extraGroups.cfssl = {
gid = config.ids.gids.cfssl;
};
users.extraUsers.cfssl = {
description = "cfssl user";
createHome = true;
home = cfg.dataDir;
group = "cfssl";
uid = config.ids.uids.cfssl;
};
systemd.services.cfssl = {
description = "CFSSL CA API server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
WorkingDirectory = cfg.dataDir;
StateDirectory = cfg.dataDir;
StateDirectoryMode = 700;
Restart = "always";
User = "cfssl";
ExecStart = with cfg; let
opt = n: v: optionalString (v != null) ''-${n}="${v}"'';
in
lib.concatStringsSep " \\\n" [
"${pkgs.cfssl}/bin/cfssl serve"
(opt "address" address)
(opt "port" (toString port))
(opt "ca" ca)
(opt "ca-key" caKey)
(opt "ca-bundle" caBundle)
(opt "int-bundle" intBundle)
(opt "int-dir" intDir)
(opt "metadata" metadata)
(opt "remote" remote)
(opt "config" configFile)
(opt "responder" responder)
(opt "responder-key" responderKey)
(opt "tls-key" tlsKey)
(opt "tls-cert" tlsCert)
(opt "mutual-tls-ca" mutualTlsCa)
(opt "mutual-tls-cn" mutualTlsCn)
(opt "mutual-tls-client-key" mutualTlsClientKey)
(opt "mutual-tls-client-cert" mutualTlsClientCert)
(opt "tls-remote-ca" tlsRemoteCa)
(opt "db-config" dbConfig)
(opt "loglevel" (toString logLevel))
];
};
};
services.cfssl = {
ca = mkDefault "${cfg.dataDir}/ca.pem";
caKey = mkDefault "${cfg.dataDir}/ca-key.pem";
};
};
}
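
For orientation, a hypothetical minimal configuration; it relies on the mkDefault'ed ca/caKey paths under dataDir and keeps the API server on the loopback interface:

```
{ ... }:
{
  services.cfssl = {
    enable = true;
    address = "127.0.0.1";
    port = 8888;
    logLevel = 2;   # 0 = DEBUG, 5 = FATAL
  };
}
```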

View File

@ -1,6 +1,7 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.vault;
@ -24,15 +25,22 @@ let
${cfg.telemetryConfig}
}
''}
${cfg.extraConfig}
'';
in
{
options = {
services.vault = {
enable = mkEnableOption "Vault daemon";
package = mkOption {
type = types.package;
default = pkgs.vault;
defaultText = "pkgs.vault";
description = "This option specifies the vault package to use.";
};
address = mkOption {
type = types.str;
default = "127.0.0.1:8200";
@ -58,7 +66,7 @@ in
default = ''
tls_min_version = "tls12"
'';
description = "extra configuration";
description = "Extra text appended to the listener section.";
};
storageBackend = mkOption {
@ -84,6 +92,12 @@ in
default = "";
description = "Telemetry configuration";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Extra text appended to <filename>vault.hcl</filename>.";
};
};
};
@ -122,7 +136,7 @@ in
User = "vault";
Group = "vault";
PermissionsStartOnly = true;
ExecStart = "${pkgs.vault}/bin/vault server -config ${configFile}";
ExecStart = "${cfg.package}/bin/vault server -config ${configFile}";
PrivateDevices = true;
PrivateTmp = true;
ProtectSystem = "full";
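
A sketch of the two new options, package and extraConfig; the extra HCL shown is only illustrative and simply lands at the end of the generated vault.hcl:

```
{ pkgs, ... }:
{
  services.vault = {
    enable = true;
    package = pkgs.vault;           # the new package option
    address = "127.0.0.1:8200";
    extraConfig = ''
      ui = true
    '';
  };
}
```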

View File

@ -104,8 +104,9 @@ in
systemd.services.cloud-init =
{ description = "Initial cloud-init job (metadata service crawler)";
wantedBy = [ "multi-user.target" ];
wants = [ "local-fs.target" "cloud-init-local.service" "sshd.service" "sshd-keygen.service" ];
after = [ "local-fs.target" "network.target" "cloud-init-local.service" ];
wants = [ "local-fs.target" "network-online.target" "cloud-init-local.service"
"sshd.service" "sshd-keygen.service" ];
after = [ "local-fs.target" "network-online.target" "cloud-init-local.service" ];
before = [ "sshd.service" "sshd-keygen.service" ];
requires = [ "network.target "];
path = path;
@ -121,8 +122,8 @@ in
systemd.services.cloud-config =
{ description = "Apply the settings specified in cloud-config";
wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ];
after = [ "network.target" "syslog.target" "cloud-config.target" ];
wants = [ "network-online.target" ];
after = [ "network-online.target" "syslog.target" "cloud-config.target" ];
path = path;
serviceConfig =
@ -137,8 +138,8 @@ in
systemd.services.cloud-final =
{ description = "Execute cloud user/final scripts";
wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ];
after = [ "network.target" "syslog.target" "cloud-config.service" "rc-local.service" ];
wants = [ "network-online.target" ];
after = [ "network-online.target" "syslog.target" "cloud-config.service" "rc-local.service" ];
requires = [ "cloud-config.target" ];
path = path;
serviceConfig =

View File

@ -22,14 +22,8 @@ in {
config = mkIf cfg.enable {
services.geoclue2.enable = true;
security.polkit.extraConfig = ''
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.timedate1.set-timezone"
&& subject.user == "localtimed") {
return polkit.Result.YES;
}
});
'';
# so polkit will pick up the rules
environment.systemPackages = [ pkgs.localtime ];
users.users = [{
name = "localtimed";

View File

@ -118,14 +118,14 @@ in
systemd.services.youtrack = {
environment.HOME = cfg.statePath;
environment.YOUTRACK_JVM_OPTS = "-Xmx${cfg.maxMemory} -XX:MaxMetaspaceSize=${cfg.maxMetaspaceSize} ${cfg.jvmOpts} ${extraAttr}";
environment.YOUTRACK_JVM_OPTS = "${extraAttr}";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
User = "youtrack";
Group = "youtrack";
ExecStart = ''${cfg.package}/bin/youtrack ${cfg.address}:${toString cfg.port}'';
ExecStart = ''${cfg.package}/bin/youtrack --J-Xmx${cfg.maxMemory} --J-XX:MaxMetaspaceSize=${cfg.maxMetaspaceSize} ${cfg.jvmOpts} ${cfg.address}:${toString cfg.port}'';
};
};

View File

@ -1,6 +1,8 @@
{ config, lib, pkgs, ... }:
let cfg = config.services.hydron;
let
cfg = config.services.hydron;
postgres = config.services.postgresql;
in with lib; {
options.services.hydron = {
enable = mkEnableOption "hydron";
@ -14,10 +16,10 @@ in with lib; {
interval = mkOption {
type = types.str;
default = "hourly";
default = "weekly";
example = "06:00";
description = ''
How often we run hydron import and possibly fetch tags. Runs by default every hour.
How often we run hydron import and possibly fetch tags. Runs by default every week.
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
@ -25,6 +27,38 @@ in with lib; {
'';
};
password = mkOption {
type = types.str;
default = "hydron";
example = "dumbpass";
description = "Password for the hydron database.";
};
passwordFile = mkOption {
type = types.path;
default = "/run/keys/hydron-password-file";
example = "/home/okina/hydron/keys/pass";
description = "Password file for the hydron database.";
};
postgresArgs = mkOption {
type = types.str;
description = "Postgresql connection arguments.";
example = ''
{
"driver": "postgres",
"connection": "user=hydron password=dumbpass dbname=hydron sslmode=disable"
}
'';
};
postgresArgsFile = mkOption {
type = types.path;
default = "/run/keys/hydron-postgres-args";
example = "/home/okina/hydron/keys/postgres";
description = "Postgresql connection arguments file.";
};
listenAddress = mkOption {
type = types.nullOr types.str;
default = null;
@ -47,16 +81,36 @@ in with lib; {
};
config = mkIf cfg.enable {
security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable;
services.hydron.passwordFile = mkDefault (pkgs.writeText "hydron-password-file" cfg.password);
services.hydron.postgresArgsFile = mkDefault (pkgs.writeText "hydron-postgres-args" cfg.postgresArgs);
services.hydron.postgresArgs = mkDefault ''
{
"driver": "postgres",
"connection": "user=hydron password=${cfg.password} dbname=hydron sslmode=disable"
}
'';
systemd.services.hydron = {
description = "hydron";
after = [ "network.target" ];
after = [ "network.target" "postgresql.service" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
# Ensure folder exists and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}/images
# Ensure folder exists or create it and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}/{.hydron,images}
ln -sf ${escapeShellArg cfg.postgresArgsFile} ${escapeShellArg cfg.dataDir}/.hydron/db_conf.json
chmod 750 ${escapeShellArg cfg.dataDir}
chown -R hydron:hydron ${escapeShellArg cfg.dataDir}
# Ensure the database is correct or create it
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \
-SDR hydron || true
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \
-T template0 -E UTF8 -O hydron hydron || true
${pkgs.sudo}/bin/sudo -u hydron ${postgres.package}/bin/psql \
-c "ALTER ROLE hydron WITH PASSWORD '$(cat ${escapeShellArg cfg.passwordFile})';" || true
'';
serviceConfig = {
@ -83,9 +137,13 @@ in with lib; {
systemd.timers.hydron-fetch = {
description = "Automatically import paths into hydron and possibly fetch tags";
after = [ "network.target" ];
after = [ "network.target" "hydron.service" ];
wantedBy = [ "timers.target" ];
timerConfig.OnCalendar = cfg.interval;
timerConfig = {
Persistent = true;
OnCalendar = cfg.interval;
};
};
users = {
@ -101,5 +159,9 @@ in with lib; {
};
};
imports = [
(mkRenamedOptionModule [ "services" "hydron" "baseDir" ] [ "services" "hydron" "dataDir" ])
];
meta.maintainers = with maintainers; [ chiiruno ];
}
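
A hypothetical use of the reworked module; the interval and listenAddress values are placeholders, and the database password is expected in the (default) passwordFile rather than in the world-readable password option:

```
{ ... }:
{
  services.hydron = {
    enable = true;
    interval = "daily";
    listenAddress = "127.0.0.1:8010";
    # passwordFile defaults to /run/keys/hydron-password-file;
    # keep secrets out of the Nix store.
  };
}
```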

View File

@ -1,65 +1,71 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.meguca;
postgres = config.services.postgresql;
in
{
in with lib; {
options.services.meguca = {
enable = mkEnableOption "meguca";
baseDir = mkOption {
dataDir = mkOption {
type = types.path;
default = "/run/meguca";
default = "/var/lib/meguca";
example = "/home/okina/meguca";
description = "Location where meguca stores it's database and links.";
};
password = mkOption {
type = types.str;
default = "meguca";
example = "dumbpass";
description = "Password for the meguca database.";
};
passwordFile = mkOption {
type = types.path;
default = "/run/keys/meguca-password-file";
example = "/home/okina/meguca/keys/pass";
description = "Password file for the meguca database.";
};
reverseProxy = mkOption {
type = types.nullOr types.str;
default = null;
example = "192.168.1.5";
description = "Reverse proxy IP.";
};
sslCertificate = mkOption {
type = types.nullOr types.str;
default = null;
example = "/home/okina/meguca/ssl.cert";
description = "Path to the SSL certificate.";
};
listenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "127.0.0.1:8000";
description = "Listen on a specific IP address and port.";
};
cacheSize = mkOption {
type = types.nullOr types.int;
default = null;
example = 256;
description = "Cache size in MB.";
};
postgresArgs = mkOption {
type = types.str;
default = "user=meguca password=" + cfg.password + " dbname=meguca sslmode=disable";
example = "user=meguca password=dumbpass dbname=meguca sslmode=disable";
description = "Postgresql connection arguments.";
};
postgresArgsFile = mkOption {
type = types.path;
default = "/run/keys/meguca-postgres-args";
example = "/home/okina/meguca/keys/postgres";
description = "Postgresql connection arguments file.";
};
@ -83,18 +89,11 @@ in
};
config = mkIf cfg.enable {
security.sudo.enable = cfg.enable == true;
services.postgresql.enable = cfg.enable == true;
services.meguca.passwordFile = mkDefault (toString (pkgs.writeTextFile {
name = "meguca-password-file";
text = cfg.password;
}));
services.meguca.postgresArgsFile = mkDefault (toString (pkgs.writeTextFile {
name = "meguca-postgres-args";
text = cfg.postgresArgs;
}));
security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable;
services.meguca.passwordFile = mkDefault (pkgs.writeText "meguca-password-file" cfg.password);
services.meguca.postgresArgsFile = mkDefault (pkgs.writeText "meguca-postgres-args" cfg.postgresArgs);
services.meguca.postgresArgs = mkDefault "user=meguca password=${cfg.password} dbname=meguca sslmode=disable";
systemd.services.meguca = {
description = "meguca";
@ -102,10 +101,11 @@ in
wantedBy = [ "multi-user.target" ];
preStart = ''
# Ensure folder exists and links are correct or create them
mkdir -p ${cfg.baseDir}
chmod 750 ${cfg.baseDir}
ln -sf ${pkgs.meguca}/share/meguca/www ${cfg.baseDir}
# Ensure folder exists or create it and links and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}
ln -sf ${pkgs.meguca}/share/meguca/www ${escapeShellArg cfg.dataDir}
chmod 750 ${escapeShellArg cfg.dataDir}
chown -R meguca:meguca ${escapeShellArg cfg.dataDir}
# Ensure the database is correct or create it
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \
@ -113,47 +113,46 @@ in
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \
-T template0 -E UTF8 -O meguca meguca || true
${pkgs.sudo}/bin/sudo -u meguca ${postgres.package}/bin/psql \
-c "ALTER ROLE meguca WITH PASSWORD '$(cat ${cfg.passwordFile})';" || true
-c "ALTER ROLE meguca WITH PASSWORD '$(cat ${escapeShellArg cfg.passwordFile})';" || true
'';
script = ''
cd ${cfg.baseDir}
cd ${escapeShellArg cfg.dataDir}
${pkgs.meguca}/bin/meguca -d "$(cat ${cfg.postgresArgsFile})"\
${optionalString (cfg.reverseProxy != null) " -R ${cfg.reverseProxy}"}\
${optionalString (cfg.sslCertificate != null) " -S ${cfg.sslCertificate}"}\
${optionalString (cfg.listenAddress != null) " -a ${cfg.listenAddress}"}\
${optionalString (cfg.cacheSize != null) " -c ${toString cfg.cacheSize}"}\
${optionalString (cfg.compressTraffic) " -g"}\
${optionalString (cfg.assumeReverseProxy) " -r"}\
${optionalString (cfg.httpsOnly) " -s"} start
'';
${pkgs.meguca}/bin/meguca -d "$(cat ${escapeShellArg cfg.postgresArgsFile})"''
+ optionalString (cfg.reverseProxy != null) " -R ${cfg.reverseProxy}"
+ optionalString (cfg.sslCertificate != null) " -S ${cfg.sslCertificate}"
+ optionalString (cfg.listenAddress != null) " -a ${cfg.listenAddress}"
+ optionalString (cfg.cacheSize != null) " -c ${toString cfg.cacheSize}"
+ optionalString (cfg.compressTraffic) " -g"
+ optionalString (cfg.assumeReverseProxy) " -r"
+ optionalString (cfg.httpsOnly) " -s" + " start";
serviceConfig = {
PermissionsStartOnly = true;
Type = "forking";
User = "meguca";
Group = "meguca";
RuntimeDirectory = "meguca";
ExecStop = "${pkgs.meguca}/bin/meguca stop";
};
};
users = {
groups.meguca.gid = config.ids.gids.meguca;
users.meguca = {
description = "meguca server service user";
home = cfg.baseDir;
home = cfg.dataDir;
createHome = true;
group = "meguca";
uid = config.ids.uids.meguca;
};
groups.meguca = {
gid = config.ids.gids.meguca;
members = [ "meguca" ];
};
};
};
imports = [
(mkRenamedOptionModule [ "services" "meguca" "baseDir" ] [ "services" "meguca" "dataDir" ])
];
meta.maintainers = with maintainers; [ chiiruno ];
}
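
A sketch using the renamed dataDir option; all values are illustrative:

```
{ ... }:
{
  services.meguca = {
    enable = true;
    dataDir = "/var/lib/meguca";
    listenAddress = "127.0.0.1:8000";
    cacheSize = 256;
  };
}
```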

View File

@ -108,7 +108,7 @@ in
};
webapps = mkOption {
type = types.listOf types.package;
type = types.listOf types.path;
default = [ tomcat.webapps ];
defaultText = "[ pkgs.tomcat85.webapps ]";
description = "List containing WAR files or directories with WAR files which are web applications to be deployed on Tomcat";
@ -118,8 +118,15 @@ in
type = types.listOf (types.submodule {
options = {
name = mkOption {
type = types.listOf types.str;
type = types.str;
description = "name of the virtualhost";
};
webapps = mkOption {
type = types.listOf types.path;
description = ''
List containing web application WAR files and/or directories containing
web applications and configuration files for the virtual host.
'';
default = [];
};
};
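
A hypothetical virtual host matching the tightened option types (name is now a single string, webapps a list of paths); the WAR path is a placeholder:

```
{ ... }:
{
  services.tomcat = {
    enable = true;
    virtualHosts = [
      { name = "example.org";
        webapps = [ "/var/lib/wars/example.war" ];
      }
    ];
  };
}
```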

View File

@ -96,13 +96,13 @@ in
else if any (w: w.name == defaultDM) cfg.session.list then
defaultDM
else
throw ''
Default desktop manager (${defaultDM}) not found.
Probably you want to change
services.xserver.desktopManager.default = "${defaultDM}";
to one of
builtins.trace ''
Default desktop manager (${defaultDM}) not found at evaluation time.
These are the known valid session names:
${concatMapStringsSep "\n " (w: "services.xserver.desktopManager.default = \"${w.name}\";") cfg.session.list}
'';
It's also possible the default can be found in one of these packages:
${concatMapStringsSep "\n " (p: p.name) config.services.xserver.displayManager.extraSessionFilePackages}
'' defaultDM;
};
};
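
For illustration, a configuration that sets a default the trace above would accept; xfce here is only an example of a valid session name:

```
{ ... }:
{
  services.xserver.enable = true;
  services.xserver.desktopManager.xfce.enable = true;
  services.xserver.desktopManager.default = "xfce";
}
```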

View File

@ -57,8 +57,12 @@ in {
sessionPath = mkOption {
default = [];
example = literalExample "[ pkgs.gnome3.gpaste ]";
description = "Additional list of packages to be added to the session search path.
Useful for gnome shell extensions or gsettings-conditionated autostart.";
description = ''
Additional list of packages to be added to the session search path.
Useful for GNOME Shell extensions or GSettings-conditional autostart.
Note that this should be a last resort; patching the package is preferred (see GPaste).
'';
apply = list: list ++ [ pkgs.gnome3.gnome-shell pkgs.gnome3.gnome-shell-extensions ];
};
@ -93,6 +97,8 @@ in {
services.udisks2.enable = true;
services.accounts-daemon.enable = true;
services.geoclue2.enable = mkDefault true;
# GNOME should have its own geoclue agent
services.geoclue2.enableDemoAgent = false;
services.dleyna-renderer.enable = mkDefault true;
services.dleyna-server.enable = mkDefault true;
services.gnome3.at-spi2-core.enable = true;
@ -126,18 +132,10 @@ in {
fonts.fonts = [ pkgs.dejavu_fonts pkgs.cantarell-fonts ];
services.xserver.desktopManager.session = singleton
{ name = "gnome3";
bgSupport = true;
start = ''
# Set GTK_DATA_PREFIX so that GTK+ can find the themes
export GTK_DATA_PREFIX=${config.system.path}
# find theme engines
export GTK_PATH=${config.system.path}/lib/gtk-3.0:${config.system.path}/lib/gtk-2.0
export XDG_MENU_PREFIX=gnome-
services.xserver.displayManager.extraSessionFilePackages = [ pkgs.gnome3.gnome-session ];
services.xserver.displayManager.sessionCommands = ''
if test "$XDG_CURRENT_DESKTOP" = "GNOME"; then
${concatMapStrings (p: ''
if [ -d "${p}/share/gsettings-schemas/${p.name}" ]; then
export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${p}/share/gsettings-schemas/${p.name}
@ -148,34 +146,28 @@ in {
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH''${LD_LIBRARY_PATH:+:}${p}/lib
fi
'') cfg.sessionPath}
fi
'';
# Override default mimeapps
export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${mimeAppsList}/share
environment.variables.GNOME_SESSION_DEBUG = optionalString cfg.debug "1";
# Override gsettings-desktop-schema
export NIX_GSETTINGS_OVERRIDES_DIR=${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas
# Override default mimeapps
environment.variables.XDG_DATA_DIRS = [ "${mimeAppsList}/share" ];
# Let nautilus find extensions
export NAUTILUS_EXTENSION_DIR=${config.system.path}/lib/nautilus/extensions-3.0/
# Override GSettings schemas
environment.variables.NIX_GSETTINGS_OVERRIDES_DIR = "${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas";
# Find the mouse
export XCURSOR_PATH=~/.icons:${config.system.path}/share/icons
# Update user dirs as described in http://freedesktop.org/wiki/Software/xdg-user-dirs/
${pkgs.xdg-user-dirs}/bin/xdg-user-dirs-update
${pkgs.gnome3.gnome-session}/bin/gnome-session ${optionalString cfg.debug "--debug"} &
waitPID=$!
'';
};
services.xserver.updateDbusEnvironment = true;
# Let nautilus find extensions
# TODO: Create nautilus-with-extensions package
environment.variables.NAUTILUS_EXTENSION_DIR = "${config.system.path}/lib/nautilus/extensions-3.0";
environment.variables.GIO_EXTRA_MODULES = [ "${lib.getLib pkgs.gnome3.dconf}/lib/gio/modules"
"${pkgs.gnome3.glib-networking.out}/lib/gio/modules"
"${pkgs.gnome3.gvfs}/lib/gio/modules" ];
environment.systemPackages = pkgs.gnome3.corePackages ++ cfg.sessionPath
++ (removePackagesByName pkgs.gnome3.optionalPackages config.environment.gnome3.excludePackages);
++ (removePackagesByName pkgs.gnome3.optionalPackages config.environment.gnome3.excludePackages) ++ [
pkgs.xdg-user-dirs # Update user dirs as described in http://freedesktop.org/wiki/Software/xdg-user-dirs/
];
# Use the correct gnome3 packageSet
networking.networkmanager.basePackages =

View File

@ -224,7 +224,7 @@ in
# Update the start menu for each user that has `isNormalUser` set.
system.activationScripts.plasmaSetup = stringAfter [ "users" "groups" ]
(concatStringsSep "\n"
(mapAttrsToList (name: value: "${pkgs.su}/bin/su ${name} -c kbuildsycoca5")
(mapAttrsToList (name: value: "${pkgs.su}/bin/su ${name} -c ${pkgs.libsForQt5.kservice}/bin/kbuildsycoca5")
(filterAttrs (n: v: v.isNormalUser) config.users.users)));
})
];

View File

@ -27,55 +27,26 @@ let
Xft.hintstyle: hintslight
'';
# file provided by services.xserver.displayManager.session.script
xsession = wm: dm: pkgs.writeScript "xsession"
# file provided by services.xserver.displayManager.session.wrapper
xsessionWrapper = pkgs.writeScript "xsession-wrapper"
''
#! ${pkgs.bash}/bin/bash
# Expected parameters:
# $1 = <desktop-manager>+<window-manager>
# Actual parameters (FIXME):
# SDDM is calling this script like the following:
# $1 = /nix/store/xxx-xsession (= $0)
# $2 = <desktop-manager>+<window-manager>
# SLiM is using the following parameter:
# $1 = /nix/store/xxx-xsession <desktop-manager>+<window-manager>
# LightDM keeps the double quotes:
# $1 = /nix/store/xxx-xsession "<desktop-manager>+<window-manager>"
# The fake/auto display manager doesn't use any parameters and GDM is
# broken.
# If you want to "debug" this script don't print the parameters to stdout
# or stderr because this script will be executed multiple times and the
# output won't be visible in the log when the script is executed for the
# first time (e.g. append them to a file instead)!
# All of the above cases are handled by the following hack (FIXME).
# Since this line is *very important* for *all display managers* it is
# very important to test changes to the following line with all display
# managers:
if [ "''${1:0:1}" = "/" ]; then eval exec "$1" "$2" ; fi
# Now it should be safe to assume that the script was called with the
# expected parameters.
# Shared environment setup for graphical sessions.
. /etc/profile
cd "$HOME"
# The first argument of this script is the session type.
sessionType="$1"
if [ "$sessionType" = default ]; then sessionType=""; fi
${optionalString cfg.startDbusSession ''
if test -z "$DBUS_SESSION_BUS_ADDRESS"; then
exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$sessionType"
exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$@"
fi
''}
${optionalString cfg.displayManager.job.logToJournal ''
if [ -z "$_DID_SYSTEMD_CAT" ]; then
export _DID_SYSTEMD_CAT=1
exec ${config.systemd.package}/bin/systemd-cat -t xsession "$0" "$sessionType"
exec ${config.systemd.package}/bin/systemd-cat -t xsession "$0" "$@"
fi
''}
@ -85,12 +56,10 @@ let
# Start PulseAudio if enabled.
${optionalString (config.hardware.pulseaudio.enable) ''
${optionalString (!config.hardware.pulseaudio.systemWide)
"${config.hardware.pulseaudio.package.out}/bin/pulseaudio --start"
}
# Publish access credentials in the root window.
${config.hardware.pulseaudio.package.out}/bin/pactl load-module module-x11-publish "display=$DISPLAY"
if ${config.hardware.pulseaudio.package.out}/bin/pulseaudio --dump-modules | grep module-x11-publish &> /dev/null; then
${config.hardware.pulseaudio.package.out}/bin/pactl load-module module-x11-publish "display=$DISPLAY"
fi
''}
# Tell systemd about our $DISPLAY and $XAUTHORITY.
@ -101,6 +70,7 @@ let
${config.systemd.package}/bin/systemctl --user import-environment DISPLAY XAUTHORITY DBUS_SESSION_BUS_ADDRESS
# Load X defaults.
# FIXME: Check XDG_SESSION_TYPE against x11
${xorg.xrdb}/bin/xrdb -merge ${xresourcesXft}
if test -e ~/.Xresources; then
${xorg.xrdb}/bin/xrdb -merge ~/.Xresources
@ -132,12 +102,33 @@ let
# Allow the user to setup a custom session type.
if test -x ~/.xsession; then
exec ~/.xsession
else
if test "$sessionType" = "custom"; then
sessionType="" # fall-thru if there is no ~/.xsession
fi
fi
if test "$1"; then
# Run the supplied session command. Remove any double quotes with eval.
eval exec "$@"
else
# Fall back to the default window/desktopManager
exec ${cfg.displayManager.session.script}
fi
'';
# file provided by services.xserver.displayManager.session.script
xsession = wm: dm: pkgs.writeScript "xsession"
''
#! ${pkgs.bash}/bin/bash
# Legacy session script used to construct .desktop files from
# `services.xserver.displayManager.session` entries. Called from
# `sessionWrapper`.
# Expected parameters:
# $1 = <desktop-manager>+<window-manager>
# The first argument of this script is the session type.
sessionType="$1"
if [ "$sessionType" = default ]; then sessionType=""; fi
# The session type is "<desktop-manager>+<window-manager>", so
# extract those (see:
# http://wiki.bash-hackers.org/syntax/pe#substring_removal).
@ -186,19 +177,22 @@ let
allowSubstitutes = false;
}
''
mkdir -p "$out"
mkdir -p "$out/share/xsessions"
${concatMapStrings (n: ''
cat - > "$out/${n}.desktop" << EODESKTOP
cat - > "$out/share/xsessions/${n}.desktop" << EODESKTOP
[Desktop Entry]
Version=1.0
Type=XSession
TryExec=${cfg.displayManager.session.script}
Exec=${cfg.displayManager.session.script} "${n}"
X-GDM-BypassXsession=true
Name=${n}
Comment=
EODESKTOP
'') names}
${concatMapStrings (pkg: ''
${xorg.lndir}/bin/lndir ${pkg}/share/xsessions $out/share/xsessions
'') cfg.displayManager.extraSessionFilePackages}
'';
in
@ -245,6 +239,14 @@ in
'';
};
extraSessionFilePackages = mkOption {
type = types.listOf types.package;
default = [];
description = ''
A list of packages containing xsession files to be passed to the display manager.
'';
};
session = mkOption {
default = [];
example = literalExample
@ -280,6 +282,7 @@ in
(filter (w: d.name != "none" || w.name != "none") wm));
desktops = mkDesktops names;
script = xsession wm dm;
wrapper = xsessionWrapper;
};
};
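
A hypothetical use of the new extraSessionFilePackages option, mirroring how the GNOME module above ships its session file:

```
{ pkgs, ... }:
{
  services.xserver.displayManager.extraSessionFilePackages =
    [ pkgs.gnome3.gnome-session ];
}
```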

View File

@ -109,7 +109,7 @@ in
environment = {
GDM_X_SERVER_EXTRA_ARGS = toString
(filter (arg: arg != "-terminate") cfg.xserverArgs);
GDM_SESSIONS_DIR = "${cfg.session.desktops}";
GDM_SESSIONS_DIR = "${cfg.session.desktops}/share/xsessions";
# Find the mouse
XCURSOR_PATH = "~/.icons:${pkgs.gnome3.adwaita-icon-theme}/share/icons";
};
@ -173,6 +173,8 @@ in
${optionalString cfg.gdm.debug "Enable=true"}
'';
environment.etc."gdm/Xsession".source = config.services.xserver.displayManager.session.wrapper;
# GDM LFS PAM modules, adapted somehow to NixOS
security.pam.services = {
gdm-launch-environment.text = ''

View File

@ -23,7 +23,7 @@ let
makeWrapper ${pkgs.lightdm_gtk_greeter}/sbin/lightdm-gtk-greeter \
$out/greeter \
--prefix PATH : "${pkgs.glibc.bin}/bin" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.gdk_pixbuf.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GTK_PATH "${theme}:${pkgs.gtk3.out}" \
--set GTK_EXE_PREFIX "${theme}" \
--set GTK_DATA_PREFIX "${theme}" \

View File

@ -15,7 +15,7 @@ let
inherit (pkgs) lightdm writeScript writeText;
# lightdm runs with clearenv(), but we need a few things in the enviornment for X to startup
# lightdm runs with clearenv(), but we need a few things in the environment for X to startup
xserverWrapper = writeScript "xserver-wrapper"
''
#! ${pkgs.bash}/bin/bash
@ -45,11 +45,11 @@ let
greeter-user = ${config.users.users.lightdm.name}
greeters-directory = ${cfg.greeter.package}
''}
sessions-directory = ${dmcfg.session.desktops}
sessions-directory = ${dmcfg.session.desktops}/share/xsessions
[Seat:*]
xserver-command = ${xserverWrapper}
session-wrapper = ${dmcfg.session.script}
session-wrapper = ${dmcfg.session.wrapper}
${optionalString cfg.greeter.enable ''
greeter-session = ${cfg.greeter.name}
''}
@ -176,21 +176,13 @@ in
LightDM auto-login requires services.xserver.displayManager.lightdm.autoLogin.user to be set
'';
}
{ assertion = cfg.autoLogin.enable -> elem defaultSessionName dmcfg.session.names;
{ assertion = cfg.autoLogin.enable -> dmDefault != "none" || wmDefault != "none";
message = ''
LightDM auto-login requires that services.xserver.desktopManager.default and
services.xserver.windowManager.default are set to valid values. The current
default session: ${defaultSessionName} is not valid.
'';
}
{ assertion = hasDefaultUserSession -> elem defaultSessionName dmcfg.session.names;
message = ''
services.xserver.desktopManager.default and
services.xserver.windowMananger.default are not set to valid
values. The current default session: ${defaultSessionName}
is not valid.
'';
}
{ assertion = !cfg.greeter.enable -> (cfg.autoLogin.enable && cfg.autoLogin.timeout == 0);
message = ''
LightDM can only run without greeter if automatic login is enabled and the timeout for it
@ -217,9 +209,12 @@ in
services.dbus.enable = true;
services.dbus.packages = [ lightdm ];
# lightdm uses the accounts daemon to rember language/window-manager per user
# lightdm uses the accounts daemon to remember language/window-manager per user
services.accounts-daemon.enable = true;
# Enable the accounts daemon to find lightdm's dbus interface
environment.systemPackages = [ lightdm ];
security.pam.services.lightdm = {
allowNullPassword = true;
startSession = true;

View File

@ -49,8 +49,8 @@ let
MinimumVT=${toString (if xcfg.tty != null then xcfg.tty else 7)}
ServerPath=${xserverWrapper}
XephyrPath=${pkgs.xorg.xorgserver.out}/bin/Xephyr
SessionCommand=${dmcfg.session.script}
SessionDir=${dmcfg.session.desktops}
SessionCommand=${dmcfg.session.wrapper}
SessionDir=${dmcfg.session.desktops}/share/xsessions
XauthPath=${pkgs.xorg.xauth}/bin/xauth
DisplayCommand=${Xsetup}
DisplayStopCommand=${Xstop}
@ -265,6 +265,7 @@ in
};
environment.etc."sddm.conf".source = cfgFile;
environment.pathsToLink = [ "/share/sddm/themes" ];
users.groups.sddm.gid = config.ids.gids.sddm;

Some files were not shown because too many files have changed in this diff.