Merge branch 'staging-next' into staging

Vladimír Čunát 2019-05-26 09:48:55 +02:00
commit b4ae841b23
No known key found for this signature in database
GPG Key ID: E747DF1F9575A3AA
717 changed files with 15953 additions and 13514 deletions


@ -953,7 +953,7 @@ is essentially a "free software" license (BSD3), according to
paragraph 2 of the LGPL, GHC must be distributed under the terms of the LGPL!
To work around these problems GHC can be build with a slower but LGPL-free
-alternative implemention for Integer called
+alternative implementation for Integer called
[integer-simple](http://hackage.haskell.org/package/integer-simple).
To get a GHC compiler build with `integer-simple` instead of `integer-gmp` use
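The hunk ends mid-sentence here. For orientation only, a minimal sketch of selecting an `integer-simple` GHC follows; the exact attribute name (`ghc864`) and the `haskell.compiler.integer-simple` path are assumptions that depend on the Nixpkgs revision in use.

# sketch.nix — build with: nix-build sketch.nix
with import <nixpkgs> {};
# assumption: an integer-simple compiler set exists for this GHC version
haskell.compiler.integer-simple.ghc864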


@ -1,12 +1,13 @@
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude">
<info>
-<title>Nixpkgs Contributors Guide</title>
+<title>Nixpkgs Users and Contributors Guide</title>
<subtitle>Version <xi:include href=".version" parse="text" />
</subtitle>
</info>
<xi:include href="introduction.chapter.xml" />
<xi:include href="quick-start.xml" />
<xi:include href="package-specific-user-notes.xml" />
<xi:include href="stdenv.xml" />
<xi:include href="multiple-output.xml" />
<xi:include href="cross-compilation.xml" />


@ -352,312 +352,6 @@ packageOverrides = pkgs: {
</screen>
</para>
</section>
<section xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as
an i686 package (the amd64 package only has documentation). When unpacked,
it has a script called <filename>steam</filename> that in ubuntu (their
target distro) would go to <filename>/usr/bin </filename>. When run for the
first time, this script copies some files to the user's home, which include
another script that is the ultimate responsible for launching the steam
binary, which is also in $HOME.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point
there. Similarly for <filename>/usr/bin/python</filename> .
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib </filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in $HOME can not be patched, as
it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched, it's also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible
chroot environment, as documented
<link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>.
This allows us to have binaries in the expected paths without disrupting
the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio - this will enable 32bit ALSA apps integration.
To use the Steam controller or other Steam supported controllers such as
the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS
17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that
conflics with the one dynamically loaded by radeonsi_dri.so. If you
get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at
<link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this
pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message
like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
You need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for steam can also be used to run other
linux games that expect a FHS environment. To do it, add
<programlisting>pkgs.(steam.override {
nativeOnly = true;
newStdcpp = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install that
packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
</screen>
<para>
You can install it like any other packages via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages. It will
not <literal>configure</literal> them for us. To do this, we need to
provide a configuration file. Luckily, it is possible to do this from
within Nix! By modifying the above example, we can make Emacs load a custom
config file. The key is to create a package that provide a
<filename>default.el</filename> file in
<filename>/share/emacs/site-start/</filename>. Emacs knows to load this
file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to
the user's presonal config. You can always disable it by passing
<command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this
package set has some priorities imposed on packages (with the lowest
priority assigned to Melpa Unstable, and the highest for packages manually
defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you
can't control this priorities when some package is installed as a
dependency. You can override it on per-package-basis, providing all the
required dependencies manually - but it's tedious and there is always a
possibility that an unwanted dependency will sneak in through some other
package. To completely override such a package you can use
<varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesNgGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these package will use haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>
<section xml:id="sec-weechat"> <section xml:id="sec-weechat">
<title>Weechat</title> <title>Weechat</title>
@ -762,64 +456,6 @@ stdenv.mkDerivation {
}</programlisting> }</programlisting>
</para> </para>
</section> </section>
<section xml:id="sec-citrix">
<title>Citrix Receiver</title>
<para>
The <link xlink:href="https://www.citrix.com/products/receiver/">Citrix
Receiver</link> is a remote desktop viewer which provides access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link>
installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the licenses
agreements of the vendor need to be accepted first. This is available at
the
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">download
page at citrix.com</link>. Then run <literal>nix-prefetch-url
file://$PWD/linuxx64-$version.tar.gz</literal>. With the archive available
in the store the package can be built and installed with Nix.
</para>
<para>
<emphasis>Note: it's recommended to install <literal>Citrix
Receiver</literal> using <literal>nix-env -i</literal> or globally to
ensure that the <literal>.desktop</literal> files are installed properly
into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to
open <literal>.ica</literal> files automatically from the browser to start
a Citrix connection.</emphasis>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> in <literal>nixpkgs</literal> trusts
several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the
Mozilla database</link> by default. However several companies using Citrix
might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In
order to work around this issue the package provides a simple mechanism to
add custom certificates without rebuilding the entire package using
<literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
<section xml:id="sec-ibus-typing-booster"> <section xml:id="sec-ibus-typing-booster">
<title>ibus-engines.typing-booster</title> <title>ibus-engines.typing-booster</title>
@ -858,7 +494,7 @@ citrix_receiver.override {
<para> <para>
The IBus engine is based on <literal>hunspell</literal> to support The IBus engine is based on <literal>hunspell</literal> to support
completion in many languages. By default the dictionaries completion in many languages. By default the dictionaries
<literal>de-de</literal>, <literal>en-us</literal>, <literal>de-de</literal>, <literal>en-us</literal>, <literal>fr-moderne</literal>
<literal>es-es</literal>, <literal>it-it</literal>, <literal>es-es</literal>, <literal>it-it</literal>,
<literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add <literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add
another dictionary, the package can be overridden like this: another dictionary, the package can be overridden like this:
@ -891,30 +527,51 @@ citrix_receiver.override {
</para>
</section>
</section>
-<section xml:id="dlib">
-<title>DLib</title>
-<para>
-<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which
-provides several machine learning algorithms.
-</para>
-<section xml:id="compiling-without-avx-support">
-<title>Compiling without AVX support</title>
-<para>
-Especially older CPUs don't support
-<link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link>
-(<abbrev>Advanced Vector Extensions</abbrev>) instructions that are used by DLib to
-optimize their algorithms.
-</para>
-<para>
-On the affected hardware errors like <literal>Illegal instruction</literal> will occur.
-In those cases AVX support needs to be disabled:
-<programlisting>self: super: {
-  dlib = super.dlib.override { avxSupport = false; };
-}</programlisting>
-</para>
-</section>
-</section>
+<section xml:id="sec-nginx">
+<title>Nginx</title>
+<para>
+<link xlink:href="https://nginx.org/">Nginx</link> is a
+reverse proxy and lightweight webserver.
+</para>
+<section xml:id="sec-nginx-etag">
+<title>ETags on static files served from the Nix store</title>
+<para>
+HTTP has a couple different mechanisms for caching to prevent
+clients from having to download the same content repeatedly
+if a resource has not changed since the last time it was requested.
+When nginx is used as a server for static files, it implements
+the caching mechanism based on the
+<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><literal>Last-Modified</literal></link>
+response header automatically; unfortunately, it works by using
+filesystem timestamps to determine the value of the
+<literal>Last-Modified</literal> header. This doesn't give the
+desired behavior when the file is in the Nix store, because all
+file timestamps are set to 0 (for reasons related to build
+reproducibility).
+</para>
+<para>
+Fortunately, HTTP supports an alternative (and more effective)
+caching mechanism: the
+<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag"><literal>ETag</literal></link>
+response header. The value of the <literal>ETag</literal> header
+specifies some identifier for the particular content that the
+server is sending (e.g. a hash). When a client makes a second
+request for the same resource, it sends that value back in an
+<literal>If-None-Match</literal> header. If the ETag value is
+unchanged, then the server does not need to resend the content.
+</para>
+<para>
+As of NixOS 19.09, the nginx package in Nixpkgs is patched such
+that when nginx serves a file out of <filename>/nix/store</filename>,
+the hash in the store path is used as the <literal>ETag</literal>
+header in the HTTP response, thus providing proper caching functionality.
+This happens automatically; you do not need to modify any
+configuration to get this behavior.
+</para>
+</section>
+</section>
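To make the new ETag behaviour concrete, here is a minimal NixOS sketch of a virtual host whose document root is a store path; the host name and the writeTextDir content are placeholders.

{ pkgs, ... }:
{
  services.nginx = {
    enable = true;
    # the root is a /nix/store path, so responses carry the store hash as ETag
    virtualHosts."docs.example.org".root =
      pkgs.writeTextDir "index.html" "<h1>Served from the Nix store</h1>";
  };
}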


@ -0,0 +1,469 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="package-specific-user-notes">
<title>Package-specific usage notes</title>
<para>
This chapter includes some notes
that apply to specific packages and should
answer some of the frequently asked questions
related to Nixpkgs use.
Some useful information related to package use
can be found in <link linkend="chap-package-notes">package-specific development notes</link>.
</para>
<section xml:id="opengl">
<title>OpenGL</title>
<para>
Packages that use OpenGL have NixOS desktop as their primary target. The
current solution for loading the GPU-specific drivers is based on
<literal>libglvnd</literal> and looks for the driver implementation in
<literal>LD_LIBRARY_PATH</literal>. If you are using a non-NixOS
GNU/Linux/X11 desktop with free software video drivers, consider launching
OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of
<literal>libglvnd</literal> and <literal>mesa_drivers</literal> in
<literal>LD_LIBRARY_PATH</literal>. For proprietary video drivers you might
have luck with also adding the corresponding video driver package.
</para>
</section>
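As a rough illustration of the suggestion above for non-NixOS desktops, here is a hedged sketch of a launcher that puts Nixpkgs' libglvnd and mesa_drivers on LD_LIBRARY_PATH; the wrapper name "gl-run" is made up.

# gl-run.nix — build with nix-build, then: ./result/bin/gl-run glxgears
with import <nixpkgs> {};
writeShellScriptBin "gl-run" ''
  # prepend Nixpkgs' GL client libraries and free drivers
  export LD_LIBRARY_PATH=${lib.makeLibraryPath [ libglvnd mesa_drivers ]}:$LD_LIBRARY_PATH
  exec "$@"
''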
<section xml:id="locales">
<title>Locales</title>
<para>
To allow simultaneous use of packages linked against different versions of
<literal>glibc</literal> with different locale archive formats Nixpkgs
patches <literal>glibc</literal> to rely on
<literal>LOCALE_ARCHIVE</literal> environment variable.
</para>
<para>
On non-NixOS distributions this variable is obviously not set. This can
cause regressions in language support or even crashes in some
Nixpkgs-provided programs. The simplest way to mitigate this problem is
exporting the <literal>LOCALE_ARCHIVE</literal> variable pointing to
<literal>${glibcLocales}/lib/locale/locale-archive</literal>. The drawback
(and the reason this is not the default) is the relatively large (a hundred
MiB) size of the full set of locales. It is possible to build a custom set
of locales by overriding parameters <literal>allLocales</literal> and
<literal>locales</literal> of the package.
</para>
</section>
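A minimal sketch of both points above: building a trimmed-down locale archive via the allLocales/locales parameters and exposing it through LOCALE_ARCHIVE in a development shell. The chosen locale list is only an example.

# shell.nix — enter with nix-shell
with import <nixpkgs> {};
let
  myLocales = glibcLocales.override {
    allLocales = false;                  # build only the locales listed below
    locales = [ "en_US.UTF-8/UTF-8" ];
  };
in
mkShell {
  # attributes of mkShell become environment variables in the shell
  LOCALE_ARCHIVE = "${myLocales}/lib/locale/locale-archive";
}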
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install that
packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
</screen>
<para>
You can install it like any other packages via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages. It will
not <literal>configure</literal> them for us. To do this, we need to
provide a configuration file. Luckily, it is possible to do this from
within Nix! By modifying the above example, we can make Emacs load a custom
config file. The key is to create a package that provides a
<filename>default.el</filename> file in
<filename>/share/emacs/site-start/</filename>. Emacs knows to load this
file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to
the user's personal config. You can always disable it by passing
<command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this
package set has some priorities imposed on packages (with the lowest
priority assigned to Melpa Unstable, and the highest for packages manually
defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you
can't control these priorities when a package is installed as a
dependency. You can override it on a per-package basis, providing all the
required dependencies manually - but it's tedious and there is always a
possibility that an unwanted dependency will sneak in through some other
package. To completely override such a package you can use
<varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesNgGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these package will use haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>
<section xml:id="dlib">
<title>DLib</title>
<para>
<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which
provides several machine learning algorithms.
</para>
<section xml:id="compiling-without-avx-support">
<title>Compiling without AVX support</title>
<para>
Especially older CPUs don't support
<link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link>
(<abbrev>Advanced Vector Extensions</abbrev>) instructions that are used by DLib to
optimize their algorithms.
</para>
<para>
On the affected hardware errors like <literal>Illegal instruction</literal> will occur.
In those cases AVX support needs to be disabled:
<programlisting>self: super: {
dlib = super.dlib.override { avxSupport = false; };
}</programlisting>
</para>
</section>
</section>
<section xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and
developers) of Nixpkgs want to limit and tightly control their exposure to
unfree software. At the same time, many users need (or want)
to run some specific
pieces of proprietary software. Nixpkgs includes some expressions for unfree
software packages. By default unfree software cannot be installed and
doesn't show up in searches. To allow installing unfree software in a
single Nix invocation one can export
<literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users
can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining
<literal>allowUnfreePredicate</literal> function in config; it takes the
<literal>mkDerivation</literal> parameter attrset and returns
<literal>true</literal> for unfree packages that should be allowed.
</para>
</section>
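A minimal sketch of the fine-grained control described above; the package names in the whitelist are just examples, and the pname/parseDrvName fallback is an assumption about how you want to match packages.

# ~/.config/nixpkgs/config.nix
{
  allowUnfreePredicate = pkg:
    builtins.elem (pkg.pname or (builtins.parseDrvName pkg.name).name) [
      "steam"
      "steam-original"
      "steam-runtime"
    ];
}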
<section xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as
an i686 package (the amd64 package only has documentation). When unpacked,
it has a script called <filename>steam</filename> that in Ubuntu (their
target distro) would go to <filename>/usr/bin </filename>. When run for the
first time, this script copies some files to the user's home, which include
another script that is ultimately responsible for launching the steam
binary, which is also in $HOME.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point
there. Similarly for <filename>/usr/bin/python</filename> .
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib </filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in $HOME can not be patched, as
it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched, it's also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible
chroot environment, as documented
<link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>.
This allows us to have binaries in the expected paths without disrupting
the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio - this will enable 32bit ALSA apps integration.
To use the Steam controller or other Steam supported controllers such as
the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
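For reference, the options above combine into a configuration fragment like the sketch below; installing steam through environment.systemPackages is an assumption about how you prefer to install it.

# /etc/nixos/configuration.nix (fragment)
{ pkgs, ... }:
{
  hardware.opengl.driSupport32Bit = true;
  hardware.pulseaudio.support32Bit = true;
  hardware.steam-hardware.enable = true;
  environment.systemPackages = [ pkgs.steam ];
}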
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS
17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that
conflicts with the one dynamically loaded by radeonsi_dri.so. If you
get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at
<link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this
pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message
like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
You need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
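One way to apply the Java override mentioned above is through an overlay, sketched below; the overlay file location (~/.config/nixpkgs/overlays/steam.nix) is an assumption about how you manage overlays.

# ~/.config/nixpkgs/overlays/steam.nix
self: super: {
  # replace the default steam with one that bundles a Java runtime
  steam = super.steam.override { withJava = true; };
}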
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for steam can also be used to run other
Linux games that expect an FHS environment. To do so, add
<programlisting>(pkgs.steam.override {
  nativeOnly = true;
  newStdcpp = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>
<section xml:id="sec-citrix">
<title>Citrix Receiver</title>
<para>
The <link xlink:href="https://www.citrix.com/products/receiver/">Citrix
Receiver</link> is a remote desktop viewer which provides access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link>
installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license
agreements of the vendor need to be accepted first. This is available at
the
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">download
page at citrix.com</link>. Then run <literal>nix-prefetch-url
file://$PWD/linuxx64-$version.tar.gz</literal>. With the archive available
in the store the package can be built and installed with Nix.
</para>
<para>
<emphasis>Note: it's recommended to install <literal>Citrix
Receiver</literal> using <literal>nix-env -i</literal> or globally to
ensure that the <literal>.desktop</literal> files are installed properly
into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to
open <literal>.ica</literal> files automatically from the browser to start
a Citrix connection.</emphasis>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> in <literal>nixpkgs</literal> trusts
several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the
Mozilla database</link> by default. However several companies using Citrix
might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In
order to work around this issue the package provides a simple mechanism to
add custom certificates without rebuilding the entire package using
<literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
</chapter>


@ -747,7 +747,8 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
latter is convenient from a build script. However, typically one only wants
to <emphasis>add</emphasis> some commands to a phase, e.g. by defining
<literal>postInstall</literal> or <literal>preFixup</literal>, as skipping
-some of the default actions may have unexpected consequences.
+some of the default actions may have unexpected consequences. The default
+script for each phase is defined in the file <filename>pkgs/stdenv/generic/setup.sh</filename>.
</para>
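A minimal sketch of the "add to a phase instead of replacing it" advice above, expressed as an overlay on an existing package (hello is used purely as an example):

self: super: {
  hello = super.hello.overrideAttrs (oldAttrs: {
    # runs after the default installPhase; the default phase scripts live in
    # pkgs/stdenv/generic/setup.sh as noted above
    postInstall = (oldAttrs.postInstall or "") + ''
      mkdir -p $out/share/doc/hello
      echo "installed with an extra post-install step" > $out/share/doc/hello/NOTE
    '';
  });
}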
<section xml:id="ssec-controlling-phases"> <section xml:id="ssec-controlling-phases">
@ -1579,7 +1580,7 @@ installTargets = "install-bin install-doc";</programlisting>
</term> </term>
<listitem> <listitem>
<para> <para>
Like <varname>dontStripHost</varname>, but only affects the Like <varname>dontStrip</varname>, but only affects the
<command>strip</command> command targetting the package's host platform. <command>strip</command> command targetting the package's host platform.
Useful when supporting cross compilation, but otherwise feel free to Useful when supporting cross compilation, but otherwise feel free to
ignore. ignore.
@ -1592,7 +1593,7 @@ installTargets = "install-bin install-doc";</programlisting>
</term> </term>
<listitem> <listitem>
<para> <para>
Like <varname>dontStripHost</varname>, but only affects the Like <varname>dontStrip</varname>, but only affects the
<command>strip</command> command targetting the packages' target <command>strip</command> command targetting the packages' target
platform. Useful when supporting cross compilation, but otherwise feel platform. Useful when supporting cross compilation, but otherwise feel
free to ignore. free to ignore.


@ -178,7 +178,7 @@ rec {
toPlist = {}: v: let
isFloat = builtins.isFloat or (x: false);
expr = ind: x: with builtins;
-if isNull x then "" else
+if x == null then "" else
if isBool x then bool ind x else
if isInt x then int ind x else
if isString x then str ind x else
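This and the following hunks all make the same mechanical change: comparing against null directly instead of calling the deprecated builtins.isNull. A tiny self-contained illustration of the idiom (the helper name orDefault is made up):

# evaluate with: nix-instantiate --eval --strict idiom.nix
let
  # prefer `x == null` over `isNull x`
  orDefault = default: x: if x == null then default else x;
in {
  a = orDefault "fallback" null;     # => "fallback"
  b = orDefault "fallback" "value";  # => "value"
}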


@ -83,7 +83,7 @@ rec {
# Sometimes git stores the commitId directly in the file but
# sometimes it stores something like: «ref: refs/heads/branch-name»
matchRef = match "^ref: (.*)$" fileContent;
-in if isNull matchRef
+in if matchRef == null
then fileContent
else readCommitFromFile (lib.head matchRef) path
# Sometimes, the file isn't there at all and has been packed away in the
@ -92,7 +92,7 @@ rec {
then
let fileContent = readFile packedRefsName;
matchRef = match (".*\n([^\n ]*) " + file + "\n.*") fileContent;
-in if isNull matchRef
+in if matchRef == null
then throw ("Could not find " + file + " in " + packedRefsName)
else lib.head matchRef
else throw ("Not a .git directory: " + path);


@ -112,7 +112,7 @@ rec {
# Function to call
f:
# Argument to check for null before passing it to `f`
-a: if isNull a then a else f a;
+a: if a == null then a else f a;
# Pull in some builtins not included elsewhere.
inherit (builtins)


@ -375,6 +375,11 @@
github = "ankhers"; github = "ankhers";
name = "Justin Wood"; name = "Justin Wood";
}; };
anpryl = {
email = "anpryl@gmail.com";
github = "anpryl";
name = "Anatolii Prylutskyi";
};
anton-dessiatov = { anton-dessiatov = {
email = "anton.dessiatov@gmail.com"; email = "anton.dessiatov@gmail.com";
github = "anton-dessiatov"; github = "anton-dessiatov";
@ -762,6 +767,11 @@
github = "brian-dawn"; github = "brian-dawn";
name = "Brian Dawn"; name = "Brian Dawn";
}; };
brianhicks = {
email = "brian@brianthicks.com";
github = "BrianHicks";
name = "Brian Hicks";
};
bricewge = { bricewge = {
email = "bricewge@gmail.com"; email = "bricewge@gmail.com";
github = "bricewge"; github = "bricewge";
@ -876,6 +886,11 @@
github = "ceedubs"; github = "ceedubs";
name = "Cody Allen"; name = "Cody Allen";
}; };
cf6b88f = {
email = "elmo.todurov@eesti.ee";
github = "cf6b88f";
name = "Elmo Todurov";
};
cfouche = { cfouche = {
email = "chaddai.fouche@gmail.com"; email = "chaddai.fouche@gmail.com";
github = "Chaddai"; github = "Chaddai";
@ -1354,9 +1369,13 @@
name = "David Sferruzza"; name = "David Sferruzza";
}; };
dtzWill = { dtzWill = {
email = "nix@wdtz.org"; email = "w@wdtz.org";
github = "dtzWill"; github = "dtzWill";
name = "Will Dietz"; name = "Will Dietz";
keys = [{
longkeyid = "rsa4096/0xFD42C7D0D41494C8";
fingerprint = "389A 78CB CD88 5E0C 4701 DEB9 FD42 C7D0 D414 94C8";
}];
}; };
dxf = { dxf = {
email = "dingxiangfei2009@gmail.com"; email = "dingxiangfei2009@gmail.com";
@ -1737,6 +1756,13 @@
github = "fps"; github = "fps";
name = "Florian Paul Schmidt"; name = "Florian Paul Schmidt";
}; };
fragamus = {
email = "innovative.engineer@gmail.com";
github = "fragamus";
name = "Michael Gough";
};
fredeb = { fredeb = {
email = "im@fredeb.dev"; email = "im@fredeb.dev";
github = "fredeeb"; github = "fredeeb";
@ -2470,6 +2496,11 @@
github = "jtojnar"; github = "jtojnar";
name = "Jan Tojnar"; name = "Jan Tojnar";
}; };
juaningan = {
email = "juaningan@gmail.com";
github = "juaningan";
name = "Juan Rodal";
};
juliendehos = { juliendehos = {
email = "dehos@lisic.univ-littoral.fr"; email = "dehos@lisic.univ-littoral.fr";
github = "juliendehos"; github = "juliendehos";
@ -3989,6 +4020,11 @@
github = "Ptival"; github = "Ptival";
name = "Valentin Robert"; name = "Valentin Robert";
}; };
ptrhlm = {
email = "ptrhlm0@gmail.com";
github = "ptrhlm";
name = "Piotr Halama";
};
puffnfresh = { puffnfresh = {
email = "brian@brianmckenna.org"; email = "brian@brianmckenna.org";
github = "puffnfresh"; github = "puffnfresh";
@ -4356,6 +4392,11 @@
github = "samdroid-apps"; github = "samdroid-apps";
name = "Sam Parkinson"; name = "Sam Parkinson";
}; };
samrose = {
email = "samuel.rose@gmail.com";
github = "samrose";
name = "Sam Rose";
};
samueldr = { samueldr = {
email = "samuel@dionne-riel.com"; email = "samuel@dionne-riel.com";
github = "samueldr"; github = "samueldr";
@ -4376,6 +4417,11 @@
github = "sargon"; github = "sargon";
name = "Daniel Ehlers"; name = "Daniel Ehlers";
}; };
saschagrunert = {
email = "mail@saschagrunert.de";
github = "saschagrunert";
name = "Sascha Grunert";
};
sauyon = { sauyon = {
email = "s@uyon.co"; email = "s@uyon.co";
github = "sauyon"; github = "sauyon";
@ -4479,6 +4525,11 @@
github = "sfrijters"; github = "sfrijters";
name = "Stefan Frijters"; name = "Stefan Frijters";
}; };
sgraf = {
email = "sgraf1337@gmail.com";
github = "sgraf812";
name = "Sebastian Graf";
};
shanemikel = { shanemikel = {
email = "shanemikel1@gmail.com"; email = "shanemikel1@gmail.com";
github = "shanemikel"; github = "shanemikel";


@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-release-19.03">
-<title>Release 19.03 (“Koi”, 2019/03/??)</title>
+<title>Release 19.03 (“Koi”, 2019/04/11)</title>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
@ -18,6 +18,11 @@
</para>
<itemizedlist>
<listitem>
<para>
End of support is planned for end of October 2019, handing over to 19.09.
</para>
</listitem>
<listitem>
<para>
The default Python 3 interpreter is now CPython 3.7 instead of CPython


@ -19,7 +19,9 @@
<itemizedlist>
<listitem>
-<para />
+<para>
End of support is planned for end of April 2020, handing over to 20.03.
</para>
</listitem>
</itemizedlist>
</section>
@ -154,6 +156,12 @@
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
The <literal>hunspellDicts.fr-any</literal> dictionary now ships with <literal>fr_FR.{aff,dic}</literal>
which is linked to <literal>fr-toutesvariantes.{aff,dic}</literal>.
</para>
</listitem>
</itemizedlist>
</section>
</section>


@ -145,7 +145,7 @@ let
displayOptionsGraph =
let
checkList =
-if !(isNull testOption) then [ testOption ]
+if testOption != null then [ testOption ]
else testOptions;
checkAll = checkList == [];
in


@ -31,7 +31,7 @@ let
# use latest when no version is passed
makeCacheConf = { version ? null }:
let
-fcPackage = if builtins.isNull version
+fcPackage = if version == null
then "fontconfig"
else "fontconfig_${version}";
makeCache = fontconfig: pkgs.makeFontsCache { inherit fontconfig; fontDirectories = config.fonts.fonts; };


@ -46,7 +46,7 @@ let cfg = config.fonts.fontconfig;
# use latest when no version is passed
makeCacheConf = { version ? null }:
let
-fcPackage = if builtins.isNull version
+fcPackage = if version == null
then "fontconfig"
else "fontconfig_${version}";
makeCache = fontconfig: pkgs.makeFontsCache { inherit fontconfig; fontDirectories = config.fonts.fonts; };


@ -8,7 +8,7 @@ let
name = "sysctl option value"; name = "sysctl option value";
check = val: check = val:
let let
checkType = x: isBool x || isString x || isInt x || isNull x; checkType = x: isBool x || isString x || isInt x || x == null;
in in
checkType val || (val._type or "" == "override" && checkType val.content); checkType val || (val._type or "" == "override" && checkType val.content);
merge = loc: defs: mergeOneOption loc (filterOverrides defs); merge = loc: defs: mergeOneOption loc (filterOverrides defs);


@ -63,8 +63,7 @@ in {
b43Firmware_5_1_138
b43Firmware_6_30_163_46
b43FirmwareCutter
-facetimehd-firmware
-];
+] ++ optional (pkgs.stdenv.hostPlatform.isi686 || pkgs.stdenv.hostPlatform.isx86_64) facetimehd-firmware;
})
];
}


@ -198,7 +198,7 @@ let
fi
${ # When there is a theme configured, use it, otherwise use the background image.
-if (!isNull config.isoImage.grubTheme) then ''
+if config.isoImage.grubTheme != null then ''
# Sets theme.
set theme=(hd0)/EFI/boot/grub-theme/theme.txt
# Load theme fonts
@ -622,7 +622,7 @@ in
{ source = "${pkgs.memtest86plus}/memtest.bin";
target = "/boot/memtest.bin";
}
-] ++ optionals (!isNull config.isoImage.grubTheme) [
+] ++ optionals (config.isoImage.grubTheme != null) [
{ source = config.isoImage.grubTheme;
target = "/EFI/boot/grub-theme";
}


@ -464,6 +464,21 @@ EOF
}
}
# For lack of a better way to determine it, guess whether we should use a
# bigger font for the console from the display mode on the first
# framebuffer. A way based on the physical size/actual DPI reported by
# the monitor would be nice, but I don't know how to do this without X :)
my $fb_modes_file = "/sys/class/graphics/fb0/modes";
if (-f $fb_modes_file && -r $fb_modes_file) {
my $modes = read_file($fb_modes_file);
$modes =~ m/([0-9]+)x([0-9]+)/;
my $console_width = $1, my $console_height = $2;
if ($console_width > 1920) {
push @attrs, "# High-DPI console";
push @attrs, 'i18n.consoleFont = lib.mkDefault "${pkgs.terminus_font}/share/consolefonts/ter-u28n.psf.gz";';
}
}
# Generate the hardware configuration file.


@ -36,7 +36,7 @@ let
nixos-generate-config = makeProg {
name = "nixos-generate-config";
src = ./nixos-generate-config.pl;
-path = [ pkgs.btrfs-progs ];
+path = lib.optionals (lib.elem "btrfs" config.boot.supportedFilesystems) [ pkgs.btrfs-progs ];
perl = "${pkgs.perl}/bin/perl -I${pkgs.perlPackages.FileSlurp}/${pkgs.perl.libPrefix}";
inherit (config.system.nixos) release;
};


@ -265,7 +265,7 @@
syncthing = 237;
caddy = 239;
taskd = 240;
-factorio = 241;
+# factorio = 241; # DynamicUser = true
# emby = 242; # unusued, removed 2019-05-01
graylog = 243;
sniproxy = 244;
@ -567,7 +567,7 @@
syncthing = 237;
caddy = 239;
taskd = 240;
-factorio = 241;
+# factorio = 241; # unused
# emby = 242; # unused, removed 2019-05-01
sniproxy = 244;
nzbget = 245;


@ -536,6 +536,7 @@
./services/networking/avahi-daemon.nix
./services/networking/babeld.nix
./services/networking/bind.nix
./services/networking/bitcoind.nix
./services/networking/autossh.nix
./services/networking/bird.nix
./services/networking/bitlbee.nix
@ -793,7 +794,6 @@
./services/web-servers/traefik.nix
./services/web-servers/uwsgi.nix
./services/web-servers/varnish/default.nix
./services/web-servers/winstone.nix
./services/web-servers/zope2.nix
./services/x11/colord.nix
./services/x11/compton.nix


@ -34,7 +34,7 @@ let
bashAliases = concatStringsSep "\n" (
mapAttrsFlatten (k: v: "alias ${k}=${escapeShellArg v}")
-(filterAttrs (k: v: !isNull v) cfg.shellAliases)
+(filterAttrs (k: v: v != null) cfg.shellAliases)
);
in


@ -10,7 +10,7 @@ let
fishAliases = concatStringsSep "\n" (
mapAttrsFlatten (k: v: "alias ${k} ${escapeShellArg v}")
-(filterAttrs (k: v: !isNull v) cfg.shellAliases)
+(filterAttrs (k: v: v != null) cfg.shellAliases)
);
in


@ -148,7 +148,7 @@ in
UseSTARTTLS=${yesNo cfg.useSTARTTLS}
#Debug=YES
${optionalString (cfg.authUser != "") "AuthUser=${cfg.authUser}"}
-${optionalString (!isNull cfg.authPassFile) "AuthPassFile=${cfg.authPassFile}"}
+${optionalString (cfg.authPassFile != null) "AuthPassFile=${cfg.authPassFile}"}
'';
environment.systemPackages = [pkgs.ssmtp];


@ -12,7 +12,7 @@ let
zshAliases = concatStringsSep "\n" (
mapAttrsFlatten (k: v: "alias ${k}=${escapeShellArg v}")
-(filterAttrs (k: v: !isNull v) cfg.shellAliases)
+(filterAttrs (k: v: v != null) cfg.shellAliases)
);
in


@ -210,6 +210,7 @@ with lib;
(mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.") (mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.")
(mkRemovedOptionModule [ "services" "logstash" "enableWeb" ] "The web interface was removed from logstash") (mkRemovedOptionModule [ "services" "logstash" "enableWeb" ] "The web interface was removed from logstash")
(mkRemovedOptionModule [ "boot" "zfs" "enableLegacyCrypto" ] "The corresponding package was removed from nixpkgs.") (mkRemovedOptionModule [ "boot" "zfs" "enableLegacyCrypto" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "winstone" ] "The corresponding package was removed from nixpkgs.")
# ZSH # ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ]) (mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])


@ -248,7 +248,7 @@ let
cfg = config.services.znapzend;
onOff = b: if b then "on" else "off";
-nullOff = b: if isNull b then "off" else toString b;
+nullOff = b: if b == null then "off" else toString b;
stripSlashes = replaceStrings [ "/" ] [ "." ];
attrsToFile = config: concatStringsSep "\n" (builtins.attrValues (
@ -256,7 +256,7 @@ let
mkDestAttrs = dst: with dst;
mapAttrs' (n: v: nameValuePair "dst_${label}${n}" v) ({
-"" = optionalString (! isNull host) "${host}:" + dataset;
+"" = optionalString (host != null) "${host}:" + dataset;
_plan = plan;
} // optionalAttrs (presend != null) {
_precmd = presend;


@ -3,7 +3,7 @@
with lib;
let
-version = "1.3.1";
+version = "1.5.0";
cfg = config.services.kubernetes.addons.dns;
ports = {
dns = 10053;
@ -55,9 +55,9 @@ in {
type = types.attrs;
default = {
imageName = "coredns/coredns";
-imageDigest = "sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4";
+imageDigest = "sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c";
finalImageTag = version;
-sha256 = "0vbylgyxv2jm2mnzk6f28jbsj305zsxmx3jr6ngjq461czcl5fi5";
+sha256 = "15sbmhrxjxidj0j0cccn1qxpg6al175w43m6ngspl0mc132zqc9q";
};
};
};
@ -160,7 +160,7 @@ in {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :${toString ports.metrics}
-proxy . /etc/resolv.conf
+forward . /etc/resolv.conf
cache 30
loop
reload


@ -184,6 +184,12 @@ in
type = bool;
};
preferredAddressTypes = mkOption {
description = "List of the preferred NodeAddressTypes to use for kubelet connections.";
type = nullOr str;
default = null;
};
proxyClientCertFile = mkOption {
description = "Client certificate to use for connections to proxy.";
default = null;
@ -349,6 +355,8 @@
"--kubelet-client-certificate=${cfg.kubeletClientCertFile}"} \
${optionalString (cfg.kubeletClientKeyFile != null)
"--kubelet-client-key=${cfg.kubeletClientKeyFile}"} \
${optionalString (cfg.preferredAddressTypes != null)
"--kubelet-preferred-address-types=${cfg.preferredAddressTypes}"} \
${optionalString (cfg.proxyClientCertFile != null)
"--proxy-client-cert-file=${cfg.proxyClientCertFile}"} \
${optionalString (cfg.proxyClientKeyFile != null)
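A hedged usage sketch for the new option; the full option path (services.kubernetes.apiserver.preferredAddressTypes) and the example value are assumptions based on this module and the upstream kubelet flag.

{
  # rendered into --kubelet-preferred-address-types as shown above
  services.kubernetes.apiserver.preferredAddressTypes =
    "InternalIP,Hostname,ExternalIP";
}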


@ -7,9 +7,9 @@ let
cfg = top.kubelet;
cniConfig =
-if cfg.cni.config != [] && !(isNull cfg.cni.configDir) then
+if cfg.cni.config != [] && cfg.cni.configDir != null then
throw "Verbatim CNI-config and CNI configDir cannot both be set."
-else if !(isNull cfg.cni.configDir) then
+else if cfg.cni.configDir != null then
cfg.cni.configDir
else
(pkgs.buildEnv {
@ -373,7 +373,7 @@ in
boot.kernelModules = ["br_netfilter"];
services.kubernetes.kubelet.hostname = with config.networking;
-mkDefault (hostName + optionalString (!isNull domain) ".${domain}");
+mkDefault (hostName + optionalString (domain != null) ".${domain}");
services.kubernetes.pki.certs = with top.lib; {
kubelet = mkCert {


@ -285,7 +285,7 @@ in
};
};
-environment.etc.${cfg.etcClusterAdminKubeconfig}.source = mkIf (!isNull cfg.etcClusterAdminKubeconfig)
+environment.etc.${cfg.etcClusterAdminKubeconfig}.source = mkIf (cfg.etcClusterAdminKubeconfig != null)
(top.lib.mkKubeConfig "cluster-admin" clusterAdminKubeconfig);
environment.systemPackages = mkIf (top.kubelet.enable || top.proxy.enable) [


@ -236,7 +236,7 @@ in
};
assertions = [
-{ assertion = cfg.hooksPath == hooksDir || all isNull (attrValues cfg.hooks);
+{ assertion = cfg.hooksPath == hooksDir || all (v: v == null) (attrValues cfg.hooks);
message = ''
Options `services.buildkite-agent.hooksPath' and
`services.buildkite-agent.hooks.<name>' are mutually exclusive.


@ -189,7 +189,7 @@ in {
preStart =
let replacePlugins =
-if isNull cfg.plugins
+if cfg.plugins == null
then ""
else
let pluginCmds = lib.attrsets.mapAttrsToList


@ -22,11 +22,11 @@ let
else {})
);
cassandraConfigWithAddresses = cassandraConfig //
-( if isNull cfg.listenAddress
+( if cfg.listenAddress == null
then { listen_interface = cfg.listenInterface; }
else { listen_address = cfg.listenAddress; }
) // (
-if isNull cfg.rpcAddress
+if cfg.rpcAddress == null
then { rpc_interface = cfg.rpcInterface; }
else { rpc_address = cfg.rpcAddress; }
);
@ -219,19 +219,13 @@ in {
config = mkIf cfg.enable {
assertions =
[ { assertion =
-((isNull cfg.listenAddress)
-|| (isNull cfg.listenInterface)
-) && !((isNull cfg.listenAddress)
-&& (isNull cfg.listenInterface)
-);
+(cfg.listenAddress == null || cfg.listenInterface == null)
+&& !(cfg.listenAddress == null && cfg.listenInterface == null);
message = "You have to set either listenAddress or listenInterface";
}
{ assertion =
-((isNull cfg.rpcAddress)
-|| (isNull cfg.rpcInterface)
-) && !((isNull cfg.rpcAddress)
-&& (isNull cfg.rpcInterface)
-);
+(cfg.rpcAddress == null || cfg.rpcInterface == null)
+&& !(cfg.rpcAddress == null && cfg.rpcInterface == null);
message = "You have to set either rpcAddress or rpcInterface";
}
];
@ -276,7 +270,7 @@ in {
};
};
systemd.timers.cassandra-full-repair =
-mkIf (!isNull cfg.fullRepairInterval) {
+mkIf (cfg.fullRepairInterval != null) {
description = "Schedule full repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =
@ -300,7 +294,7 @@ in {
};
};
systemd.timers.cassandra-incremental-repair =
-mkIf (!isNull cfg.incrementalRepairInterval) {
+mkIf (cfg.incrementalRepairInterval != null) {
description = "Schedule incremental repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =

View File

@ -7,7 +7,7 @@ let
crdb = cfg.package; crdb = cfg.package;
escape = builtins.replaceStrings ["%"] ["%%"]; escape = builtins.replaceStrings ["%"] ["%%"];
ifNotNull = v: s: optionalString (!isNull v) s; ifNotNull = v: s: optionalString (v != null) s;
startupCommand = lib.concatStringsSep " " startupCommand = lib.concatStringsSep " "
[ # Basic startup [ # Basic startup
@ -164,7 +164,7 @@ in
config = mkIf config.services.cockroachdb.enable { config = mkIf config.services.cockroachdb.enable {
assertions = [ assertions = [
{ assertion = !cfg.insecure -> !(isNull cfg.certsDir); { assertion = !cfg.insecure -> cfg.certsDir != null;
message = "CockroachDB must have a set of SSL certificates (.certsDir), or run in Insecure Mode (.insecure = true)"; message = "CockroachDB must have a set of SSL certificates (.certsDir), or run in Insecure Mode (.insecure = true)";
} }
]; ];

View File

@ -36,6 +36,10 @@ let
memory = ${cfg.memory} memory = ${cfg.memory}
storage_memory = ${cfg.storageMemory} storage_memory = ${cfg.storageMemory}
${optionalString (lib.versionAtLeast cfg.package.version "6.1") ''
trace_format = ${cfg.traceFormat}
''}
${optionalString (cfg.tls != null) '' ${optionalString (cfg.tls != null) ''
tls_plugin = ${pkg}/libexec/plugins/FDBLibTLS.so tls_plugin = ${pkg}/libexec/plugins/FDBLibTLS.so
tls_certificate_file = ${cfg.tls.certificate} tls_certificate_file = ${cfg.tls.certificate}
@ -317,9 +321,24 @@ in
default = "/run/foundationdb.pid"; default = "/run/foundationdb.pid";
description = "Path to pidfile for fdbmonitor."; description = "Path to pidfile for fdbmonitor.";
}; };
traceFormat = mkOption {
type = types.enum [ "xml" "json" ];
default = "xml";
description = "Trace logging format.";
};
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
assertions = [
{ assertion = lib.versionOlder cfg.package.version "6.1" -> cfg.traceFormat == "xml";
message = ''
Versions of FoundationDB before 6.1 do not support configurable trace formats (only XML is supported).
This option has no effect for version '' + cfg.package.version + '', and enabling it is an error.
'';
}
];
environment.systemPackages = [ pkg ]; environment.systemPackages = [ pkg ];
users.users = optionalAttrs (cfg.user == "foundationdb") (singleton users.users = optionalAttrs (cfg.user == "foundationdb") (singleton
@ -382,7 +401,7 @@ in
chown -R ${cfg.user}:${cfg.group} ${cfg.pidfile} chown -R ${cfg.user}:${cfg.group} ${cfg.pidfile}
for x in "${cfg.logDir}" "${cfg.dataDir}"; do for x in "${cfg.logDir}" "${cfg.dataDir}"; do
[ ! -d "$x" ] && mkdir -m 0700 -vp "$x"; [ ! -d "$x" ] && mkdir -m 0770 -vp "$x";
chown -R ${cfg.user}:${cfg.group} "$x"; chown -R ${cfg.user}:${cfg.group} "$x";
done done
@ -404,7 +423,7 @@ in
postStart = '' postStart = ''
if [ -e "${cfg.dataDir}/.first_startup" ]; then if [ -e "${cfg.dataDir}/.first_startup" ]; then
fdbcli --exec "configure new single memory" fdbcli --exec "configure new single ssd"
rm -f "${cfg.dataDir}/.first_startup"; rm -f "${cfg.dataDir}/.first_startup";
fi fi
''; '';
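As a rough usage sketch of the new trace-format knob (assuming a 6.1-series package attribute such as pkgs.foundationdb61; older versions must keep the "xml" default per the assertion above), JSON trace logging could be requested like this:

  # Illustrative only; pkgs.foundationdb61 is an assumed attribute name.
  services.foundationdb = {
    enable = true;
    package = pkgs.foundationdb61;
    traceFormat = "json";
  };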

View File

@ -8,12 +8,13 @@ let
mongodb = cfg.package; mongodb = cfg.package;
mongoCnf = pkgs.writeText "mongodb.conf" mongoCnf = cfg: pkgs.writeText "mongodb.conf"
'' ''
net.bindIp: ${cfg.bind_ip} net.bindIp: ${cfg.bind_ip}
${optionalString cfg.quiet "systemLog.quiet: true"} ${optionalString cfg.quiet "systemLog.quiet: true"}
systemLog.destination: syslog systemLog.destination: syslog
storage.dbPath: ${cfg.dbpath} storage.dbPath: ${cfg.dbpath}
${optionalString cfg.enableAuth "security.authorization: enabled"}
${optionalString (cfg.replSetName != "") "replication.replSetName: ${cfg.replSetName}"} ${optionalString (cfg.replSetName != "") "replication.replSetName: ${cfg.replSetName}"}
${cfg.extraConfig} ${cfg.extraConfig}
''; '';
@ -59,6 +60,18 @@ in
description = "quieter output"; description = "quieter output";
}; };
enableAuth = mkOption {
type = types.bool;
default = false;
description = "Enable client authentication. Creates a default superuser with username root!";
};
initialRootPassword = mkOption {
type = types.nullOr types.string;
default = null;
description = "Password for the root user if auth is enabled.";
};
dbpath = mkOption { dbpath = mkOption {
default = "/var/db/mongodb"; default = "/var/db/mongodb";
description = "Location where MongoDB stores its files"; description = "Location where MongoDB stores its files";
@ -84,6 +97,14 @@ in
''; '';
description = "MongoDB extra configuration in YAML format"; description = "MongoDB extra configuration in YAML format";
}; };
initialScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
A file containing MongoDB statements to execute on first startup.
'';
};
}; };
}; };
@ -92,6 +113,11 @@ in
###### implementation ###### implementation
config = mkIf config.services.mongodb.enable { config = mkIf config.services.mongodb.enable {
assertions = [
{ assertion = !cfg.enableAuth || cfg.initialRootPassword != null;
message = "`enableAuth` requires `initialRootPassword` to be set.";
}
];
users.users.mongodb = mkIf (cfg.user == "mongodb") users.users.mongodb = mkIf (cfg.user == "mongodb")
{ name = "mongodb"; { name = "mongodb";
@ -108,7 +134,7 @@ in
after = [ "network.target" ]; after = [ "network.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${mongodb}/bin/mongod --config ${mongoCnf} --fork --pidfilepath ${cfg.pidFile}"; ExecStart = "${mongodb}/bin/mongod --config ${mongoCnf cfg} --fork --pidfilepath ${cfg.pidFile}";
User = cfg.user; User = cfg.user;
PIDFile = cfg.pidFile; PIDFile = cfg.pidFile;
Type = "forking"; Type = "forking";
@ -116,15 +142,50 @@ in
PermissionsStartOnly = true; PermissionsStartOnly = true;
}; };
preStart = '' preStart = let
cfg_ = cfg // { enableAuth = false; bind_ip = "127.0.0.1"; };
in ''
rm ${cfg.dbpath}/mongod.lock || true rm ${cfg.dbpath}/mongod.lock || true
if ! test -e ${cfg.dbpath}; then if ! test -e ${cfg.dbpath}; then
install -d -m0700 -o ${cfg.user} ${cfg.dbpath} install -d -m0700 -o ${cfg.user} ${cfg.dbpath}
# See postStart!
touch ${cfg.dbpath}/.first_startup
fi fi
if ! test -e ${cfg.pidFile}; then if ! test -e ${cfg.pidFile}; then
install -D -o ${cfg.user} /dev/null ${cfg.pidFile} install -D -o ${cfg.user} /dev/null ${cfg.pidFile}
fi '' + lib.optionalString cfg.enableAuth ''
if ! test -e "${cfg.dbpath}/.auth_setup_complete"; then
systemd-run --unit=mongodb-for-setup --uid=${cfg.user} ${mongodb}/bin/mongod --config ${mongoCnf cfg_}
# wait for mongodb
while ! ${mongodb}/bin/mongo --eval "db.version()" > /dev/null 2>&1; do sleep 0.1; done
${mongodb}/bin/mongo <<EOF
use admin
db.createUser(
{
user: "root",
pwd: "${cfg.initialRootPassword}",
roles: [
{ role: "userAdminAnyDatabase", db: "admin" },
{ role: "dbAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" }
]
}
)
EOF
touch "${cfg.dbpath}/.auth_setup_complete"
systemctl stop mongodb-for-setup
fi fi
''; '';
postStart = ''
if test -e "${cfg.dbpath}/.first_startup"; then
${optionalString (cfg.initialScript != null) ''
${mongodb}/bin/mongo -u root -p ${cfg.initialRootPassword} admin "${cfg.initialScript}"
''}
rm -f "${cfg.dbpath}/.first_startup"
fi
'';
}; };
}; };
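A minimal sketch of how the new authentication options might combine with initialScript; the password and the script contents are placeholders, not values from this change:

  services.mongodb = {
    enable = true;
    enableAuth = true;
    initialRootPassword = "change-me";          # placeholder credential
    initialScript = pkgs.writeText "mongo-init.js" ''
      // hypothetical application user, created on first startup
      db = db.getSiblingDB("myapp");
      db.createUser({ user: "myapp", pwd: "also-change-me",
                      roles: [ { role: "readWrite", db: "myapp" } ] });
    '';
  };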

View File

@ -133,7 +133,7 @@ in
}; };
initialScript = mkOption { initialScript = mkOption {
type = types.nullOr types.lines; type = types.nullOr types.path;
default = null; default = null;
description = "A file containing SQL statements to be executed on the first startup. Can be used for granting certain permissions on the database"; description = "A file containing SQL statements to be executed on the first startup. Can be used for granting certain permissions on the database";
}; };
@ -360,9 +360,11 @@ in
echo "Creating initial database: ${database.name}" echo "Creating initial database: ${database.name}"
( echo 'create database `${database.name}`;' ( echo 'create database `${database.name}`;'
${optionalString (database ? "schema") '' ${optionalString (database.schema != null) ''
echo 'use `${database.name}`;' echo 'use `${database.name}`;'
# TODO: this silently falls through if database.schema does not exist,
# we should catch this somehow and exit, but can't do it here because we're in a subshell.
if [ -f "${database.schema}" ] if [ -f "${database.schema}" ]
then then
cat ${database.schema} cat ${database.schema}
@ -399,7 +401,9 @@ in
${optionalString (cfg.initialScript != null) ${optionalString (cfg.initialScript != null)
'' ''
# Execute initial script # Execute initial script
cat ${cfg.initialScript} | ${mysql}/bin/mysql -u root -N # using toString to avoid copying the file to nix store if given as path instead of string,
# as it might contain credentials
cat ${toString cfg.initialScript} | ${mysql}/bin/mysql -u root -N
''} ''}
${optionalString (cfg.rootPassword != null) ${optionalString (cfg.rootPassword != null)
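Because initialScript is now a path rather than literal SQL, a configuration can point it at a file that stays out of the Nix store; the path below is hypothetical:

  services.mysql = {
    enable = true;
    package = pkgs.mariadb;
    # Given as a string path so the (possibly credential-bearing) file is not copied to the store.
    initialScript = "/var/secrets/mysql-init.sql";
  };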

View File

@ -16,7 +16,7 @@ let
super_only = ${builtins.toJSON cfg.superOnly} super_only = ${builtins.toJSON cfg.superOnly}
${optionalString (!isNull cfg.loginGroup) "login_group = ${cfg.loginGroup}"} ${optionalString (cfg.loginGroup != null) "login_group = ${cfg.loginGroup}"}
login_timeout = ${toString cfg.loginTimeout} login_timeout = ${toString cfg.loginTimeout}
@ -24,7 +24,7 @@ let
sql_root = ${cfg.sqlRoot} sql_root = ${cfg.sqlRoot}
${optionalString (!isNull cfg.tls) '' ${optionalString (cfg.tls != null) ''
tls_cert = ${cfg.tls.cert} tls_cert = ${cfg.tls.cert}
tls_key = ${cfg.tls.key} tls_key = ${cfg.tls.key}
''} ''}

View File

@ -105,6 +105,80 @@ in
''; '';
}; };
ensureDatabases = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Ensures that the specified databases exist.
This option will never delete existing databases, especially not when the value of this
option is changed. This means that databases created once through this option or
otherwise have to be removed manually.
'';
example = [
"gitea"
"nextcloud"
];
};
ensureUsers = mkOption {
type = types.listOf (types.submodule {
options = {
name = mkOption {
type = types.str;
description = ''
Name of the user to ensure.
'';
};
ensurePermissions = mkOption {
type = types.attrsOf types.str;
default = {};
description = ''
Permissions to ensure for the user, specified as an attribute set.
The attribute names specify the database and tables to grant the permissions for.
The attribute values specify the permissions to grant. You may specify one or
multiple comma-separated SQL privileges here.
For more information on how to specify the target
and on which privileges exist, see the
<link xlink:href="https://www.postgresql.org/docs/current/sql-grant.html">GRANT syntax</link>.
The attributes are used as <code>GRANT ''${attrName} ON ''${attrValue}</code>.
'';
example = literalExample ''
{
"DATABASE nextcloud" = "ALL PRIVILEGES";
"ALL TABLES IN SCHEMA public" = "ALL PRIVILEGES";
}
'';
};
};
});
default = [];
description = ''
Ensures that the specified users exist and have at least the ensured permissions.
The PostgreSQL users will be identified using peer authentication. This authenticates the Unix user with the
same name only, without the need for a password.
This option will never delete existing users or remove permissions, especially not when the value of this
option is changed. This means that users created and permissions assigned once through this option or
otherwise have to be removed manually.
'';
example = literalExample ''
[
{
name = "nextcloud";
ensurePermissions = {
"DATABASE nextcloud" = "ALL PRIVILEGES";
};
}
{
name = "superuser";
ensurePermissions = {
"ALL TABLES IN SCHEMA public" = "ALL PRIVILEGES";
};
}
]
'';
};
enableTCPIP = mkOption { enableTCPIP = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
@ -256,17 +330,30 @@ in
# Wait for PostgreSQL to be ready to accept connections. # Wait for PostgreSQL to be ready to accept connections.
postStart = postStart =
'' ''
while ! ${pkgs.sudo}/bin/sudo -u ${cfg.superUser} psql --port=${toString cfg.port} -d postgres -c "" 2> /dev/null; do PSQL="${pkgs.sudo}/bin/sudo -u ${cfg.superUser} psql --port=${toString cfg.port}"
while ! $PSQL -d postgres -c "" 2> /dev/null; do
if ! kill -0 "$MAINPID"; then exit 1; fi if ! kill -0 "$MAINPID"; then exit 1; fi
sleep 0.1 sleep 0.1
done done
if test -e "${cfg.dataDir}/.first_startup"; then if test -e "${cfg.dataDir}/.first_startup"; then
${optionalString (cfg.initialScript != null) '' ${optionalString (cfg.initialScript != null) ''
${pkgs.sudo}/bin/sudo -u ${cfg.superUser} psql -f "${cfg.initialScript}" --port=${toString cfg.port} -d postgres $PSQL -f "${cfg.initialScript}" -d postgres
''} ''}
rm -f "${cfg.dataDir}/.first_startup" rm -f "${cfg.dataDir}/.first_startup"
fi fi
'' + optionalString (cfg.ensureDatabases != []) ''
${concatMapStrings (database: ''
$PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${database}'" | grep -q 1 || $PSQL -tAc "CREATE DATABASE ${database}"
'') cfg.ensureDatabases}
'' + ''
${concatMapStrings (user: ''
$PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='${user.name}'" | grep -q 1 || $PSQL -tAc "CREATE USER ${user.name}"
${concatStringsSep "\n" (mapAttrsToList (database: permission: ''
$PSQL -tAc "GRANT ${permission} ON ${database} TO ${user.name}"
'') user.ensurePermissions)}
'') cfg.ensureUsers}
''; '';
unitConfig.RequiresMountsFor = "${cfg.dataDir}"; unitConfig.RequiresMountsFor = "${cfg.dataDir}";
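Taken together, the new options might be used like this (the database and role names are placeholders in the spirit of the option examples):

  services.postgresql = {
    enable = true;
    ensureDatabases = [ "gitea" ];
    ensureUsers = [
      { name = "gitea";
        ensurePermissions = { "DATABASE gitea" = "ALL PRIVILEGES"; };
      }
    ];
  };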

View File

@ -7,6 +7,56 @@ with lib;
let let
# the demo agent isn't built by default, but we need it here # the demo agent isn't built by default, but we need it here
package = pkgs.geoclue2.override { withDemoAgent = config.services.geoclue2.enableDemoAgent; }; package = pkgs.geoclue2.override { withDemoAgent = config.services.geoclue2.enableDemoAgent; };
cfg = config.services.geoclue2;
defaultWhitelist = [ "gnome-shell" "io.elementary.desktop.agent-geoclue2" ];
appConfigModule = types.submodule ({ name, ... }: {
options = {
desktopID = mkOption {
type = types.str;
description = "Desktop ID of the application.";
};
isAllowed = mkOption {
type = types.bool;
default = null;
description = ''
Whether the application will be allowed access to location information.
'';
};
isSystem = mkOption {
type = types.bool;
default = null;
description = ''
Whether the application is a system component or not.
'';
};
users = mkOption {
type = types.listOf types.str;
default = [];
description = ''
List of UIDs of all users for which this application is allowed location
info access. Defaults to an empty list, which allows it for all users.
'';
};
};
config.desktopID = mkDefault name;
});
appConfigToINICompatible = _: { desktopID, isAllowed, isSystem, users, ... }: {
name = desktopID;
value = {
allowed = isAllowed;
system = isSystem;
users = concatStringsSep ";" users;
};
};
in in
{ {
@ -35,23 +85,117 @@ in
''; '';
}; };
enableNmea = mkOption {
type = types.bool;
default = true;
description = ''
Whether to fetch location from NMEA sources on the local network.
'';
};
enable3G = mkOption {
type = types.bool;
default = true;
description = ''
Whether to enable 3G source.
'';
};
enableCDMA = mkOption {
type = types.bool;
default = true;
description = ''
Whether to enable CDMA source.
'';
};
enableModemGPS = mkOption {
type = types.bool;
default = true;
description = ''
Whether to enable Modem-GPS source.
'';
};
enableWifi = mkOption {
type = types.bool;
default = true;
description = ''
Whether to enable WiFi source.
'';
};
geoProviderUrl = mkOption {
type = types.str;
default = "https://location.services.mozilla.com/v1/geolocate?key=geoclue";
example = "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_KEY";
description = ''
The URL of the Wi-Fi geolocation service.
'';
};
submitData = mkOption {
type = types.bool;
default = false;
description = ''
Whether to submit data to a GeoLocation Service.
'';
};
submissionUrl = mkOption {
type = types.str;
default = "https://location.services.mozilla.com/v1/submit?key=geoclue";
description = ''
The URL to which data is submitted to the geolocation service.
'';
};
submissionNick = mkOption {
type = types.str;
default = "geoclue";
description = ''
A nickname to submit network data with.
Must be 2-32 characters long.
'';
};
appConfig = mkOption {
type = types.loaOf appConfigModule;
default = {};
example = literalExample ''
"com.github.app" = {
isAllowed = true;
isSystem = true;
users = [ "300" ];
};
'';
description = ''
Specify extra settings per application.
'';
};
}; };
}; };
###### implementation ###### implementation
config = mkIf config.services.geoclue2.enable { config = mkIf cfg.enable {
environment.systemPackages = [ package ]; environment.systemPackages = [ package ];
services.dbus.packages = [ package ]; services.dbus.packages = [ package ];
systemd.packages = [ package ]; systemd.packages = [ package ];
# restart geoclue service when the configuration changes
systemd.services."geoclue".restartTriggers = [
config.environment.etc."geoclue/geoclue.conf".source
];
# this needs to run as a user service, since it's associated with the # this needs to run as a user service, since it's associated with the
# user who is making the requests # user who is making the requests
systemd.user.services = mkIf config.services.geoclue2.enableDemoAgent { systemd.user.services = mkIf cfg.enableDemoAgent {
"geoclue-agent" = { "geoclue-agent" = {
description = "Geoclue agent"; description = "Geoclue agent";
script = "${package}/libexec/geoclue-2.0/demos/agent"; script = "${package}/libexec/geoclue-2.0/demos/agent";
@ -62,7 +206,41 @@ in
}; };
}; };
environment.etc."geoclue/geoclue.conf".source = "${package}/etc/geoclue/geoclue.conf"; services.geoclue2.appConfig."epiphany" = {
}; isAllowed = true;
isSystem = false;
};
services.geoclue2.appConfig."firefox" = {
isAllowed = true;
isSystem = false;
};
environment.etc."geoclue/geoclue.conf".text =
generators.toINI {} ({
agent = {
whitelist = concatStringsSep ";"
(optional cfg.enableDemoAgent "geoclue-demo-agent" ++ defaultWhitelist);
};
network-nmea = {
enable = cfg.enableNmea;
};
"3g" = {
enable = cfg.enable3G;
};
cdma = {
enable = cfg.enableCDMA;
};
modem-gps = {
enable = cfg.enableModemGPS;
};
wifi = {
enable = cfg.enableWifi;
url = cfg.geoProviderUrl;
submit-data = boolToString cfg.submitData;
submission-url = cfg.submissionUrl;
submission-nick = cfg.submissionNick;
};
} // mapAttrs' appConfigToINICompatible cfg.appConfig);
};
} }
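A sketch of the per-application whitelist in use; the desktop ID and UID below are invented for illustration:

  services.geoclue2 = {
    enable = true;
    appConfig."com.example.maps" = {   # hypothetical desktop ID
      isAllowed = true;
      isSystem = false;
      users = [ "1000" ];              # an empty list would allow all users
    };
  };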

View File

@ -6,7 +6,7 @@ let
cfg = config.services.factorio; cfg = config.services.factorio;
factorio = pkgs.factorio-headless; factorio = pkgs.factorio-headless;
name = "Factorio"; name = "Factorio";
stateDir = cfg.stateDir; stateDir = "/var/lib/${cfg.stateDirName}";
mkSavePath = name: "${stateDir}/saves/${name}.zip"; mkSavePath = name: "${stateDir}/saves/${name}.zip";
configFile = pkgs.writeText "factorio.conf" '' configFile = pkgs.writeText "factorio.conf" ''
use-system-read-write-data-directories=true use-system-read-write-data-directories=true
@ -80,11 +80,11 @@ in
customizations. customizations.
''; '';
}; };
stateDir = mkOption { stateDirName = mkOption {
type = types.path; type = types.string;
default = "/var/lib/factorio"; default = "factorio";
description = '' description = ''
The server's data directory. Name of the directory under /var/lib holding the server's data.
The configuration and map will be stored here. The configuration and map will be stored here.
''; '';
@ -176,20 +176,6 @@ in
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
users = {
users.factorio = {
uid = config.ids.uids.factorio;
description = "Factorio server user";
group = "factorio";
home = stateDir;
createHome = true;
};
groups.factorio = {
gid = config.ids.gids.factorio;
};
};
systemd.services.factorio = { systemd.services.factorio = {
description = "Factorio headless server"; description = "Factorio headless server";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
@ -205,12 +191,10 @@ in
]; ];
serviceConfig = { serviceConfig = {
User = "factorio";
Group = "factorio";
Restart = "always"; Restart = "always";
KillSignal = "SIGINT"; KillSignal = "SIGINT";
WorkingDirectory = stateDir; DynamicUser = true;
PrivateTmp = true; StateDirectory = cfg.stateDirName;
UMask = "0007"; UMask = "0007";
ExecStart = toString [ ExecStart = toString [
"${factorio}/bin/factorio" "${factorio}/bin/factorio"
@ -220,6 +204,20 @@ in
"--server-settings=${serverSettingsFile}" "--server-settings=${serverSettingsFile}"
(optionalString (cfg.mods != []) "--mod-directory=${modDir}") (optionalString (cfg.mods != []) "--mod-directory=${modDir}")
]; ];
# Sandboxing
NoNewPrivileges = true;
PrivateTmp = true;
PrivateDevices = true;
ProtectSystem = "strict";
ProtectHome = true;
ProtectControlGroups = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" "AF_NETLINK" ];
RestrictRealtime = true;
RestrictNamespaces = true;
MemoryDenyWriteExecute = true;
}; };
}; };
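With state now managed through DynamicUser and StateDirectory, a configuration that previously set stateDir would instead pick a directory name under /var/lib, e.g. (name arbitrary):

  services.factorio = {
    enable = true;
    # Data ends up in /var/lib/factorio-freeplay, owned by the dynamic service user.
    stateDirName = "factorio-freeplay";
  };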

View File

@ -215,8 +215,8 @@ in {
networking.firewall = mkIf cfg.openFirewall (if cfg.declarative then { networking.firewall = mkIf cfg.openFirewall (if cfg.declarative then {
allowedUDPPorts = [ serverPort ]; allowedUDPPorts = [ serverPort ];
allowedTCPPorts = [ serverPort ] allowedTCPPorts = [ serverPort ]
++ optional (! isNull queryPort) queryPort ++ optional (queryPort != null) queryPort
++ optional (! isNull rconPort) rconPort; ++ optional (rconPort != null) rconPort;
} else { } else {
allowedUDPPorts = [ defaultServerPort ]; allowedUDPPorts = [ defaultServerPort ];
allowedTCPPorts = [ defaultServerPort ]; allowedTCPPorts = [ defaultServerPort ];

View File

@ -227,7 +227,7 @@ in
''; '';
services.cron.systemCronJobs = services.cron.systemCronJobs =
let withTime = name: {timeArgs, ...}: ! (builtins.isNull timeArgs); let withTime = name: {timeArgs, ...}: timeArgs != null;
mkCron = name: {user, cmdline, timeArgs, ...}: '' mkCron = name: {user, cmdline, timeArgs, ...}: ''
${timeArgs} ${user} ${cmdline} ${timeArgs} ${user} ${cmdline}
''; '';

View File

@ -16,13 +16,13 @@ let
sendmail_path = /run/wrappers/bin/sendmail sendmail_path = /run/wrappers/bin/sendmail
'' ''
(if isNull cfg.sslServerCert then '' (if cfg.sslServerCert == null then ''
ssl = no ssl = no
disable_plaintext_auth = no disable_plaintext_auth = no
'' else '' '' else ''
ssl_cert = <${cfg.sslServerCert} ssl_cert = <${cfg.sslServerCert}
ssl_key = <${cfg.sslServerKey} ssl_key = <${cfg.sslServerKey}
${optionalString (!(isNull cfg.sslCACert)) ("ssl_ca = <" + cfg.sslCACert)} ${optionalString (cfg.sslCACert != null) ("ssl_ca = <" + cfg.sslCACert)}
ssl_dh = <${config.security.dhparams.params.dovecot2.path} ssl_dh = <${config.security.dhparams.params.dovecot2.path}
disable_plaintext_auth = yes disable_plaintext_auth = yes
'') '')
@ -298,7 +298,7 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.pam.services.dovecot2 = mkIf cfg.enablePAM {}; security.pam.services.dovecot2 = mkIf cfg.enablePAM {};
security.dhparams = mkIf (! isNull cfg.sslServerCert) { security.dhparams = mkIf (cfg.sslServerCert != null) {
enable = true; enable = true;
params.dovecot2 = {}; params.dovecot2 = {};
}; };
@ -384,14 +384,14 @@ in
{ assertion = intersectLists cfg.protocols [ "pop3" "imap" ] != []; { assertion = intersectLists cfg.protocols [ "pop3" "imap" ] != [];
message = "dovecot needs at least one of the IMAP or POP3 listeners enabled"; message = "dovecot needs at least one of the IMAP or POP3 listeners enabled";
} }
{ assertion = isNull cfg.sslServerCert == isNull cfg.sslServerKey { assertion = (cfg.sslServerCert == null) == (cfg.sslServerKey == null)
&& (!(isNull cfg.sslCACert) -> !(isNull cfg.sslServerCert || isNull cfg.sslServerKey)); && (cfg.sslCACert != null -> !(cfg.sslServerCert == null || cfg.sslServerKey == null));
message = "dovecot needs both sslServerCert and sslServerKey defined for working crypto"; message = "dovecot needs both sslServerCert and sslServerKey defined for working crypto";
} }
{ assertion = cfg.showPAMFailure -> cfg.enablePAM; { assertion = cfg.showPAMFailure -> cfg.enablePAM;
message = "dovecot is configured with showPAMFailure while enablePAM is disabled"; message = "dovecot is configured with showPAMFailure while enablePAM is disabled";
} }
{ assertion = (cfg.sieveScripts != {}) -> ((cfg.mailUser != null) && (cfg.mailGroup != null)); { assertion = cfg.sieveScripts != {} -> (cfg.mailUser != null && cfg.mailGroup != null);
message = "dovecot requires mailUser and mailGroup to be set when sieveScripts is set"; message = "dovecot requires mailUser and mailGroup to be set when sieveScripts is set";
} }
]; ];

View File

@ -143,7 +143,7 @@ in
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";
PrivateTmp = true; PrivateTmp = true;
ExecStartPre = assert !isNull server.secretKeyFile; pkgs.writeScript "bepasty-server.${name}-init" '' ExecStartPre = assert server.secretKeyFile != null; pkgs.writeScript "bepasty-server.${name}-init" ''
#!/bin/sh #!/bin/sh
mkdir -p "${server.workDir}" mkdir -p "${server.workDir}"
mkdir -p "${server.dataDir}" mkdir -p "${server.dataDir}"

View File

@ -81,7 +81,7 @@ in {
systemd.services = mapAttrs' (name: instanceCfg: nameValuePair "errbot-${name}" ( systemd.services = mapAttrs' (name: instanceCfg: nameValuePair "errbot-${name}" (
let let
dataDir = if !isNull instanceCfg.dataDir then instanceCfg.dataDir else dataDir = if instanceCfg.dataDir != null then instanceCfg.dataDir else
"/var/lib/errbot/${name}"; "/var/lib/errbot/${name}";
in { in {
after = [ "network-online.target" ]; after = [ "network-online.target" ];

View File

@ -48,7 +48,7 @@ let
type = types.nullOr types.int; type = types.nullOr types.int;
default = null; default = null;
example = 365; example = 365;
apply = val: if isNull val then -1 else val; apply = val: if val == null then -1 else val;
description = mkAutoDesc '' description = mkAutoDesc ''
The expiration time of ${desc} in days or <literal>null</literal> for no The expiration time of ${desc} in days or <literal>null</literal> for no
expiration time. expiration time.
@ -82,7 +82,7 @@ let
then attrByPath newPath (notFound newPath) cfg.pki.manual then attrByPath newPath (notFound newPath) cfg.pki.manual
else findPkiDefinitions newPath val; else findPkiDefinitions newPath val;
in flatten (mapAttrsToList mkSublist attrs); in flatten (mapAttrsToList mkSublist attrs);
in all isNull (findPkiDefinitions [] manualPkiOptions); in all (x: x == null) (findPkiDefinitions [] manualPkiOptions);
orgOptions = { ... }: { orgOptions = { ... }: {
options.users = mkOption { options.users = mkOption {

View File

@ -17,7 +17,7 @@ let
defaultDir = "/var/lib/${user}"; defaultDir = "/var/lib/${user}";
home = if useCustomDir then cfg.storageDir else defaultDir; home = if useCustomDir then cfg.storageDir else defaultDir;
useCustomDir = !(builtins.isNull cfg.storageDir); useCustomDir = cfg.storageDir != null;
socket = "/run/phpfpm/${dirName}.sock"; socket = "/run/phpfpm/${dirName}.sock";

View File

@ -19,13 +19,13 @@ let
graphiteLocalSettings = pkgs.writeText "graphite_local_settings.py" ( graphiteLocalSettings = pkgs.writeText "graphite_local_settings.py" (
"STATIC_ROOT = '${staticDir}'\n" + "STATIC_ROOT = '${staticDir}'\n" +
optionalString (! isNull config.time.timeZone) "TIME_ZONE = '${config.time.timeZone}'\n" optionalString (config.time.timeZone != null) "TIME_ZONE = '${config.time.timeZone}'\n"
+ cfg.web.extraConfig + cfg.web.extraConfig
); );
graphiteApiConfig = pkgs.writeText "graphite-api.yaml" '' graphiteApiConfig = pkgs.writeText "graphite-api.yaml" ''
search_index: ${dataDir}/index search_index: ${dataDir}/index
${optionalString (!isNull config.time.timeZone) ''time_zone: ${config.time.timeZone}''} ${optionalString (config.time.timeZone != null) ''time_zone: ${config.time.timeZone}''}
${optionalString (cfg.api.finders != []) ''finders:''} ${optionalString (cfg.api.finders != []) ''finders:''}
${concatMapStringsSep "\n" (f: " - " + f.moduleName) cfg.api.finders} ${concatMapStringsSep "\n" (f: " - " + f.moduleName) cfg.api.finders}
${optionalString (cfg.api.functions != []) ''functions:''} ${optionalString (cfg.api.functions != []) ''functions:''}

View File

@ -0,0 +1,195 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.bitcoind;
pidFile = "${cfg.dataDir}/bitcoind.pid";
configFile = pkgs.writeText "bitcoin.conf" ''
${optionalString cfg.testnet "testnet=1"}
${optionalString (cfg.dbCache != null) "dbcache=${toString cfg.dbCache}"}
${optionalString (cfg.prune != null) "prune=${toString cfg.prune}"}
# Connection options
${optionalString (cfg.port != null) "port=${toString cfg.port}"}
# RPC server options
${optionalString (cfg.rpc.port != null) "rpcport=${toString cfg.rpc.port}"}
${concatMapStringsSep "\n"
(rpcUser: "rpcauth=${rpcUser.name}:${rpcUser.passwordHMAC}")
(attrValues cfg.rpc.users)
}
# Extra config options (from bitcoind nixos service)
${cfg.extraConfig}
'';
cmdlineOptions = escapeShellArgs [
"-conf=${cfg.configFile}"
"-datadir=${cfg.dataDir}"
"-pid=${pidFile}"
];
hexStr = types.strMatching "[0-9a-f]+";
rpcUserOpts = { name, ... }: {
options = {
name = mkOption {
type = types.str;
example = "alice";
description = ''
Username for JSON-RPC connections.
'';
};
passwordHMAC = mkOption {
type = with types; uniq (strMatching "[0-9a-f]+\\$[0-9a-f]{64}");
example = "f7efda5c189b999524f151318c0c86$d5b51b3beffbc02b724e5d095828e0bc8b2456e9ac8757ae3211a5d9b16a22ae";
description = ''
Password HMAC-SHA-256 for JSON-RPC connections. Must be a string of the
format &lt;SALT-HEX&gt;$&lt;HMAC-HEX&gt;.
'';
};
};
config = {
name = mkDefault name;
};
};
in {
options = {
services.bitcoind = {
enable = mkEnableOption "Bitcoin daemon";
package = mkOption {
type = types.package;
default = pkgs.altcoins.bitcoind;
defaultText = "pkgs.altcoins.bitcoind";
description = "The package providing bitcoin binaries.";
};
configFile = mkOption {
type = types.path;
default = configFile;
example = "/etc/bitcoind.conf";
description = "The configuration file path to supply bitcoind.";
};
extraConfig = mkOption {
type = types.lines;
default = "";
example = ''
par=16
rpcthreads=16
logips=1
'';
description = "Additional configurations to be appended to <filename>bitcoin.conf</filename>.";
};
dataDir = mkOption {
type = types.path;
default = "/var/lib/bitcoind";
description = "The data directory for bitcoind.";
};
user = mkOption {
type = types.str;
default = "bitcoin";
description = "The user as which to run bitcoind.";
};
group = mkOption {
type = types.str;
default = cfg.user;
description = "The group as which to run bitcoind.";
};
rpc = {
port = mkOption {
type = types.nullOr types.port;
default = null;
description = "Override the default port on which to listen for JSON-RPC connections.";
};
users = mkOption {
default = {};
example = literalExample ''
{
alice.passwordHMAC = "f7efda5c189b999524f151318c0c86$d5b51b3beffbc02b724e5d095828e0bc8b2456e9ac8757ae3211a5d9b16a22ae";
bob.passwordHMAC = "b2dd077cb54591a2f3139e69a897ac$4e71f08d48b4347cf8eff3815c0e25ae2e9a4340474079f55705f40574f4ec99";
}
'';
type = with types; loaOf (submodule rpcUserOpts);
description = ''
RPC user information for JSON-RPC connections.
'';
};
};
testnet = mkOption {
type = types.bool;
default = false;
description = "Whether to use the test chain.";
};
port = mkOption {
type = types.nullOr types.port;
default = null;
description = "Override the default port on which to listen for connections.";
};
dbCache = mkOption {
type = types.nullOr (types.ints.between 4 16384);
default = null;
example = 4000;
description = "Override the default database cache size in megabytes.";
};
prune = mkOption {
type = types.nullOr (types.coercedTo
(types.enum [ "disable" "manual" ])
(x: if x == "disable" then 0 else 1)
types.ints.unsigned
);
default = null;
example = 10000;
description = ''
Reduce storage requirements by enabling pruning (deleting) of old
blocks. This allows the pruneblockchain RPC to be called to delete
specific blocks, and enables automatic pruning of old blocks if a
target size in MiB is provided. This mode is incompatible with -txindex
and -rescan. Warning: Reverting this setting requires re-downloading
the entire blockchain. ("disable" = disable pruning blocks, "manual"
= allow manual pruning via RPC, >=550 = automatically prune block files
to stay under the specified target size in MiB)
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0770 '${cfg.user}' '${cfg.group}' - -"
"L '${cfg.dataDir}/bitcoin.conf' - - - - '${cfg.configFile}'"
];
systemd.services.bitcoind = {
description = "Bitcoin daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = cfg.user;
Group = cfg.group;
ExecStart = "${cfg.package}/bin/bitcoind ${cmdlineOptions}";
Restart = "on-failure";
# Hardening measures
PrivateTmp = "true";
ProtectSystem = "full";
NoNewPrivileges = "true";
PrivateDevices = "true";
MemoryDenyWriteExecute = "true";
# Permission for preStart
PermissionsStartOnly = "true";
};
};
users.users.${cfg.user} = {
name = cfg.user;
group = cfg.group;
description = "Bitcoin daemon user";
home = cfg.dataDir;
};
users.groups.${cfg.group} = {
name = cfg.group;
};
};
}
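A minimal sketch of the new module in use; the HMAC is the documented example value, not a usable credential:

  services.bitcoind = {
    enable = true;
    prune = 10000;   # automatic pruning, target size in MiB
    rpc.users.alice.passwordHMAC =
      "f7efda5c189b999524f151318c0c86$d5b51b3beffbc02b724e5d095828e0bc8b2456e9ac8757ae3211a5d9b16a22ae";
  };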

View File

@ -92,7 +92,7 @@ in {
Needed when running with Kubernetes as backend as this cannot be auto-detected"; Needed when running with Kubernetes as backend as this cannot be auto-detected";
''; '';
type = types.nullOr types.str; type = types.nullOr types.str;
default = with config.networking; (hostName + optionalString (!isNull domain) ".${domain}"); default = with config.networking; (hostName + optionalString (domain != null) ".${domain}");
example = "node1.example.com"; example = "node1.example.com";
}; };

View File

@ -12,9 +12,9 @@ let
boolOpt = k: v: k + " = " + boolToString v; boolOpt = k: v: k + " = " + boolToString v;
intOpt = k: v: k + " = " + toString v; intOpt = k: v: k + " = " + toString v;
lstOpt = k: xs: k + " = " + concatStringsSep "," xs; lstOpt = k: xs: k + " = " + concatStringsSep "," xs;
optionalNullString = o: s: optional (! isNull s) (strOpt o s); optionalNullString = o: s: optional (s != null) (strOpt o s);
optionalNullBool = o: b: optional (! isNull b) (boolOpt o b); optionalNullBool = o: b: optional (b != null) (boolOpt o b);
optionalNullInt = o: i: optional (! isNull i) (intOpt o i); optionalNullInt = o: i: optional (i != null) (intOpt o i);
optionalEmptyList = o: l: optional ([] != l) (lstOpt o l); optionalEmptyList = o: l: optional ([] != l) (lstOpt o l);
mkEnableTrueOption = name: mkEnableOption name // { default = true; }; mkEnableTrueOption = name: mkEnableOption name // { default = true; };
@ -225,7 +225,7 @@ let
i2pdSh = pkgs.writeScriptBin "i2pd" '' i2pdSh = pkgs.writeScriptBin "i2pd" ''
#!/bin/sh #!/bin/sh
exec ${pkgs.i2pd}/bin/i2pd \ exec ${pkgs.i2pd}/bin/i2pd \
${if isNull cfg.address then "" else "--host="+cfg.address} \ ${if cfg.address == null then "" else "--host="+cfg.address} \
--service \ --service \
--conf=${i2pdConf} \ --conf=${i2pdConf} \
--tunconf=${tunnelConf} --tunconf=${tunnelConf}

View File

@ -103,20 +103,12 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
# mxisd / spring.boot needs the configuration to be named "application.yaml"
preStart = ''
config=${cfg.dataDir}/application.yaml
cp ${configFile} $config
chmod 444 $config
'';
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";
User = "mxisd"; User = "mxisd";
Group = "mxisd"; Group = "mxisd";
ExecStart = "${cfg.package}/bin/mxisd --spring.config.location=${cfg.dataDir}/ --spring.profiles.active=systemd --java.security.egd=file:/dev/./urandom"; ExecStart = "${cfg.package}/bin/mxisd -c ${configFile}";
WorkingDirectory = cfg.dataDir; WorkingDirectory = cfg.dataDir;
SuccessExitStatus = 143;
Restart = "on-failure"; Restart = "on-failure";
}; };
}; };

View File

@ -422,6 +422,13 @@ in
description = "List of administrators of the current host"; description = "List of administrators of the current host";
}; };
authentication = mkOption {
type = types.enum [ "internal_plain" "internal_hashed" "cyrus" "anonymous" ];
default = "internal_hashed";
example = "internal_plain";
description = "Authentication mechanism used for logins.";
};
extraConfig = mkOption { extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
@ -477,6 +484,7 @@ in
s2s_secure_domains = ${toLua cfg.s2sSecureDomains} s2s_secure_domains = ${toLua cfg.s2sSecureDomains}
authentication = ${toLua cfg.authentication}
${ cfg.extraConfig } ${ cfg.extraConfig }
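As a minimal sketch, the new setting can be pinned explicitly in a configuration (the value shown is already the default and is rendered straight into the generated Prosody config):

  services.prosody = {
    enable = true;
    authentication = "internal_hashed";
  };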

View File

@ -4,6 +4,15 @@ with lib;
let let
sshconf = pkgs.runCommand "sshd.conf-validated" { nativeBuildInputs = [ cfgc.package ]; } ''
cat >$out <<EOL
${cfg.extraConfig}
EOL
ssh-keygen -f mock-hostkey -N ""
sshd -t -f $out -h mock-hostkey
'';
cfg = config.services.openssh; cfg = config.services.openssh;
cfgc = config.programs.ssh; cfgc = config.programs.ssh;
@ -339,7 +348,7 @@ in
environment.etc = authKeysFiles // environment.etc = authKeysFiles //
{ "ssh/moduli".source = cfg.moduliFile; { "ssh/moduli".source = cfg.moduliFile;
"ssh/sshd_config".text = cfg.extraConfig; "ssh/sshd_config".source = sshconf;
}; };
systemd = systemd =
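Because extraConfig is now validated with sshd -t at build time, a configuration error like the deliberately misspelled keyword below would fail nixos-rebuild instead of breaking sshd at runtime (sketch):

  services.openssh = {
    enable = true;
    extraConfig = ''
      # "AllowUserz" is a typo on purpose; the new validation rejects it at build time.
      AllowUserz alice
    '';
  };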

View File

@ -56,7 +56,7 @@ rec {
}; };
documentDefault = description : strongswanDefault : documentDefault = description : strongswanDefault :
if isNull strongswanDefault if strongswanDefault == null
then description then description
else description + '' else description + ''
</para><para> </para><para>

View File

@ -45,10 +45,10 @@ rec {
filterEmptySets ( filterEmptySets (
(mapParamsRecursive (path: name: param: (mapParamsRecursive (path: name: param:
let value = attrByPath path null cfg; let value = attrByPath path null cfg;
in optionalAttrs (!isNull value) (param.render name value) in optionalAttrs (value != null) (param.render name value)
) ps)); ) ps));
filterEmptySets = set : filterAttrs (n: v: !(isNull v)) (mapAttrs (name: value: filterEmptySets = set : filterAttrs (n: v: (v != null)) (mapAttrs (name: value:
if isAttrs value if isAttrs value
then let value' = filterEmptySets value; then let value' = filterEmptySets value;
in if value' == {} in if value' == {}

View File

@ -5,6 +5,60 @@ with lib;
let let
cfg = config.services.syncthing; cfg = config.services.syncthing;
defaultUser = "syncthing"; defaultUser = "syncthing";
devices = mapAttrsToList (name: device: {
deviceID = device.id;
inherit (device) name addresses introducer;
}) cfg.declarative.devices;
folders = mapAttrsToList ( _: folder: {
inherit (folder) path id label type;
devices = map (device: { deviceId = cfg.declarative.devices.${device}.id; }) folder.devices;
rescanIntervalS = folder.rescanInterval;
fsWatcherEnabled = folder.watch;
fsWatcherDelayS = folder.watchDelay;
ignorePerms = folder.ignorePerms;
}) (filterAttrs (
_: folder:
folder.enable
) cfg.declarative.folders);
# get the api key by parsing the config.xml
getApiKey = pkgs.writers.writeDash "getAPIKey" ''
${pkgs.libxml2}/bin/xmllint \
--xpath 'string(configuration/gui/apikey)'\
${cfg.configDir}/config.xml
'';
updateConfig = pkgs.writers.writeDash "merge-syncthing-config" ''
set -efu
# wait for syncthing port to open
until ${pkgs.curl}/bin/curl -Ss ${cfg.guiAddress} -o /dev/null; do
sleep 1
done
API_KEY=$(${getApiKey})
OLD_CFG=$(${pkgs.curl}/bin/curl -Ss \
-H "X-API-Key: $API_KEY" \
${cfg.guiAddress}/rest/system/config)
# generate the new config by merging with the nixos config options
NEW_CFG=$(echo "$OLD_CFG" | ${pkgs.jq}/bin/jq -s '.[] as $in | $in * {
"devices": (${builtins.toJSON devices}${optionalString (! cfg.declarative.overrideDevices) " + $in.devices"}),
"folders": (${builtins.toJSON folders}${optionalString (! cfg.declarative.overrideFolders) " + $in.folders"})
}')
# POST the new config to syncthing
echo "$NEW_CFG" | ${pkgs.curl}/bin/curl -Ss \
-H "X-API-Key: $API_KEY" \
${cfg.guiAddress}/rest/system/config -d @-
# restart syncthing after sending the new config
${pkgs.curl}/bin/curl -Ss \
-H "X-API-Key: $API_KEY" \
-X POST \
${cfg.guiAddress}/rest/system/restart
'';
in { in {
###### interface ###### interface
options = { options = {
@ -16,6 +70,197 @@ in {
available on http://127.0.0.1:8384/. available on http://127.0.0.1:8384/.
''; '';
declarative = {
cert = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Path to the user's cert.pem file; it will be copied into syncthing's
<literal>configDir</literal>
'';
};
key = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Path to the user's key.pem file; it will be copied into syncthing's
<literal>configDir</literal>
'';
};
overrideDevices = mkOption {
type = types.bool;
default = true;
description = ''
Whether to delete the devices which are not configured via the
<literal>declarative.devices</literal> option.
If set to false, devices added via the web interface will
persist but will have to be deleted manually.
'';
};
devices = mkOption {
default = {};
description = ''
Peers/devices which syncthing should communicate with.
'';
example = [
{
name = "bigbox";
id = "7CFNTQM-IMTJBHJ-3UWRDIU-ZGQJFR6-VCXZ3NB-XUH3KZO-N52ITXR-LAIYUAU";
addresses = [ "tcp://192.168.0.10:51820" ];
}
];
type = types.attrsOf (types.submodule ({ config, ... }: {
options = {
name = mkOption {
type = types.str;
default = config._module.args.name;
description = ''
Name of the device.
'';
};
addresses = mkOption {
type = types.listOf types.str;
default = [];
description = ''
The addresses used to connect to the device.
If this is left empty, dynamic configuration is attempted.
'';
};
id = mkOption {
type = types.str;
description = ''
The device ID of the other peer; this is mandatory. It is documented at
https://docs.syncthing.net/dev/device-ids.html
'';
};
introducer = mkOption {
type = types.bool;
default = false;
description = ''
If the device should act as an introducer and be allowed
to add folders on this computer.
'';
};
};
}));
};
overrideFolders = mkOption {
type = types.bool;
default = true;
description = ''
Whether to delete the folders which are not configured via the
<literal>declarative.folders</literal> option.
If set to false, folders added via the web interface will persist
but will have to be deleted manually.
'';
};
folders = mkOption {
default = {};
description = ''
Folders which should be shared by syncthing.
'';
type = types.attrsOf (types.submodule ({ config, ... }: {
options = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Whether to share this folder.
This option is useful when you want to define all folders
in one place, but not every machine should share all folders.
'';
};
path = mkOption {
type = types.str;
default = config._module.args.name;
description = ''
The path to the folder which should be shared.
'';
};
id = mkOption {
type = types.str;
default = config._module.args.name;
description = ''
The id of the folder. Must be the same on all devices.
'';
};
label = mkOption {
type = types.str;
default = config._module.args.name;
description = ''
The label of the folder.
'';
};
devices = mkOption {
type = types.listOf types.str;
default = [];
description = ''
The devices this folder should be shared with. Must be defined
in the <literal>declarative.devices</literal> attribute.
'';
};
rescanInterval = mkOption {
type = types.int;
default = 3600;
description = ''
How often the folder should be rescanned for changes, in seconds.
'';
};
type = mkOption {
type = types.enum [ "sendreceive" "sendonly" "receiveonly" ];
default = "sendreceive";
description = ''
Whether to send only changes from this folder, only receive them
or propagate both.
'';
};
watch = mkOption {
type = types.bool;
default = true;
description = ''
Whether the folder should be watched for changes by inotify.
'';
};
watchDelay = mkOption {
type = types.int;
default = 10;
description = ''
The delay, in seconds, after an inotify event is triggered.
'';
};
ignorePerms = mkOption {
type = types.bool;
default = true;
description = ''
Whether to ignore permission changes.
'';
};
};
}));
};
};
guiAddress = mkOption { guiAddress = mkOption {
type = types.str; type = types.str;
default = "127.0.0.1:8384"; default = "127.0.0.1:8384";
@ -151,6 +396,23 @@ in {
RestartForceExitStatus="3 4"; RestartForceExitStatus="3 4";
User = cfg.user; User = cfg.user;
Group = cfg.group; Group = cfg.group;
ExecStartPre = mkIf (cfg.declarative.cert != null || cfg.declarative.key != null)
"+${pkgs.writers.writeBash "syncthing-copy-keys" ''
mkdir -p ${cfg.configDir}
chown ${cfg.user}:${cfg.group} ${cfg.configDir}
chmod 700 ${cfg.configDir}
${optionalString (cfg.declarative.cert != null) ''
cp ${toString cfg.declarative.cert} ${cfg.configDir}/cert.pem
chown ${cfg.user}:${cfg.group} ${cfg.configDir}/cert.pem
chmod 400 ${cfg.configDir}/cert.pem
''}
${optionalString (cfg.declarative.key != null) ''
cp ${toString cfg.declarative.key} ${cfg.configDir}/key.pem
chown ${cfg.user}:${cfg.group} ${cfg.configDir}/key.pem
chmod 400 ${cfg.configDir}/key.pem
''}
''}"
;
ExecStart = '' ExecStart = ''
${cfg.package}/bin/syncthing \ ${cfg.package}/bin/syncthing \
-no-browser \ -no-browser \
@ -159,6 +421,17 @@ in {
''; '';
}; };
}; };
syncthing-init = {
after = [ "syncthing.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = cfg.user;
RemainAfterExit = true;
Type = "oneshot";
ExecStart = updateConfig;
};
};
syncthing-resume = { syncthing-resume = {
wantedBy = [ "suspend.target" ]; wantedBy = [ "suspend.target" ];

View File

@ -153,7 +153,6 @@ in
({ ({
description = "Tinc Daemon - ${network}"; description = "Tinc Daemon - ${network}";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ data.package ]; path = [ data.package ];
restartTriggers = [ config.environment.etc."tinc/${network}/tinc.conf".source ]; restartTriggers = [ config.environment.etc."tinc/${network}/tinc.conf".source ];
serviceConfig = { serviceConfig = {

View File

@ -146,7 +146,7 @@ in
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
path = [ pkgs.xinetd ]; path = [ pkgs.xinetd ];
script = "xinetd -syslog daemon -dontfork -stayalive -f ${configFile}"; script = "exec xinetd -syslog daemon -dontfork -stayalive -f ${configFile}";
}; };
}; };
} }

View File

@ -129,7 +129,7 @@ in {
This defaults to the singleton list [ca] when the <option>ca</option> option is defined. This defaults to the singleton list [ca] when the <option>ca</option> option is defined.
''; '';
default = if isNull cfg.elasticsearch.ca then [] else [ca]; default = if cfg.elasticsearch.ca == null then [] else [ca];
type = types.listOf types.path; type = types.listOf types.path;
}; };

View File

@ -25,6 +25,16 @@ in
''; '';
}; };
package = mkOption {
type = types.package;
default = pkgs.fprintd;
defaultText = "pkgs.fprintd";
example = "pkgs.fprintd-thinkpad";
description = ''
fprintd package to use.
'';
};
}; };
}; };
@ -38,7 +48,7 @@ in
environment.systemPackages = [ pkgs.fprintd ]; environment.systemPackages = [ pkgs.fprintd ];
systemd.packages = [ pkgs.fprintd ]; systemd.packages = [ cfg.package ];
}; };
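The new package option makes the ThinkPad-specific build from the option example selectable:

  services.fprintd = {
    enable = true;
    package = pkgs.fprintd-thinkpad;
  };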

View File

@ -58,11 +58,11 @@ let
httponly = cookie.httpOnly; httponly = cookie.httpOnly;
}; };
set-xauthrequest = setXauthrequest; set-xauthrequest = setXauthrequest;
} // lib.optionalAttrs (!isNull cfg.email.addresses) { } // lib.optionalAttrs (cfg.email.addresses != null) {
authenticated-emails-file = authenticatedEmailsFile; authenticated-emails-file = authenticatedEmailsFile;
} // lib.optionalAttrs (cfg.passBasicAuth) { } // lib.optionalAttrs (cfg.passBasicAuth) {
basic-auth-password = cfg.basicAuthPassword; basic-auth-password = cfg.basicAuthPassword;
} // lib.optionalAttrs (!isNull cfg.htpasswd.file) { } // lib.optionalAttrs (cfg.htpasswd.file != null) {
display-htpasswd-file = cfg.htpasswd.displayForm; display-htpasswd-file = cfg.htpasswd.displayForm;
} // lib.optionalAttrs tls.enable { } // lib.optionalAttrs tls.enable {
tls-cert = tls.certificate; tls-cert = tls.certificate;
@ -71,7 +71,7 @@ let
} // (getProviderOptions cfg cfg.provider) // cfg.extraConfig; } // (getProviderOptions cfg cfg.provider) // cfg.extraConfig;
mapConfig = key: attr: mapConfig = key: attr:
if (!isNull attr && attr != []) then ( if attr != null && attr != [] then (
if isDerivation attr then mapConfig key (toString attr) else if isDerivation attr then mapConfig key (toString attr) else
if (builtins.typeOf attr) == "set" then concatStringsSep " " if (builtins.typeOf attr) == "set" then concatStringsSep " "
(mapAttrsToList (name: value: mapConfig (key + "-" + name) value) attr) else (mapAttrsToList (name: value: mapConfig (key + "-" + name) value) attr) else
@ -538,7 +538,7 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.oauth2_proxy = mkIf (!isNull cfg.keyFile) { services.oauth2_proxy = mkIf (cfg.keyFile != null) {
clientID = mkDefault null; clientID = mkDefault null;
clientSecret = mkDefault null; clientSecret = mkDefault null;
cookie.secret = mkDefault null; cookie.secret = mkDefault null;

View File

@ -215,7 +215,7 @@ in {
# /etc/icingaweb2 # /etc/icingaweb2
environment.etc = let environment.etc = let
doModule = name: optionalAttrs (cfg.modules."${name}".enable) (nameValuePair "icingaweb2/enabledModules/${name}" { source = "${pkgs.icingaweb2}/modules/${name}"; }); doModule = name: optionalAttrs (cfg.modules."${name}".enable) { "icingaweb2/enabledModules/${name}".source = "${pkgs.icingaweb2}/modules/${name}"; };
in {} in {}
# Module packages # Module packages
// (mapAttrs' (k: v: nameValuePair "icingaweb2/enabledModules/${k}" { source = v; }) cfg.modulePackages) // (mapAttrs' (k: v: nameValuePair "icingaweb2/enabledModules/${k}" { source = v; }) cfg.modulePackages)

View File

@ -85,7 +85,7 @@ in
DynamicUser = true; DynamicUser = true;
RuntimeDirectory = "miniflux"; RuntimeDirectory = "miniflux";
RuntimeDirectoryMode = "0700"; RuntimeDirectoryMode = "0700";
EnvironmentFile = if isNull cfg.adminCredentialsFile EnvironmentFile = if cfg.adminCredentialsFile == null
then defaultCredentials then defaultCredentials
else cfg.adminCredentialsFile; else cfg.adminCredentialsFile;
}; };

View File

@ -257,6 +257,23 @@ in {
''; '';
}; };
}; };
autoUpdateApps = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Run a regular automatic update of all apps installed from the Nextcloud app store.
'';
};
startAt = mkOption {
type = with types; either str (listOf str);
default = "05:00:00";
example = "Sun 14:00:00";
description = ''
When to run the update. See `systemd.services.&lt;name&gt;.startAt`.
'';
};
};
}; };
config = mkIf cfg.enable (mkMerge [ config = mkIf cfg.enable (mkMerge [
@ -362,6 +379,11 @@ in {
serviceConfig.User = "nextcloud"; serviceConfig.User = "nextcloud";
serviceConfig.ExecStart = "${phpPackage}/bin/php -f ${pkgs.nextcloud}/cron.php"; serviceConfig.ExecStart = "${phpPackage}/bin/php -f ${pkgs.nextcloud}/cron.php";
}; };
"nextcloud-update-plugins" = mkIf cfg.autoUpdateApps.enable {
serviceConfig.Type = "oneshot";
serviceConfig.ExecStart = "${occ}/bin/nextcloud-occ app:update --all";
startAt = cfg.autoUpdateApps.startAt;
};
}; };
services.phpfpm = { services.phpfpm = {
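A sketch of the new app-update timer, using the documented example schedule:

  services.nextcloud.autoUpdateApps = {
    enable = true;
    startAt = "Sun 14:00:00";
  };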

View File

@ -111,5 +111,11 @@
<link xlink:href="https://github.com/NixOS/nixpkgs/issues/49783">#49783</link>, <link xlink:href="https://github.com/NixOS/nixpkgs/issues/49783">#49783</link>,
for now it's unfortunately necessary to manually work around these issues. for now it's unfortunately necessary to manually work around these issues.
</para> </para>
<para>
Right now, app installation and configuration are done imperatively in the Nextcloud web UI or via the <literal>nextcloud-occ</literal> command-line utility.
You can activate automatic updates for your apps via
<literal><link linkend="opt-services.nextcloud.autoUpdateApps.enable">services.nextcloud.autoUpdateApps</link></literal>.
</para>
</section> </section>
</chapter> </chapter>

View File

@ -184,7 +184,7 @@ in
phpOptions = '' phpOptions = ''
date.timezone = "CET" date.timezone = "CET"
${optionalString (!isNull cfg.email.server) '' ${optionalString (cfg.email.server != null) ''
SMTP = ${cfg.email.server} SMTP = ${cfg.email.server}
smtp_port = ${toString cfg.email.port} smtp_port = ${toString cfg.email.port}
auth_username = ${cfg.email.login} auth_username = ${cfg.email.login}
@ -282,7 +282,7 @@ in
sed -i "s@^php@${config.services.phpfpm.phpPackage}/bin/php@" "${runDir}/server/php/shell/"*.sh sed -i "s@^php@${config.services.phpfpm.phpPackage}/bin/php@" "${runDir}/server/php/shell/"*.sh
${if (isNull cfg.database.host) then '' ${if (cfg.database.host == null) then ''
sed -i "s/^.*'R_DB_HOST'.*$/define('R_DB_HOST', 'localhost');/g" "${runDir}/server/php/config.inc.php" sed -i "s/^.*'R_DB_HOST'.*$/define('R_DB_HOST', 'localhost');/g" "${runDir}/server/php/config.inc.php"
sed -i "s/^.*'R_DB_PASSWORD'.*$/define('R_DB_PASSWORD', 'restya');/g" "${runDir}/server/php/config.inc.php" sed -i "s/^.*'R_DB_PASSWORD'.*$/define('R_DB_PASSWORD', 'restya');/g" "${runDir}/server/php/config.inc.php"
'' else '' '' else ''
@ -311,7 +311,7 @@ in
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/media" chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/media"
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/client/img" chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/client/img"
${optionalString (isNull cfg.database.host) '' ${optionalString (cfg.database.host == null) ''
if ! [ -e "${cfg.dataDir}/.db-initialized" ]; then if ! [ -e "${cfg.dataDir}/.db-initialized" ]; then
${pkgs.sudo}/bin/sudo -u ${config.services.postgresql.superUser} \ ${pkgs.sudo}/bin/sudo -u ${config.services.postgresql.superUser} \
${config.services.postgresql.package}/bin/psql -U ${config.services.postgresql.superUser} \ ${config.services.postgresql.package}/bin/psql -U ${config.services.postgresql.superUser} \
@ -367,14 +367,14 @@ in
}; };
users.groups.restya-board = {}; users.groups.restya-board = {};
services.postgresql.enable = mkIf (isNull cfg.database.host) true; services.postgresql.enable = mkIf (cfg.database.host == null) true;
services.postgresql.identMap = optionalString (isNull cfg.database.host) services.postgresql.identMap = optionalString (cfg.database.host == null)
'' ''
restya-board-users restya-board restya_board restya-board-users restya-board restya_board
''; '';
services.postgresql.authentication = optionalString (isNull cfg.database.host) services.postgresql.authentication = optionalString (cfg.database.host == null)
'' ''
local restya_board all ident map=restya-board-users local restya_board all ident map=restya-board-users
''; '';

View File

@ -690,7 +690,7 @@ in
; Don't advertise PHP ; Don't advertise PHP
expose_php = off expose_php = off
'' + optionalString (!isNull config.time.timeZone) '' '' + optionalString (config.time.timeZone != null) ''
; Apparently PHP doesn't use $TZ. ; Apparently PHP doesn't use $TZ.
date.timezone = "${config.time.timeZone}" date.timezone = "${config.time.timeZone}"

View File

@ -1,129 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.winstone;
winstoneOpts = { name, ... }: {
options = {
name = mkOption {
default = name;
internal = true;
};
serviceName = mkOption {
type = types.str;
description = ''
The name of the systemd service. By default, it is
derived from the winstone instance name.
'';
};
warFile = mkOption {
type = types.str;
description = ''
The WAR file that Winstone should serve.
'';
};
javaPackage = mkOption {
type = types.package;
default = pkgs.jre;
defaultText = "pkgs.jre";
description = ''
Which Java derivation to use for running Winstone.
'';
};
user = mkOption {
type = types.str;
description = ''
The user that should run this Winstone process and
own the working directory.
'';
};
group = mkOption {
type = types.str;
description = ''
The group that will own the working directory.
'';
};
workDir = mkOption {
type = types.str;
description = ''
The working directory for this Winstone instance. Will
contain extracted webapps etc. The directory will be
created if it doesn't exist.
'';
};
extraJavaOptions = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Extra command line options given to the java process running
Winstone.
'';
};
extraOptions = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Extra command line options given to the Winstone process.
'';
};
};
config = {
workDir = mkDefault "/run/winstone/${name}";
serviceName = mkDefault "winstone-${name}";
};
};
mkService = cfg: let
opts = concatStringsSep " " (cfg.extraOptions ++ [
"--warfile ${cfg.warFile}"
]);
javaOpts = concatStringsSep " " (cfg.extraJavaOptions ++ [
"-Djava.io.tmpdir=${cfg.workDir}"
"-jar ${pkgs.winstone}/lib/winstone.jar"
]);
in {
wantedBy = [ "multi-user.target" ];
description = "winstone service for ${cfg.name}";
preStart = ''
mkdir -p "${cfg.workDir}"
chown ${cfg.user}:${cfg.group} "${cfg.workDir}"
'';
serviceConfig = {
ExecStart = "${cfg.javaPackage}/bin/java ${javaOpts} ${opts}";
User = cfg.user;
PermissionsStartOnly = true;
};
};
in {
options = {
services.winstone = mkOption {
default = {};
type = with types; attrsOf (submodule winstoneOpts);
description = ''
Defines independent Winstone services, each serving one WAR-file.
'';
};
};
config = mkIf (cfg != {}) {
systemd.services = mapAttrs' (n: c: nameValuePair c.serviceName (mkService c)) cfg;
};
}


@ -29,6 +29,7 @@ in {
environment.etc."tmpfiles.d/colord.conf".source = "${pkgs.colord}/lib/tmpfiles.d/colord.conf"; environment.etc."tmpfiles.d/colord.conf".source = "${pkgs.colord}/lib/tmpfiles.d/colord.conf";
users.users.colord = { users.users.colord = {
isSystemUser = true;
home = "/var/lib/colord"; home = "/var/lib/colord";
group = "colord"; group = "colord";
}; };


@ -120,9 +120,6 @@ in {
security.polkit.enable = true; security.polkit.enable = true;
services.udisks2.enable = true; services.udisks2.enable = true;
services.accounts-daemon.enable = true; services.accounts-daemon.enable = true;
services.geoclue2.enable = mkDefault true;
# GNOME should have its own geoclue agent
services.geoclue2.enableDemoAgent = false;
services.dleyna-renderer.enable = mkDefault true; services.dleyna-renderer.enable = mkDefault true;
services.dleyna-server.enable = mkDefault true; services.dleyna-server.enable = mkDefault true;
services.gnome3.at-spi2-core.enable = true; services.gnome3.at-spi2-core.enable = true;
@ -191,6 +188,24 @@ in {
'') cfg.sessionPath} '') cfg.sessionPath}
''; '';
services.geoclue2.enable = mkDefault true;
# GNOME should have its own geoclue agent
services.geoclue2.enableDemoAgent = false;
services.geoclue2.appConfig."gnome-datetime-panel" = {
isAllowed = true;
isSystem = true;
};
services.geoclue2.appConfig."gnome-color-panel" = {
isAllowed = true;
isSystem = true;
};
services.geoclue2.appConfig."org.gnome.Shell" = {
isAllowed = true;
isSystem = true;
};
environment.variables.GNOME_SESSION_DEBUG = optionalString cfg.debug "1"; environment.variables.GNOME_SESSION_DEBUG = optionalString cfg.debug "1";
# Override default mimeapps # Override default mimeapps


@ -118,9 +118,6 @@ in
(mkIf config.services.printing.enable ([pkgs.system-config-printer]) ) (mkIf config.services.printing.enable ([pkgs.system-config-printer]) )
]; ];
services.pantheon.contractor.enable = mkDefault true; services.pantheon.contractor.enable = mkDefault true;
services.geoclue2.enable = mkDefault true;
# pantheon has pantheon-agent-geoclue2
services.geoclue2.enableDemoAgent = false;
services.gnome3.at-spi2-core.enable = true; services.gnome3.at-spi2-core.enable = true;
services.gnome3.evince.enable = mkDefault true; services.gnome3.evince.enable = mkDefault true;
services.gnome3.evolution-data-server.enable = true; services.gnome3.evolution-data-server.enable = true;
@ -140,6 +137,14 @@ in
services.xserver.updateDbusEnvironment = true; services.xserver.updateDbusEnvironment = true;
services.zeitgeist.enable = mkDefault true; services.zeitgeist.enable = mkDefault true;
services.geoclue2.enable = mkDefault true;
# pantheon has pantheon-agent-geoclue2
services.geoclue2.enableDemoAgent = false;
services.geoclue2.appConfig."io.elementary.desktop.agent-geoclue2" = {
isAllowed = true;
isSystem = true;
};
networking.networkmanager.enable = mkDefault true; networking.networkmanager.enable = mkDefault true;
networking.networkmanager.basePackages = networking.networkmanager.basePackages =
{ inherit (pkgs) networkmanager modemmanager wpa_supplicant; { inherit (pkgs) networkmanager modemmanager wpa_supplicant;


@ -10,6 +10,14 @@ let
optionals cfg.enableContribAndExtras optionals cfg.enableContribAndExtras
[ self.xmonad-contrib self.xmonad-extras ]; [ self.xmonad-contrib self.xmonad-extras ];
}; };
xmonadBin = pkgs.writers.writeHaskell "xmonad" {
ghc = cfg.haskellPackages.ghc;
libraries = [ cfg.haskellPackages.xmonad ] ++
cfg.extraPackages cfg.haskellPackages ++
optionals cfg.enableContribAndExtras
(with cfg.haskellPackages; [ xmonad-contrib xmonad-extras ]);
} cfg.config;
in in
{ {
options = { options = {
@ -48,13 +56,36 @@ in
type = lib.types.bool; type = lib.types.bool;
description = "Enable xmonad-{contrib,extras} in Xmonad."; description = "Enable xmonad-{contrib,extras} in Xmonad.";
}; };
config = mkOption {
default = null;
type = with lib.types; nullOr (either path string);
description = ''
Configuration from which XMonad gets compiled. If no value
is specified, the xmonad config from $HOME/.xmonad is used
instead. Running xmonad --recompile also switches to the
$HOME/.xmonad config, but this option's config is reapplied
on the next restart of display-manager.
'';
example = ''
import XMonad
main = launch defaultConfig
{ modMask = mod4Mask -- Use Super instead of Alt
, terminal = "urxvt"
}
'';
};
}; };
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.xserver.windowManager = { services.xserver.windowManager = {
session = [{ session = [{
name = "xmonad"; name = "xmonad";
start = '' start = if (cfg.config != null) then ''
${xmonadBin}
waitPID=$!
'' else ''
${xmonad}/bin/xmonad & ${xmonad}/bin/xmonad &
waitPID=$! waitPID=$!
''; '';
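For reference, a minimal sketch of how this new declarative option might be used from a NixOS configuration — assuming the option path is services.xserver.windowManager.xmonad and reusing the Haskell snippet from the option's own example above:

    {
      services.xserver.windowManager.xmonad = {
        enable = true;
        enableContribAndExtras = true;
        # compiled into a standalone binary via pkgs.writers.writeHaskell,
        # as the xmonadBin definition above does
        config = ''
          import XMonad
          main = launch defaultConfig
            { modMask = mod4Mask -- Use Super instead of Alt
            , terminal = "urxvt"
            }
        '';
      };
    }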


@ -33,6 +33,15 @@ initrd {initrd}
options {kernel_params} options {kernel_params}
""" """
# The boot loader entry for memtest86.
#
# TODO: This is hard-coded to use the 64-bit EFI app, but it could probably
# be updated to use the 32-bit EFI app on 32-bit systems. The 32-bit EFI
# app filename is BOOTIA32.efi.
MEMTEST_BOOT_ENTRY = """title MemTest86
efi /efi/memtest86/BOOTX64.efi
"""
def write_loader_conf(profile, generation): def write_loader_conf(profile, generation):
with open("@efiSysMountPoint@/loader/loader.conf.tmp", 'w') as f: with open("@efiSysMountPoint@/loader/loader.conf.tmp", 'w') as f:
if "@timeout@" != "": if "@timeout@" != "":
@ -199,6 +208,24 @@ def main():
if os.readlink(system_dir(*gen)) == args.default_config: if os.readlink(system_dir(*gen)) == args.default_config:
write_loader_conf(*gen) write_loader_conf(*gen)
memtest_entry_file = "@efiSysMountPoint@/loader/entries/memtest86.conf"
if os.path.exists(memtest_entry_file):
os.unlink(memtest_entry_file)
shutil.rmtree("@efiSysMountPoint@/efi/memtest86", ignore_errors=True)
if "@memtest86@" != "":
mkdir_p("@efiSysMountPoint@/efi/memtest86")
for path in glob.iglob("@memtest86@/*"):
if os.path.isdir(path):
shutil.copytree(path, os.path.join("@efiSysMountPoint@/efi/memtest86", os.path.basename(path)))
else:
shutil.copy(path, "@efiSysMountPoint@/efi/memtest86/")
memtest_entry_file = "@efiSysMountPoint@/loader/entries/memtest86.conf"
memtest_entry_file_tmp_path = "%s.tmp" % memtest_entry_file
with open(memtest_entry_file_tmp_path, 'w') as f:
f.write(MEMTEST_BOOT_ENTRY)
os.rename(memtest_entry_file_tmp_path, memtest_entry_file)
# Since fat32 provides little recovery facilities after a crash, # Since fat32 provides little recovery facilities after a crash,
# it can leave the system in an unbootable state, when a crash/outage # it can leave the system in an unbootable state, when a crash/outage
# happens shortly after an update. To decrease the likelihood of this # happens shortly after an update. To decrease the likelihood of this


@ -25,6 +25,8 @@ let
inherit (cfg) consoleMode; inherit (cfg) consoleMode;
inherit (efi) efiSysMountPoint canTouchEfiVariables; inherit (efi) efiSysMountPoint canTouchEfiVariables;
memtest86 = if cfg.memtest86.enable then pkgs.memtest86-efi else "";
}; };
in { in {
@ -85,6 +87,19 @@ in {
</itemizedlist> </itemizedlist>
''; '';
}; };
memtest86 = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Make MemTest86, a program for testing memory, available from the
systemd-boot menu. MemTest86 is unfree software, so this requires
<literal>allowUnfree</literal> to be set to
<literal>true</literal>.
'';
};
};
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
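A sketch of what enabling this could look like in configuration.nix, assuming the option sits under boot.loader.systemd-boot as the surrounding module suggests:

    {
      nixpkgs.config.allowUnfree = true;   # memtest86-efi is unfree
      boot.loader.systemd-boot = {
        enable = true;
        memtest86.enable = true;           # adds the MemTest86 loader entry generated above
      };
    }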


@ -174,13 +174,13 @@ let
"--rm" "--rm"
"--name=%n" "--name=%n"
"--log-driver=${container.log-driver}" "--log-driver=${container.log-driver}"
] ++ optional (! isNull container.entrypoint) ] ++ optional (container.entrypoint != null)
"--entrypoint=${escapeShellArg container.entrypoint}" "--entrypoint=${escapeShellArg container.entrypoint}"
++ (mapAttrsToList (k: v: "-e ${escapeShellArg k}=${escapeShellArg v}") container.environment) ++ (mapAttrsToList (k: v: "-e ${escapeShellArg k}=${escapeShellArg v}") container.environment)
++ map (p: "-p ${escapeShellArg p}") container.ports ++ map (p: "-p ${escapeShellArg p}") container.ports
++ optional (! isNull container.user) "-u ${escapeShellArg container.user}" ++ optional (container.user != null) "-u ${escapeShellArg container.user}"
++ map (v: "-v ${escapeShellArg v}") container.volumes ++ map (v: "-v ${escapeShellArg v}") container.volumes
++ optional (! isNull container.workdir) "-w ${escapeShellArg container.workdir}" ++ optional (container.workdir != null) "-w ${escapeShellArg container.workdir}"
++ map escapeShellArg container.extraDockerOptions ++ map escapeShellArg container.extraDockerOptions
++ [container.image] ++ [container.image]
++ map escapeShellArg container.cmd ++ map escapeShellArg container.cmd
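For illustration, a hedged sketch of a declarative container using the options this hunk touches — assuming they live under docker-containers.&lt;name&gt; as in the module being patched; the image, ports and paths are only placeholders:

    {
      docker-containers.webserver = {
        image = "nginx:1.15";
        ports = [ "127.0.0.1:8080:80" ];
        volumes = [ "/var/www:/usr/share/nginx/html:ro" ];
        environment.TZ = "UTC";
        # entrypoint, user and workdir stay null by default and are only
        # passed to `docker run` when set, as the hunk above shows
      };
    }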


@ -46,8 +46,8 @@ in
description = description =
'' ''
When enabled dockerd is started on boot. This is required for When enabled dockerd is started on boot. This is required for
container, which are created with the containers which are created with the
<literal>--restart=always</literal> flag, to work. If this option is <literal>--restart=always</literal> flag to work. If this option is
disabled, docker might be started on demand by socket activation. disabled, docker might be started on demand by socket activation.
''; '';
}; };


@ -51,7 +51,7 @@ in
popd popd
''; '';
format = "raw"; format = "raw";
configFile = if isNull cfg.configFile then defaultConfigFile else cfg.configFile; configFile = if cfg.configFile == null then defaultConfigFile else cfg.configFile;
inherit (cfg) diskSize; inherit (cfg) diskSize;
inherit config lib pkgs; inherit config lib pkgs;
}; };


@ -89,11 +89,12 @@ in
gitlab = handleTest ./gitlab.nix {}; gitlab = handleTest ./gitlab.nix {};
gitolite = handleTest ./gitolite.nix {}; gitolite = handleTest ./gitolite.nix {};
gjs = handleTest ./gjs.nix {}; gjs = handleTest ./gjs.nix {};
google-oslogin = handleTest ./google-oslogin {};
gnome3 = handleTestOn ["x86_64-linux"] ./gnome3.nix {}; # libsmbios is unsupported on aarch64 gnome3 = handleTestOn ["x86_64-linux"] ./gnome3.nix {}; # libsmbios is unsupported on aarch64
gnome3-gdm = handleTestOn ["x86_64-linux"] ./gnome3-gdm.nix {}; # libsmbios is unsupported on aarch64 gnome3-gdm = handleTestOn ["x86_64-linux"] ./gnome3-gdm.nix {}; # libsmbios is unsupported on aarch64
gocd-agent = handleTest ./gocd-agent.nix {}; gocd-agent = handleTest ./gocd-agent.nix {};
gocd-server = handleTest ./gocd-server.nix {}; gocd-server = handleTest ./gocd-server.nix {};
google-oslogin = handleTest ./google-oslogin {};
graphene = handleTest ./graphene.nix {};
grafana = handleTest ./grafana.nix {}; grafana = handleTest ./grafana.nix {};
graphite = handleTest ./graphite.nix {}; graphite = handleTest ./graphite.nix {};
hadoop.hdfs = handleTestOn [ "x86_64-linux" ] ./hadoop/hdfs.nix {}; hadoop.hdfs = handleTestOn [ "x86_64-linux" ] ./hadoop/hdfs.nix {};
@ -151,6 +152,7 @@ in
mumble = handleTest ./mumble.nix {}; mumble = handleTest ./mumble.nix {};
munin = handleTest ./munin.nix {}; munin = handleTest ./munin.nix {};
mutableUsers = handleTest ./mutable-users.nix {}; mutableUsers = handleTest ./mutable-users.nix {};
mxisd = handleTest ./mxisd.nix {};
mysql = handleTest ./mysql.nix {}; mysql = handleTest ./mysql.nix {};
mysqlBackup = handleTest ./mysql-backup.nix {}; mysqlBackup = handleTest ./mysql-backup.nix {};
mysqlReplication = handleTest ./mysql-replication.nix {}; mysqlReplication = handleTest ./mysql-replication.nix {};
@ -220,6 +222,7 @@ in
rxe = handleTest ./rxe.nix {}; rxe = handleTest ./rxe.nix {};
samba = handleTest ./samba.nix {}; samba = handleTest ./samba.nix {};
sddm = handleTest ./sddm.nix {}; sddm = handleTest ./sddm.nix {};
signal-desktop = handleTest ./signal-desktop.nix {};
simple = handleTest ./simple.nix {}; simple = handleTest ./simple.nix {};
slim = handleTest ./slim.nix {}; slim = handleTest ./slim.nix {};
slurm = handleTest ./slurm.nix {}; slurm = handleTest ./slurm.nix {};
@ -230,12 +233,14 @@ in
strongswan-swanctl = handleTest ./strongswan-swanctl.nix {}; strongswan-swanctl = handleTest ./strongswan-swanctl.nix {};
sudo = handleTest ./sudo.nix {}; sudo = handleTest ./sudo.nix {};
switchTest = handleTest ./switch-test.nix {}; switchTest = handleTest ./switch-test.nix {};
syncthing-init = handleTest ./syncthing-init.nix {};
syncthing-relay = handleTest ./syncthing-relay.nix {}; syncthing-relay = handleTest ./syncthing-relay.nix {};
systemd = handleTest ./systemd.nix {}; systemd = handleTest ./systemd.nix {};
systemd-confinement = handleTest ./systemd-confinement.nix {}; systemd-confinement = handleTest ./systemd-confinement.nix {};
pdns-recursor = handleTest ./pdns-recursor.nix {}; pdns-recursor = handleTest ./pdns-recursor.nix {};
taskserver = handleTest ./taskserver.nix {}; taskserver = handleTest ./taskserver.nix {};
telegraf = handleTest ./telegraf.nix {}; telegraf = handleTest ./telegraf.nix {};
tinydns = handleTest ./tinydns.nix {};
tomcat = handleTest ./tomcat.nix {}; tomcat = handleTest ./tomcat.nix {};
tor = handleTest ./tor.nix {}; tor = handleTest ./tor.nix {};
transmission = handleTest ./transmission.nix {}; transmission = handleTest ./transmission.nix {};

nixos/tests/graphene.nix Normal file

@ -0,0 +1,18 @@
# run installed tests
import ./make-test.nix ({ pkgs, ... }:
{
name = "graphene";
meta = {
maintainers = pkgs.graphene.meta.maintainers;
};
machine = { pkgs, ... }: {
environment.systemPackages = with pkgs; [ gnome-desktop-testing ];
};
testScript = ''
$machine->succeed("gnome-desktop-testing-runner -d '${pkgs.graphene.installedTests}/share'");
'';
})


@ -8,7 +8,7 @@ import ./make-test.nix ({ pkgs, ...} : let
in { in {
name = "mongodb"; name = "mongodb";
meta = with pkgs.stdenv.lib.maintainers; { meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ bluescreen303 offline cstrahan rvl ]; maintainers = [ bluescreen303 offline cstrahan rvl phile314 ];
}; };
nodes = { nodes = {
@ -17,6 +17,12 @@ in {
{ {
services = { services = {
mongodb.enable = true; mongodb.enable = true;
mongodb.enableAuth = true;
mongodb.initialRootPassword = "root";
mongodb.initialScript = pkgs.writeText "mongodb_initial.js" ''
db = db.getSiblingDB("nixtest");
db.createUser({user:"nixtest",pwd:"nixtest",roles:[{role:"readWrite",db:"nixtest"}]});
'';
mongodb.extraConfig = '' mongodb.extraConfig = ''
# Allow starting engine with only a small virtual disk # Allow starting engine with only a small virtual disk
storage.journal.enabled: false storage.journal.enabled: false
@ -29,6 +35,6 @@ in {
testScript = '' testScript = ''
startAll; startAll;
$one->waitForUnit("mongodb.service"); $one->waitForUnit("mongodb.service");
$one->succeed("mongo nixtest ${testQuery}") =~ /hello/ or die; $one->succeed("mongo -u nixtest -p nixtest nixtest ${testQuery}") =~ /hello/ or die;
''; '';
}) })


@ -10,7 +10,15 @@ import ./make-test.nix ({ pkgs, ...} : {
{ {
services.mysql.enable = true; services.mysql.enable = true;
services.mysql.initialDatabases = [ { name = "testdb"; schema = ./testdb.sql; } ]; services.mysql.initialDatabases = [
{ name = "testdb"; schema = ./testdb.sql; }
{ name = "empty_testdb"; }
];
# note that using pkgs.writeText here is generally not a good idea,
# as it will store the password in world-readable /nix/store ;)
services.mysql.initialScript = pkgs.writeText "mysql-init.sql" ''
CREATE USER 'passworduser'@'localhost' IDENTIFIED BY 'password123';
'';
services.mysql.package = pkgs.mysql; services.mysql.package = pkgs.mysql;
}; };
@ -36,11 +44,14 @@ import ./make-test.nix ({ pkgs, ...} : {
startAll; startAll;
$mysql->waitForUnit("mysql"); $mysql->waitForUnit("mysql");
$mysql->succeed("echo 'use testdb; select * from tests' | mysql -u root -N | grep 4"); $mysql->succeed("echo 'use empty_testdb;' | mysql -u root");
$mysql->succeed("echo 'use testdb; select * from tests;' | mysql -u root -N | grep 4");
# ';' acts as no-op, just check whether login succeeds with the user created from the initialScript
$mysql->succeed("echo ';' | mysql -u passworduser --password=password123");
$mariadb->waitForUnit("mysql"); $mariadb->waitForUnit("mysql");
$mariadb->succeed("echo 'use testdb; create table tests (test_id INT, PRIMARY KEY (test_id));' | sudo -u testuser mysql -u testuser"); $mariadb->succeed("echo 'use testdb; create table tests (test_id INT, PRIMARY KEY (test_id));' | sudo -u testuser mysql -u testuser");
$mariadb->succeed("echo 'use testdb; insert into tests values (42);' | sudo -u testuser mysql -u testuser"); $mariadb->succeed("echo 'use testdb; insert into tests values (42);' | sudo -u testuser mysql -u testuser");
$mariadb->succeed("echo 'use testdb; select test_id from tests' | sudo -u testuser mysql -u testuser -N | grep 42"); $mariadb->succeed("echo 'use testdb; select test_id from tests;' | sudo -u testuser mysql -u testuser -N | grep 42");
''; '';
}) })
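As the comment in this test notes, pkgs.writeText puts the script (and the password) into the world-readable Nix store. One way around that, sketched under the assumption that services.mysql.initialScript also accepts an absolute path string pointing outside the store:

    {
      services.mysql.enable = true;
      # /run/keys/mysql-init.sql is a hypothetical root-only file provisioned
      # out of band (e.g. by a deployment tool), so it never enters /nix/store
      services.mysql.initialScript = "/run/keys/mysql-init.sql";
    }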


@ -22,6 +22,10 @@ in {
# Don't inherit adminuser since "root" is supposed to be the default # Don't inherit adminuser since "root" is supposed to be the default
inherit adminpass; inherit adminpass;
}; };
autoUpdateApps = {
enable = true;
startAt = "20:00";
};
}; };
}; };
}; };


@ -7,7 +7,7 @@ with import ../lib/testing.nix { inherit system pkgs; };
with pkgs.lib; with pkgs.lib;
let let
redmineTest = package: makeTest { mysqlTest = package: makeTest {
machine = machine =
{ config, pkgs, ... }: { config, pkgs, ... }:
{ services.mysql.enable = true; { services.mysql.enable = true;
@ -21,6 +21,7 @@ let
services.redmine.enable = true; services.redmine.enable = true;
services.redmine.package = package; services.redmine.package = package;
services.redmine.database.type = "mysql2";
services.redmine.database.socket = "/run/mysqld/mysqld.sock"; services.redmine.database.socket = "/run/mysqld/mysqld.sock";
services.redmine.plugins = { services.redmine.plugins = {
redmine_env_auth = pkgs.fetchurl { redmine_env_auth = pkgs.fetchurl {
@ -38,7 +39,44 @@ let
testScript = '' testScript = ''
startAll; startAll;
$machine->waitForUnit('redmine.service');
$machine->waitForOpenPort('3000');
$machine->succeed("curl --fail http://localhost:3000/");
'';
};
pgsqlTest = package: makeTest {
machine =
{ config, pkgs, ... }:
{ services.postgresql.enable = true;
services.postgresql.ensureDatabases = [ "redmine" ];
services.postgresql.ensureUsers = [
{ name = "redmine";
ensurePermissions = { "DATABASE redmine" = "ALL PRIVILEGES"; };
}
];
services.redmine.enable = true;
services.redmine.package = package;
services.redmine.database.type = "postgresql";
services.redmine.database.host = "";
services.redmine.database.port = 5432;
services.redmine.plugins = {
redmine_env_auth = pkgs.fetchurl {
url = https://github.com/Intera/redmine_env_auth/archive/0.7.zip;
sha256 = "1xb8lyarc7mpi86yflnlgyllh9hfwb9z304f19dx409gqpia99sc";
};
};
services.redmine.themes = {
dkuk-redmine_alex_skin = pkgs.fetchurl {
url = https://bitbucket.org/dkuk/redmine_alex_skin/get/1842ef675ef3.zip;
sha256 = "0hrin9lzyi50k4w2bd2b30vrf1i4fi1c0gyas5801wn8i7kpm9yl";
};
};
};
testScript = ''
startAll;
$machine->waitForUnit('redmine.service'); $machine->waitForUnit('redmine.service');
$machine->waitForOpenPort('3000'); $machine->waitForOpenPort('3000');
$machine->succeed("curl --fail http://localhost:3000/"); $machine->succeed("curl --fail http://localhost:3000/");
@ -46,13 +84,18 @@ let
}; };
in in
{ {
redmine_3 = redmineTest pkgs.redmine // { v3-mysql = mysqlTest pkgs.redmine // {
name = "redmine_3"; name = "v3-mysql";
meta.maintainers = [ maintainers.aanderse ]; meta.maintainers = [ maintainers.aanderse ];
}; };
redmine_4 = redmineTest pkgs.redmine_4 // { v4-mysql = mysqlTest pkgs.redmine_4 // {
name = "redmine_4"; name = "v4-mysql";
meta.maintainers = [ maintainers.aanderse ];
};
v4-pgsql = pgsqlTest pkgs.redmine_4 // {
name = "v4-pgsql";
meta.maintainers = [ maintainers.aanderse ]; meta.maintainers = [ maintainers.aanderse ];
}; };
} }


@ -0,0 +1,37 @@
import ./make-test.nix ({ pkgs, ...} :
{
name = "signal-desktop";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ flokli ];
};
machine = { ... }:
{
imports = [
./common/user-account.nix
./common/x11.nix
];
services.xserver.enable = true;
services.xserver.displayManager.auto.user = "alice";
environment.systemPackages = [ pkgs.signal-desktop ];
};
enableOCR = true;
testScript = { nodes, ... }: let
user = nodes.machine.config.users.users.alice;
in ''
startAll;
$machine->waitForX;
# start signal desktop
$machine->execute("su - alice -c signal-desktop &");
# wait for the "Link your phone to Signal Desktop" message
$machine->waitForText(qr/Link your phone to Signal Desktop/);
$machine->screenshot("signal_desktop");
'';
})


@ -0,0 +1,30 @@
import ./make-test.nix ({ lib, pkgs, ... }: let
testId = "7CFNTQM-IMTJBHJ-3UWRDIU-ZGQJFR6-VCXZ3NB-XUH3KZO-N52ITXR-LAIYUAU";
in {
name = "syncthing-init";
meta.maintainers = with pkgs.stdenv.lib.maintainers; [ lassulus ];
machine = {
services.syncthing = {
enable = true;
declarative = {
devices.testDevice = {
id = testId;
};
folders.testFolder = {
path = "/tmp/test";
devices = [ "testDevice" ];
};
};
};
};
testScript = ''
$machine->waitForUnit("syncthing-init.service");
$machine->succeed("cat /var/lib/syncthing/config.xml") =~ /${testId}/ or die;
$machine->succeed("cat /var/lib/syncthing/config.xml") =~ /testFolder/ or die;
'';
})

nixos/tests/tinydns.nix Normal file

@ -0,0 +1,26 @@
import ./make-test.nix ({ lib, ...} : {
name = "tinydns";
meta = {
maintainers = with lib.maintainers; [ basvandijk ];
};
nodes = {
nameserver = { config, lib, ... } : let
ip = (lib.head config.networking.interfaces.eth1.ipv4.addresses).address;
in {
networking.nameservers = [ ip ];
services.tinydns = {
enable = true;
inherit ip;
data = ''
.foo.bar:${ip}
+.bla.foo.bar:1.2.3.4:300
'';
};
};
};
testScript = ''
$nameserver->start;
$nameserver->waitForUnit("tinydns.service");
$nameserver->succeed("host bla.foo.bar | grep '1\.2\.3\.4'");
'';
})


@ -47,7 +47,7 @@ in
client1 = client1 =
{ pkgs, nodes, ... }: { pkgs, nodes, ... }:
{ environment.systemPackages = [ pkgs.miniupnpc pkgs.netcat ]; { environment.systemPackages = [ pkgs.miniupnpc_2 pkgs.netcat ];
virtualisation.vlans = [ 2 ]; virtualisation.vlans = [ 2 ];
networking.defaultGateway = internalRouterAddress; networking.defaultGateway = internalRouterAddress;
networking.interfaces.eth1.ipv4.addresses = [ networking.interfaces.eth1.ipv4.addresses = [
@ -63,7 +63,7 @@ in
client2 = client2 =
{ pkgs, ... }: { pkgs, ... }:
{ environment.systemPackages = [ pkgs.miniupnpc ]; { environment.systemPackages = [ pkgs.miniupnpc_2 ];
virtualisation.vlans = [ 1 ]; virtualisation.vlans = [ 1 ];
networking.interfaces.eth1.ipv4.addresses = [ networking.interfaces.eth1.ipv4.addresses = [
{ address = externalClient2Address; prefixLength = 24; } { address = externalClient2Address; prefixLength = 24; }


@ -12,6 +12,12 @@ import ./make-test.nix ({ pkgs, ...} : {
enable = true; enable = true;
enableContribAndExtras = true; enableContribAndExtras = true;
extraPackages = with pkgs.haskellPackages; haskellPackages: [ xmobar ]; extraPackages = with pkgs.haskellPackages; haskellPackages: [ xmobar ];
config = ''
import XMonad
import XMonad.Util.EZConfig
main = launch $ def `additionalKeysP` myKeys
myKeys = [ ("M-C-x", spawn "xterm") ]
'';
}; };
}; };
@ -19,6 +25,10 @@ import ./make-test.nix ({ pkgs, ...} : {
$machine->waitForX; $machine->waitForX;
$machine->waitForFile("/home/alice/.Xauthority"); $machine->waitForFile("/home/alice/.Xauthority");
$machine->succeed("xauth merge ~alice/.Xauthority"); $machine->succeed("xauth merge ~alice/.Xauthority");
$machine->sendKeys("alt-ctrl-x");
$machine->waitForWindow(qr/machine.*alice/);
$machine->sleep(1);
$machine->screenshot("terminal");
$machine->waitUntilSucceeds("xmonad --restart"); $machine->waitUntilSucceeds("xmonad --restart");
$machine->sleep(3); $machine->sleep(3);
$machine->sendKeys("alt-shift-ret"); $machine->sendKeys("alt-shift-ret");


@ -1,6 +1,6 @@
let let
version = "2.5.0"; version = "2.5.1";
sha256 = "1dsckybjg2cvrvcs1bya03xymcm0whfxcb1v0vljn5pghyazgvhx"; sha256 = "0nnrgc2qyqqld3znjigryqpg5jaqh3jnmin4a334dbr4jw50dz3d";
cargoSha256 = "0z7dmzpqg0qnkga7r4ykwrvz8ds1k9ik7cx58h2vnmhrhrddvizr"; cargoSha256 = "184vfhsalk5dims3k13zrsv4lmm45a7nm3r0b84g72q7hhbl8pkf";
in in
import ./parity.nix { inherit version sha256 cargoSha256; } import ./parity.nix { inherit version sha256 cargoSha256; }


@ -1,6 +1,6 @@
let let
version = "2.4.5"; version = "2.4.6";
sha256 = "02ajwjw6cz86x6zybvw5l0pgv7r370hickjv9ja141w7bhl70q3v"; sha256 = "0vfq1pyd92n60h9gimn4d5j56xanvl43sgxk9h2kb16amy0mmh3z";
cargoSha256 = "1n218c43gf200xlb3q03bd6w4kas0jsqx6ciw9s6h7h18wwibvf1"; cargoSha256 = "04gi9vddahq1q207f83n3wriwdjnmmnby6mq4crdh7yx1p4b26m9";
in in
import ./parity.nix { inherit version sha256 cargoSha256; } import ./parity.nix { inherit version sha256 cargoSha256; }


@ -7,12 +7,12 @@
with stdenv.lib; with stdenv.lib;
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
version = "2.3.1"; version = "2.3.2";
name = "audacity-${version}"; name = "audacity-${version}";
src = fetchurl { src = fetchurl {
url = "https://github.com/audacity/audacity/archive/Audacity-${version}.tar.gz"; url = "https://github.com/audacity/audacity/archive/Audacity-${version}.tar.gz";
sha256 = "089kz6hgqg0caz33sps19wpkfnza5gf7brdq2p9y6bnwkipw1w9f"; sha256 = "0cf7fr1qhyyylj8g9ax1rq5sb887bcv5b8d7hwlcfwamzxqpliyc";
}; };
preConfigure = /* we prefer system-wide libs */ '' preConfigure = /* we prefer system-wide libs */ ''


@ -0,0 +1,71 @@
{ stdenv, fetchFromGitHub, alsaLib, file, fluidsynth, ffmpeg, fftw, jack2,
liblo, libpulseaudio, libsndfile, makeWrapper, pkgconfig, python3Packages,
which, withFrontend ? true,
withQt ? true, qtbase ? null,
withGtk2 ? true, gtk2 ? null,
withGtk3 ? true, gtk3 ? null }:
with stdenv.lib;
assert withFrontend -> python3Packages ? pyqt5;
assert withQt -> qtbase != null;
assert withGtk2 -> gtk2 != null;
assert withGtk3 -> gtk3 != null;
stdenv.mkDerivation rec {
pname = "carla";
version = "2.0.0";
src = fetchFromGitHub {
owner = "falkTX";
repo = pname;
rev = "v${version}";
sha256 = "0fqgncqlr86n38yy7pa118mswfacmfczj7w9xx6c6k0jav3wk29k";
};
nativeBuildInputs = [ python3Packages.wrapPython pkgconfig which ];
pythonPath = with python3Packages; [
rdflib pyliblo
] ++ optional withFrontend pyqt5;
buildInputs = [
file liblo alsaLib fluidsynth ffmpeg jack2 libpulseaudio libsndfile
] ++ pythonPath
++ optional withQt qtbase
++ optional withGtk2 gtk2
++ optional withGtk3 gtk3;
installFlags = [ "PREFIX=$(out)" ];
postFixup = ''
# Also sets program_PYTHONPATH and program_PATH variables
wrapPythonPrograms
find "$out/share/carla" -maxdepth 1 -type f -not -name "*.py" -print0 | while read -d "" f; do
patchPythonScript "$f"
done
patchPythonScript "$out/share/carla/carla_settings.py"
for program in $out/bin/*; do
wrapProgram "$program" \
--prefix PATH : "$program_PATH:${which}/bin" \
--set PYTHONNOUSERSITE true \
--prefix QT_PLUGIN_PATH : "${qtbase.bin}/${qtbase.qtPluginPrefix}"
done
'';
meta = with stdenv.lib; {
homepage = http://kxstudio.sf.net/carla;
description = "An audio plugin host";
longDescription = ''
It currently supports LADSPA (including LRDF), DSSI, LV2, VST2/3
and AU plugin formats, plus GIG, SF2 and SFZ file support.
It uses JACK as the default and preferred audio driver but also
supports native drivers like ALSA, DirectSound or CoreAudio.
'';
license = licenses.gpl2Plus;
maintainers = [ maintainers.minijackson ];
platforms = platforms.linux;
};
}


@ -21,7 +21,7 @@
python3.pkgs.buildPythonApplication rec { python3.pkgs.buildPythonApplication rec {
pname = "lollypop"; pname = "lollypop";
version = "1.0.7"; version = "1.0.10";
format = "other"; format = "other";
doCheck = false; doCheck = false;
@ -30,7 +30,7 @@ python3.pkgs.buildPythonApplication rec {
url = "https://gitlab.gnome.org/World/lollypop"; url = "https://gitlab.gnome.org/World/lollypop";
rev = "refs/tags/${version}"; rev = "refs/tags/${version}";
fetchSubmodules = true; fetchSubmodules = true;
sha256 = "0gdds4qssn32axsa5janqny5i4426azj5wyj6bzn026zs3z38svn"; sha256 = "118z1qhvpv7x5n63lpm4mf81pmv7gd450sa55i68mnjvry93h9h5";
}; };
nativeBuildInputs = [ nativeBuildInputs = [
@ -59,10 +59,8 @@ python3.pkgs.buildPythonApplication rec {
propagatedBuildInputs = with python3.pkgs; [ propagatedBuildInputs = with python3.pkgs; [
beautifulsoup4 beautifulsoup4
gst-python
pillow pillow
pycairo pycairo
pydbus
pygobject3 pygobject3
] ]
++ lib.optional lastFMSupport pylast ++ lib.optional lastFMSupport pylast
@ -84,6 +82,7 @@ python3.pkgs.buildPythonApplication rec {
description = "A modern music player for GNOME"; description = "A modern music player for GNOME";
homepage = https://wiki.gnome.org/Apps/Lollypop; homepage = https://wiki.gnome.org/Apps/Lollypop;
license = licenses.gpl3Plus; license = licenses.gpl3Plus;
changelog = "https://gitlab.gnome.org/World/lollypop/tags/${version}";
maintainers = with maintainers; [ worldofpeace ]; maintainers = with maintainers; [ worldofpeace ];
platforms = platforms.linux; platforms = platforms.linux;
}; };


@ -1,9 +1,35 @@
{ stdenv, fetchurl, ncurses, pkgconfig, alsaLib, flac, libmad, speex, ffmpeg { stdenv, fetchurl, pkgconfig
, libvorbis, libmpc, libsndfile, libjack2, db, libmodplug, timidity, libid3tag , ncurses, db , popt, libtool
, libtool # Sound sub-systems
, alsaSupport ? true, alsaLib
, pulseSupport ? true, libpulseaudio, autoreconfHook
, jackSupport ? true, libjack2
, ossSupport ? true
# Audio formats
, aacSupport ? true, faad2, libid3tag
, flacSupport ? true, flac
, midiSupport ? true, timidity
, modplugSupport ? true, libmodplug
, mp3Support ? true, libmad
, musepackSupport ? true, libmpc, libmpcdec, taglib
, vorbisSupport ? true, libvorbis
, speexSupport ? true, speex
, ffmpegSupport ? true, ffmpeg
, sndfileSupport ? true, libsndfile
, wavpackSupport ? true, wavpack
# Misc
, withffmpeg4 ? false, ffmpeg_4
, curlSupport ? true, curl
, samplerateSupport ? true, libsamplerate
, withDebug ? false
}: }:
stdenv.mkDerivation rec { let
opt = stdenv.lib.optional;
mkFlag = c: f: if c then "--with-${f}" else "--without-${f}";
in stdenv.mkDerivation rec {
name = "moc-${version}"; name = "moc-${version}";
version = "2.5.2"; version = "2.5.2";
@ -12,18 +38,67 @@ stdenv.mkDerivation rec {
sha256 = "026v977kwb0wbmlmf6mnik328plxg8wykfx9ryvqhirac0aq39pk"; sha256 = "026v977kwb0wbmlmf6mnik328plxg8wykfx9ryvqhirac0aq39pk";
}; };
nativeBuildInputs = [ pkgconfig ]; patches = []
++ opt withffmpeg4 ./moc-ffmpeg4.patch
++ opt pulseSupport ./pulseaudio.patch;
buildInputs = [ nativeBuildInputs = [ pkgconfig ]
ncurses alsaLib flac libmad speex ffmpeg libvorbis libmpc libsndfile libjack2 ++ opt pulseSupport autoreconfHook;
db libmodplug timidity libid3tag libtool
buildInputs = [ ncurses db popt libtool ]
# Sound sub-systems
++ opt alsaSupport alsaLib
++ opt pulseSupport libpulseaudio
++ opt jackSupport libjack2
# Audio formats
++ opt (aacSupport || mp3Support) libid3tag
++ opt aacSupport faad2
++ opt flacSupport flac
++ opt midiSupport timidity
++ opt modplugSupport libmodplug
++ opt mp3Support libmad
++ opt musepackSupport [ libmpc libmpcdec taglib ]
++ opt vorbisSupport libvorbis
++ opt speexSupport speex
++ opt (ffmpegSupport && !withffmpeg4) ffmpeg
++ opt (ffmpegSupport && withffmpeg4) ffmpeg_4
++ opt sndfileSupport libsndfile
++ opt wavpackSupport wavpack
# Misc
++ opt curlSupport curl
++ opt samplerateSupport libsamplerate;
configureFlags = [
# Sound sub-systems
(mkFlag alsaSupport "alsa")
(mkFlag pulseSupport "pulse")
(mkFlag jackSupport "jack")
(mkFlag ossSupport "oss")
# Audio formats
(mkFlag aacSupport "aac")
(mkFlag flacSupport "flac")
(mkFlag midiSupport "timidity")
(mkFlag modplugSupport "modplug")
(mkFlag mp3Support "mp3")
(mkFlag musepackSupport "musepack")
(mkFlag vorbisSupport "vorbis")
(mkFlag speexSupport "speex")
(mkFlag ffmpegSupport "ffmpeg")
(mkFlag sndfileSupport "sndfile")
(mkFlag wavpackSupport "wavpack")
# Misc
(mkFlag curlSupport "curl")
(mkFlag samplerateSupport "samplerate")
("--enable-debug=" + (if withDebug then "yes" else "no"))
"--disable-cache"
"--without-rcc"
]; ];
meta = with stdenv.lib; { meta = with stdenv.lib; {
description = "An ncurses console audio player designed to be powerful and easy to use"; description = "An ncurses console audio player designed to be powerful and easy to use";
homepage = http://moc.daper.net/; homepage = http://moc.daper.net/;
license = licenses.gpl2; license = licenses.gpl2;
maintainers = with maintainers; [ pSub jagajaga ]; maintainers = with maintainers; [ aethelz pSub jagajaga ];
platforms = platforms.linux; platforms = platforms.linux;
}; };
} }
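A sketch of how the new feature flags might be exercised via override — the attribute name and the particular flag choices are only illustrative:

    # e.g. in an overlay:
    self: super: {
      moc-small = super.moc.override {
        pulseSupport = false;   # drop the PulseAudio patch and libpulse dependency
        withffmpeg4 = true;     # build against ffmpeg_4 (applies moc-ffmpeg4.patch)
        wavpackSupport = false;
        withDebug = true;       # passes --enable-debug=yes to configure
      };
    }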


@ -0,0 +1,33 @@
Index: decoder_plugins/ffmpeg/ffmpeg.c
===================================================================
--- /decoder_plugins/ffmpeg/ffmpeg.c (revision: 2963)
+++ /decoder_plugins/ffmpeg/ffmpeg.c (working copy)
@@ -697,7 +697,7 @@
* FFmpeg/LibAV in use. For some versions this will be caught in
* *_find_stream_info() above and misreported as an unfound codec
* parameters error. */
- if (data->codec->capabilities & CODEC_CAP_EXPERIMENTAL) {
+ if (data->codec->capabilities & AV_CODEC_CAP_EXPERIMENTAL) {
decoder_error (&data->error, ERROR_FATAL, 0,
"The codec is experimental and may damage MOC: %s",
data->codec->name);
@@ -705,8 +705,8 @@
}
set_downmixing (data);
- if (data->codec->capabilities & CODEC_CAP_TRUNCATED)
- data->enc->flags |= CODEC_FLAG_TRUNCATED;
+ if (data->codec->capabilities & AV_CODEC_CAP_TRUNCATED)
+ data->enc->flags |= AV_CODEC_FLAG_TRUNCATED;
if (avcodec_open2 (data->enc, data->codec, NULL) < 0)
{
@@ -725,7 +725,7 @@
data->sample_width = sfmt_Bps (data->fmt);
- if (data->codec->capabilities & CODEC_CAP_DELAY)
+ if (data->codec->capabilities & AV_CODEC_CAP_DELAY)
data->delay = true;
data->seek_broken = is_seek_broken (data);
data->timing_broken = is_timing_broken (data->ic);


@ -0,0 +1,800 @@
diff --git a/audio.c b/audio.c
--- a/audio.c
+++ b/audio.c
@@ -32,6 +32,9 @@
#include "log.h"
#include "lists.h"
+#ifdef HAVE_PULSE
+# include "pulse.h"
+#endif
#ifdef HAVE_OSS
# include "oss.h"
#endif
@@ -893,6 +896,15 @@
}
#endif
+#ifdef HAVE_PULSE
+ if (!strcasecmp(name, "pulseaudio")) {
+ pulse_funcs (funcs);
+ printf ("Trying PulseAudio...\n");
+ if (funcs->init(&hw_caps))
+ return;
+ }
+#endif
+
#ifdef HAVE_OSS
if (!strcasecmp(name, "oss")) {
oss_funcs (funcs);
diff --git a/configure.in b/configure.in
--- a/configure.in
+++ b/configure.in
@@ -162,6 +162,21 @@
AC_MSG_ERROR([BerkeleyDB (libdb) not found.]))
fi
+AC_ARG_WITH(pulse, AS_HELP_STRING(--without-pulse,
+ Compile without PulseAudio support.))
+
+if test "x$with_pulse" != "xno"
+then
+ PKG_CHECK_MODULES(PULSE, [libpulse],
+ [SOUND_DRIVERS="$SOUND_DRIVERS PULSE"
+ EXTRA_OBJS="$EXTRA_OBJS pulse.o"
+ AC_DEFINE([HAVE_PULSE], 1, [Define if you have PulseAudio.])
+ EXTRA_LIBS="$EXTRA_LIBS $PULSE_LIBS"
+ CFLAGS="$CFLAGS $PULSE_CFLAGS"],
+ [true])
+fi
+
+
AC_ARG_WITH(oss, AS_HELP_STRING([--without-oss],
[Compile without OSS support]))
diff --git a/options.c b/options.c
--- a/options.c
+++ b/options.c
@@ -572,10 +572,11 @@
#ifdef OPENBSD
add_list ("SoundDriver", "SNDIO:JACK:OSS",
- CHECK_DISCRETE(5), "SNDIO", "Jack", "ALSA", "OSS", "null");
+ CHECK_DISCRETE(5), "SNDIO", "PulseAudio", "Jack", "ALSA", "OSS", "null");
+
#else
add_list ("SoundDriver", "Jack:ALSA:OSS",
- CHECK_DISCRETE(5), "SNDIO", "Jack", "ALSA", "OSS", "null");
+ CHECK_DISCRETE(5), "SNDIO", "PulseAudio", "Jack", "ALSA", "OSS", "null");
#endif
add_str ("JackClientName", "moc", CHECK_NONE);
diff --git a/pulse.c b/pulse.c
new file mode 100644
--- /dev/null
+++ b/pulse.c
@@ -0,0 +1,705 @@
+/*
+ * MOC - music on console
+ * Copyright (C) 2011 Marien Zwart <marienz@marienz.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+/* PulseAudio backend.
+ *
+ * FEATURES:
+ *
+ * Does not autostart a PulseAudio server, but uses an already-started
+ * one, which should be better than alsa-through-pulse.
+ *
+ * Supports control of either our stream's or our entire sink's volume
+ * while we are actually playing. Volume control while paused is
+ * intentionally unsupported: the PulseAudio documentation strongly
+ * suggests not passing in an initial volume when creating a stream
+ * (allowing the server to track this instead), and we do not know
+ * which sink to control if we do not have a stream open.
+ *
+ * IMPLEMENTATION:
+ *
+ * Most client-side (resource allocation) errors are fatal. Failure to
+ * create a server context or stream is not fatal (and MOC should cope
+ * with these failures too), but server communication failures later
+ * on are currently not handled (MOC has no great way for us to tell
+ * it we no longer work, and I am not sure if attempting to reconnect
+ * is worth it or even a good idea).
+ *
+ * The pulse "simple" API is too simple: it combines connecting to the
+ * server and opening a stream into one operation, while I want to
+ * connect to the server when MOC starts (and fall back to a different
+ * backend if there is no server), and I cannot open a stream at that
+ * time since I do not know the audio format yet.
+ *
+ * PulseAudio strongly recommends we use a high-latency connection,
+ * which the MOC frontend code might not expect from its audio
+ * backend. We'll see.
+ *
+ * We map MOC's percentage volumes linearly to pulse's PA_VOLUME_MUTED
+ * (0) .. PA_VOLUME_NORM range. This is what the PulseAudio docs recommend
+ * ( http://pulseaudio.org/wiki/WritingVolumeControlUIs ). It does mean
+ * PulseAudio volumes above PA_VOLUME_NORM do not work well with MOC.
+ *
+ * Comments in audio.h claim "All functions are executed only by one
+ * thread" (referring to the function in the hw_funcs struct). This is
+ * a blatant lie. Most of them are invoked off the "output buffer"
+ * thread (out_buf.c) but at least the "playing" thread (audio.c)
+ * calls audio_close which calls our close function. We can mostly
+ * ignore this problem because we serialize on the pulseaudio threaded
+ * mainloop lock. But it does mean that functions that are normally
+ * only called between open and close (like reset) are sometimes
+ * called without us having a stream. Bulletproof, therefore:
+ * serialize setting/unsetting our global stream using the threaded
+ * mainloop lock, and check for that stream being non-null before
+ * using it.
+ *
+ * I am not convinced there are no further dragons lurking here: can
+ * the "playing" thread(s) close and reopen our output stream while
+ * the "output buffer" thread is sending output there? We can bail if
+ * our stream is simply closed, but we do not currently detect it
+ * being reopened and no longer using the same sample format, which
+ * might have interesting results...
+ *
+ * Also, read_mixer is called from the main server thread (handling
+ * commands). This crashed me once when it got at a stream that was in
+ * the "creating" state and therefore did not have a valid stream
+ * index yet. Fixed by only assigning to the stream global when the
+ * stream is valid.
+ */
+
+#ifdef HAVE_CONFIG_H
+# include "config.h"
+#endif
+
+#define DEBUG
+
+#include <pulse/pulseaudio.h>
+#include "common.h"
+#include "log.h"
+#include "audio.h"
+
+
+/* The pulse mainloop and context are initialized in pulse_init and
+ * destroyed in pulse_shutdown.
+ */
+static pa_threaded_mainloop *mainloop = NULL;
+static pa_context *context = NULL;
+
+/* The stream is initialized in pulse_open and destroyed in pulse_close. */
+static pa_stream *stream = NULL;
+
+static int showing_sink_volume = 0;
+
+/* Callbacks that do nothing but wake up the mainloop. */
+
+static void context_state_callback (pa_context *context ATTR_UNUSED,
+ void *userdata)
+{
+ pa_threaded_mainloop *m = userdata;
+
+ pa_threaded_mainloop_signal (m, 0);
+}
+
+static void stream_state_callback (pa_stream *stream ATTR_UNUSED,
+ void *userdata)
+{
+ pa_threaded_mainloop *m = userdata;
+
+ pa_threaded_mainloop_signal (m, 0);
+}
+
+static void stream_write_callback (pa_stream *stream ATTR_UNUSED,
+ size_t nbytes ATTR_UNUSED, void *userdata)
+{
+ pa_threaded_mainloop *m = userdata;
+
+ pa_threaded_mainloop_signal (m, 0);
+}
+
+/* Initialize pulse mainloop and context. Failure to connect to the
+ * pulse daemon is nonfatal, everything else is fatal (as it
+ * presumably means we ran out of resources).
+ */
+static int pulse_init (struct output_driver_caps *caps)
+{
+ pa_context *c;
+ pa_proplist *proplist;
+
+ assert (!mainloop);
+ assert (!context);
+
+ mainloop = pa_threaded_mainloop_new ();
+ if (!mainloop)
+ fatal ("Cannot create PulseAudio mainloop");
+
+ if (pa_threaded_mainloop_start (mainloop) < 0)
+ fatal ("Cannot start PulseAudio mainloop");
+
+ /* TODO: possibly add more props.
+ *
+ * There are a few we could set in proplist.h but nothing I
+ * expect to be very useful.
+ *
+ * http://pulseaudio.org/wiki/ApplicationProperties recommends
+ * setting at least application.name, icon.name and media.role.
+ *
+ * No need to set application.name here, the name passed to
+ * pa_context_new_with_proplist overrides it.
+ */
+ proplist = pa_proplist_new ();
+ if (!proplist)
+ fatal ("Cannot allocate PulseAudio proplist");
+
+ pa_proplist_sets (proplist,
+ PA_PROP_APPLICATION_VERSION, PACKAGE_VERSION);
+ pa_proplist_sets (proplist, PA_PROP_MEDIA_ROLE, "music");
+ pa_proplist_sets (proplist, PA_PROP_APPLICATION_ID, "net.daper.moc");
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ c = pa_context_new_with_proplist (
+ pa_threaded_mainloop_get_api (mainloop),
+ PACKAGE_NAME, proplist);
+ pa_proplist_free (proplist);
+
+ if (!c)
+ fatal ("Cannot allocate PulseAudio context");
+
+ pa_context_set_state_callback (c, context_state_callback, mainloop);
+
+ /* Ignore return value, rely on state being set properly */
+ pa_context_connect (c, NULL, PA_CONTEXT_NOAUTOSPAWN, NULL);
+
+ while (1) {
+ pa_context_state_t state = pa_context_get_state (c);
+
+ if (state == PA_CONTEXT_READY)
+ break;
+
+ if (!PA_CONTEXT_IS_GOOD (state)) {
+ error ("PulseAudio connection failed: %s",
+ pa_strerror (pa_context_errno (c)));
+
+ goto unlock_and_fail;
+ }
+
+ debug ("waiting for context to become ready...");
+ pa_threaded_mainloop_wait (mainloop);
+ }
+
+ /* Only set the global now that the context is actually ready */
+ context = c;
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ /* We just make up the hardware capabilities, since pulse is
+ * supposed to be abstracting these out. Assume pulse will
+ * deal with anything we want to throw at it, and that we will
+ * only want mono or stereo audio.
+ */
+ caps->min_channels = 1;
+ caps->max_channels = 2;
+ caps->formats = (SFMT_S8 | SFMT_S16 | SFMT_S32 |
+ SFMT_FLOAT | SFMT_BE | SFMT_LE);
+
+ return 1;
+
+unlock_and_fail:
+
+ pa_context_unref (c);
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ pa_threaded_mainloop_stop (mainloop);
+ pa_threaded_mainloop_free (mainloop);
+ mainloop = NULL;
+
+ return 0;
+}
+
+static void pulse_shutdown (void)
+{
+ pa_threaded_mainloop_lock (mainloop);
+
+ pa_context_disconnect (context);
+ pa_context_unref (context);
+ context = NULL;
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ pa_threaded_mainloop_stop (mainloop);
+ pa_threaded_mainloop_free (mainloop);
+ mainloop = NULL;
+}
+
+static int pulse_open (struct sound_params *sound_params)
+{
+ pa_sample_spec ss;
+ pa_buffer_attr ba;
+ pa_stream *s;
+
+ assert (!stream);
+ /* Initialize everything to -1, which in practice gets us
+ * about 2 seconds of latency (which is fine). This is not the
+ * same as passing NULL for this struct, which gets us an
+ * unnecessarily short alsa-like latency.
+ */
+ ba.fragsize = (uint32_t) -1;
+ ba.tlength = (uint32_t) -1;
+ ba.prebuf = (uint32_t) -1;
+ ba.minreq = (uint32_t) -1;
+ ba.maxlength = (uint32_t) -1;
+
+ ss.channels = sound_params->channels;
+ ss.rate = sound_params->rate;
+ switch (sound_params->fmt) {
+ case SFMT_U8:
+ ss.format = PA_SAMPLE_U8;
+ break;
+ case SFMT_S16 | SFMT_LE:
+ ss.format = PA_SAMPLE_S16LE;
+ break;
+ case SFMT_S16 | SFMT_BE:
+ ss.format = PA_SAMPLE_S16BE;
+ break;
+ case SFMT_FLOAT | SFMT_LE:
+ ss.format = PA_SAMPLE_FLOAT32LE;
+ break;
+ case SFMT_FLOAT | SFMT_BE:
+ ss.format = PA_SAMPLE_FLOAT32BE;
+ break;
+ case SFMT_S32 | SFMT_LE:
+ ss.format = PA_SAMPLE_S32LE;
+ break;
+ case SFMT_S32 | SFMT_BE:
+ ss.format = PA_SAMPLE_S32BE;
+ break;
+
+ default:
+ fatal ("pulse: got unrequested format");
+ }
+
+ debug ("opening stream");
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ /* TODO: figure out if there are useful stream properties to set.
+ *
+ * I do not really see any in proplist.h that we can set from
+ * here (there are media title/artist/etc props but we do not
+ * have that data available here).
+ */
+ s = pa_stream_new (context, "music", &ss, NULL);
+ if (!s)
+ fatal ("pulse: stream allocation failed");
+
+ pa_stream_set_state_callback (s, stream_state_callback, mainloop);
+ pa_stream_set_write_callback (s, stream_write_callback, mainloop);
+
+ /* Ignore return value, rely on failed stream state instead. */
+ pa_stream_connect_playback (
+ s, NULL, &ba,
+ PA_STREAM_INTERPOLATE_TIMING |
+ PA_STREAM_AUTO_TIMING_UPDATE |
+ PA_STREAM_ADJUST_LATENCY,
+ NULL, NULL);
+
+ while (1) {
+ pa_stream_state_t state = pa_stream_get_state (s);
+
+ if (state == PA_STREAM_READY)
+ break;
+
+ if (!PA_STREAM_IS_GOOD (state)) {
+ error ("PulseAudio stream connection failed");
+
+ goto fail;
+ }
+
+ debug ("waiting for stream to become ready...");
+ pa_threaded_mainloop_wait (mainloop);
+ }
+
+ /* Only set the global stream now that it is actually ready */
+ stream = s;
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ return 1;
+
+fail:
+ pa_stream_unref (s);
+
+ pa_threaded_mainloop_unlock (mainloop);
+ return 0;
+}
+
+static void pulse_close (void)
+{
+ debug ("closing stream");
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ pa_stream_disconnect (stream);
+ pa_stream_unref (stream);
+ stream = NULL;
+
+ pa_threaded_mainloop_unlock (mainloop);
+}
+
+static int pulse_play (const char *buff, const size_t size)
+{
+ size_t offset = 0;
+
+ debug ("Got %d bytes to play", (int)size);
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ /* The buffer is usually writable when we get here, and there
+ * are usually few (if any) writes after the first one. So
+ * there is no point in doing further writes directly from the
+ * callback: we can just do all writes from this thread.
+ */
+
+ /* Break out of the loop if some other thread manages to close
+ * our stream underneath us.
+ */
+ while (stream) {
+ size_t towrite = MIN(pa_stream_writable_size (stream),
+ size - offset);
+ debug ("writing %d bytes", (int)towrite);
+
+ /* We have no working way of dealing with errors
+ * (see below). */
+ if (pa_stream_write(stream, buff + offset, towrite,
+ NULL, 0, PA_SEEK_RELATIVE))
+ error ("pa_stream_write failed");
+
+ offset += towrite;
+
+ if (offset >= size)
+ break;
+
+ pa_threaded_mainloop_wait (mainloop);
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ debug ("Done playing!");
+
+ /* We should always return size, calling code does not deal
+ * well with anything else. Only read the rest if you want to
+ * know why.
+ *
+ * The output buffer reader thread (out_buf.c:read_thread)
+ * repeatedly loads some 64k/0.1s of audio into a buffer on
+ * the stack, then calls audio_send_pcm repeatedly until this
+ * entire buffer has been processed (similar to the loop in
+ * this function). audio_send_pcm applies the softmixer and
+ * equalizer, then feeds the result to this function, passing
+ * through our return value.
+ *
+ * So if we return less than size the equalizer/softmixer is
+ * re-applied to the remaining data, which is silly. Also,
+ * audio_send_pcm checks for our return value being zero and
+ * calls fatal() if it is, so try to always process *some*
+ * data. Also, out_buf.c uses the return value of this
+ * function from the last run through its inner loop to update
+ * its time attribute, which means it will be interestingly
+ * off if that loop ran more than once.
+ *
+ * Oh, and alsa.c seems to think it can return -1 to indicate
+ * failure, which will cause out_buf.c to rewind its buffer
+ * (to before its start, usually).
+ */
+ return size;
+}
+
+static void volume_cb (const pa_cvolume *v, void *userdata)
+{
+ int *result = userdata;
+
+ if (v)
+ *result = 100 * pa_cvolume_avg (v) / PA_VOLUME_NORM;
+
+ pa_threaded_mainloop_signal (mainloop, 0);
+}
+
+static void sink_volume_cb (pa_context *c ATTR_UNUSED,
+ const pa_sink_info *i, int eol ATTR_UNUSED,
+ void *userdata)
+{
+ volume_cb (i ? &i->volume : NULL, userdata);
+}
+
+static void sink_input_volume_cb (pa_context *c ATTR_UNUSED,
+ const pa_sink_input_info *i,
+ int eol ATTR_UNUSED,
+ void *userdata ATTR_UNUSED)
+{
+ volume_cb (i ? &i->volume : NULL, userdata);
+}
+
+static int pulse_read_mixer (void)
+{
+ pa_operation *op;
+ int result = 0;
+
+ debug ("read mixer");
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ if (stream) {
+ if (showing_sink_volume)
+ op = pa_context_get_sink_info_by_index (
+ context, pa_stream_get_device_index (stream),
+ sink_volume_cb, &result);
+ else
+ op = pa_context_get_sink_input_info (
+ context, pa_stream_get_index (stream),
+ sink_input_volume_cb, &result);
+
+ while (pa_operation_get_state (op) == PA_OPERATION_RUNNING)
+ pa_threaded_mainloop_wait (mainloop);
+
+ pa_operation_unref (op);
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ return result;
+}
+
+static void pulse_set_mixer (int vol)
+{
+ pa_cvolume v;
+ pa_operation *op;
+
+ /* Setting volume for one channel does the right thing. */
+ pa_cvolume_set(&v, 1, vol * PA_VOLUME_NORM / 100);
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ if (stream) {
+ if (showing_sink_volume)
+ op = pa_context_set_sink_volume_by_index (
+ context, pa_stream_get_device_index (stream),
+ &v, NULL, NULL);
+ else
+ op = pa_context_set_sink_input_volume (
+ context, pa_stream_get_index (stream),
+ &v, NULL, NULL);
+
+ pa_operation_unref (op);
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+}
+
+static int pulse_get_buff_fill (void)
+{
+ /* This function is problematic. MOC uses it to for the "time
+ * remaining" in the UI, but calls it more than once per
+ * second (after each chunk of audio played, not for each
+ * playback time update). We have to be fairly accurate here
+ * for that time remaining to not jump weirdly. But PulseAudio
+ * cannot give us a 100% accurate value here, as it involves a
+ * server roundtrip. And if we call this a lot it suggests
+ * switching to a mode where the value is interpolated, making
+ * it presumably more inaccurate (see the flags we pass to
+ * pa_stream_connect_playback).
+ *
+ * MOC also contains what I believe to be a race: it calls
+ * audio_get_buff_fill "soon" (after playing the first chunk)
+ * after starting playback of the next song, at which point we
+ * still have part of the previous song buffered. This means
+ * our position into the new song is negative, which fails an
+ * assert (in out_buf.c:out_buf_time_get). There is no sane
+ * way for us to detect this condition. I believe no other
+ * backend triggers this because the assert sits after an
+ * implicit float -> int seconds conversion, which means we
+ * have to be off by at least an entire second to get a
+ * negative value, and none of the other backends have buffers
+ * that large (alsa buffers are supposedly a few 100 ms).
+ */
+ pa_usec_t buffered_usecs = 0;
+ int buffered_bytes = 0;
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ /* Using pa_stream_get_timing_info and returning the distance
+ * between write_index and read_index would be more obvious,
+ * but because of how the result is actually used I believe
+ * using the latency value is slightly more correct, and it
+ * makes the following crash-avoidance hack more obvious.
+ */
+
+ /* This function will frequently fail the first time we call
+ * it (pulse does not have the requested data yet). We ignore
+ * that and just return 0.
+ *
+ * Deal with stream being NULL too, just in case this is
+ * called in a racy fashion similar to how reset() is.
+ */
+ if (stream &&
+ pa_stream_get_latency (stream, &buffered_usecs, NULL) >= 0) {
+ /* Crash-avoidance HACK: floor our latency to at most
+ * 1 second. It is usually more, but reporting that at
+ * the start of playback crashes MOC, and we cannot
+ * sanely detect when reporting it is safe.
+ */
+ if (buffered_usecs > 1000000)
+ buffered_usecs = 1000000;
+
+ buffered_bytes = pa_usec_to_bytes (
+ buffered_usecs,
+ pa_stream_get_sample_spec (stream));
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ debug ("buffer fill: %d usec / %d bytes",
+ (int) buffered_usecs, (int) buffered_bytes);
+
+ return buffered_bytes;
+}
+
+static void flush_callback (pa_stream *s ATTR_UNUSED, int success,
+ void *userdata)
+{
+ int *result = userdata;
+
+ *result = success;
+
+ pa_threaded_mainloop_signal (mainloop, 0);
+}
+
+static int pulse_reset (void)
+{
+ pa_operation *op;
+ int result = 0;
+
+ debug ("reset requested");
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ /* We *should* have a stream here, but MOC is racy, so bulletproof */
+ if (stream) {
+ op = pa_stream_flush (stream, flush_callback, &result);
+
+ while (pa_operation_get_state (op) == PA_OPERATION_RUNNING)
+ pa_threaded_mainloop_wait (mainloop);
+
+ pa_operation_unref (op);
+ } else
+ logit ("pulse_reset() called without a stream");
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ return result;
+}
+
+static int pulse_get_rate (void)
+{
+ /* This is called once right after open. Do not bother making
+ * this fast. */
+
+ int result;
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ if (stream)
+ result = pa_stream_get_sample_spec (stream)->rate;
+ else {
+ error ("get_rate called without a stream");
+ result = 0;
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ return result;
+}
+
+static void pulse_toggle_mixer_channel (void)
+{
+ showing_sink_volume = !showing_sink_volume;
+}
+
+static void sink_name_cb (pa_context *c ATTR_UNUSED,
+ const pa_sink_info *i, int eol ATTR_UNUSED,
+ void *userdata)
+{
+ char **result = userdata;
+
+ if (i && !*result)
+ *result = xstrdup (i->name);
+
+ pa_threaded_mainloop_signal (mainloop, 0);
+}
+
+static void sink_input_name_cb (pa_context *c ATTR_UNUSED,
+ const pa_sink_input_info *i,
+ int eol ATTR_UNUSED,
+ void *userdata)
+{
+ char **result = userdata;
+
+ if (i && !*result)
+ *result = xstrdup (i->name);
+
+ pa_threaded_mainloop_signal (mainloop, 0);
+}
+
+static char *pulse_get_mixer_channel_name (void)
+{
+ char *result = NULL;
+ pa_operation *op;
+
+ pa_threaded_mainloop_lock (mainloop);
+
+ if (stream) {
+ if (showing_sink_volume)
+ op = pa_context_get_sink_info_by_index (
+ context, pa_stream_get_device_index (stream),
+ sink_name_cb, &result);
+ else
+ op = pa_context_get_sink_input_info (
+ context, pa_stream_get_index (stream),
+ sink_input_name_cb, &result);
+
+ while (pa_operation_get_state (op) == PA_OPERATION_RUNNING)
+ pa_threaded_mainloop_wait (mainloop);
+
+ pa_operation_unref (op);
+ }
+
+ pa_threaded_mainloop_unlock (mainloop);
+
+ if (!result)
+ result = xstrdup ("disconnected");
+
+ return result;
+}
+
+void pulse_funcs (struct hw_funcs *funcs)
+{
+ funcs->init = pulse_init;
+ funcs->shutdown = pulse_shutdown;
+ funcs->open = pulse_open;
+ funcs->close = pulse_close;
+ funcs->play = pulse_play;
+ funcs->read_mixer = pulse_read_mixer;
+ funcs->set_mixer = pulse_set_mixer;
+ funcs->get_buff_fill = pulse_get_buff_fill;
+ funcs->reset = pulse_reset;
+ funcs->get_rate = pulse_get_rate;
+ funcs->toggle_mixer_channel = pulse_toggle_mixer_channel;
+ funcs->get_mixer_channel_name = pulse_get_mixer_channel_name;
+}
diff --git a/pulse.h b/pulse.h
new file mode 100644
--- /dev/null
+++ b/pulse.h
@@ -0,0 +1,14 @@
+#ifndef PULSE_H
+#define PULSE_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void pulse_funcs (struct hw_funcs *funcs);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif


@ -19,7 +19,7 @@ stdenv.mkDerivation rec {
src = src =
if stdenv.hostPlatform.system == "x86_64-linux" then if stdenv.hostPlatform.system == "x86_64-linux" then
if builtins.isNull releasePath then if releasePath == null then
fetchurl { fetchurl {
url = "https://files.renoise.com/demo/Renoise_${urlVersion version}_Demo_x86_64.tar.bz2"; url = "https://files.renoise.com/demo/Renoise_${urlVersion version}_Demo_x86_64.tar.bz2";
sha256 = "0pan68fr22xbj7a930y29527vpry3f07q3i9ya4fp6g7aawffsga"; sha256 = "0pan68fr22xbj7a930y29527vpry3f07q3i9ya4fp6g7aawffsga";
@ -27,7 +27,7 @@ stdenv.mkDerivation rec {
else else
releasePath releasePath
else if stdenv.hostPlatform.system == "i686-linux" then else if stdenv.hostPlatform.system == "i686-linux" then
if builtins.isNull releasePath then if releasePath == null then
fetchurl { fetchurl {
url = "http://files.renoise.com/demo/Renoise_${urlVersion version}_Demo_x86.tar.bz2"; url = "http://files.renoise.com/demo/Renoise_${urlVersion version}_Demo_x86.tar.bz2";
sha256 = "1lccjj4k8hpqqxxham5v01v2rdwmx3c5kgy1p9lqvzqma88k4769"; sha256 = "1lccjj4k8hpqqxxham5v01v2rdwmx3c5kgy1p9lqvzqma88k4769";

Some files were not shown because too many files have changed in this diff.