Merge remote-tracking branch 'u/staging' into openmoji

This commit is contained in:
Kevin Cox 2021-08-13 21:08:58 +00:00
commit f5e552ec01
No known key found for this signature in database
GPG Key ID: 9BB92CC1552E99AA
2505 changed files with 80716 additions and 31193 deletions

View File

@ -21,9 +21,9 @@ Reviewing guidelines: https://nixos.org/manual/nixpkgs/unstable/#chap-reviewing-
- [ ] macOS
- [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [21.11 Release Notes (or backporting 21.05 Relase notes)](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#generating-2111-release-notes)
- [21.11 Release Notes (or backporting 21.05 Release notes)](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#generating-2111-release-notes)
- [ ] (Package updates) Added a release notes entry if the change is major or breaking
- [ ] (Module updates) Added a release notes entry if the change is significant
- [ ] (Module addition) Added a release notes entry if adding a new NixOS module

.github/labeler.yml vendored
View File

@ -70,6 +70,7 @@
"6.topic: nixos":
- nixos/**/*
- pkgs/os-specific/linux/nixos-rebuild/**/*
"6.topic: ocaml":
- doc/languages-frameworks/ocaml.section.md

View File

@ -1,7 +1,7 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](../COPYING), which is an MIT-like license.
under the terms of [COPYING](COPYING), which is an MIT-like license.
## Opening issues

View File

@ -97,7 +97,8 @@ Foundation](https://nixos.org/nixos/foundation.html). To ensure the
continuity and expansion of the NixOS infrastructure, we are looking
for donations to our organization.
You can donate to the NixOS foundation by using Open Collective:
You can donate to the NixOS foundation through [SEPA bank
transfers](https://nixos.org/donate.html) or by using Open Collective:
<a href="https://opencollective.com/nixos#support"><img src="https://opencollective.com/nixos/tiers/supporter.svg?width=890" /></a>

View File

@ -110,7 +110,7 @@ overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesFor emacs).overrideScope' overrides).emacs.pkgs.withPackages
((emacsPackagesFor emacs).overrideScope' overrides).withPackages
(p: with p; [
# here both these packages will use haskell-mode of our own choice
ghc-mod

View File

@ -520,7 +520,7 @@ If you do need to do create this sort of patch file, one way to do so is with gi
4. Use git to create a diff, and pipe the output to a patch file:
```ShellSession
$ git diff > nixpkgs/pkgs/the/package/0001-changes.patch
$ git diff -a > nixpkgs/pkgs/the/package/0001-changes.patch
```
If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`:
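As a sketch, with a hypothetical URL and a placeholder hash (`stripLen` and `extraPrefix` are among the optional arguments `fetchpatch` accepts):
```nix
fetchpatch {
  url = "https://example.com/0001-changes.patch"; # hypothetical
  sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  stripLen = 1;                 # drop one extra leading path component
  extraPrefix = "third_party/"; # re-prefix the remaining paths
}
```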
@ -537,7 +537,13 @@ Note that because the checksum is computed after applying these effects, using o
Tests are important to ensure quality and make reviews and automatic updates easy.
Nix package tests are a lightweight alternative to [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests). They can be used to create simple integration tests for packages while the module tests are used to test services or programs with a graphical user interface on a NixOS VM. Unittests that are included in the source code of a package should be executed in the `checkPhase`.
The following types of tests exist:
* [NixOS **module tests**](https://nixos.org/manual/nixos/stable/#sec-nixos-tests), which spawn one or more NixOS VMs. They exercise both NixOS modules and the packaged programs used within them. For example, a NixOS module test can start a web server VM running the `nginx` module, and a client VM running `curl` or a graphical `firefox`, and test that they can talk to each other and display the correct content.
* Nix **package tests** are a lightweight alternative to NixOS module tests. They should be used to create simple integration tests for packages, but cannot test NixOS services, and some programs with graphical user interfaces may also be difficult to test with them.
* The **`checkPhase` of a package**, which should execute the unit tests that are included in the source code of a package.
Here in the nixpkgs manual we describe mostly _package tests_; for _module tests_ head over to the corresponding [section in the NixOS manual](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
### Writing package tests {#ssec-package-tests-writing}
@ -568,7 +574,7 @@ let
inherit (phoronix-test-suite) pname version;
in
runCommand "${pname}-tests" { meta.timeout = 3; }
runCommand "${pname}-tests" { meta.timeout = 60; }
''
# automatic initial setup to prevent interactive questions
${phoronix-test-suite}/bin/phoronix-test-suite enterprise-setup >/dev/null
@ -602,3 +608,23 @@ Here are examples of package tests:
- [Spacy annotation test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/spacy/annotation-test/default.nix)
- [Libtorch test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/science/math/libtorch/test/default.nix)
- [Multiple tests for nanopb](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/nanopb/default.nix)
### Linking NixOS module tests to a package {#ssec-nixos-tests-linking}
Like the [package tests](#ssec-package-tests-writing) shown above, [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests) can also be linked to a package, so that the tests can easily be run when the related package changes.
For example, assuming we're packaging `nginx`, we can link its module test via `passthru.tests`:
```nix
{ stdenv, lib, nixosTests }:
stdenv.mkDerivation {
...
passthru.tests = {
nginx = nixosTests.nginx;
};
...
}
```
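Once linked, the tests can be built from the package side; a sketch, assuming the `nginx` attribute path from the example above:
```ShellSession
$ cd /path/to/nixpkgs
$ nix-build -A nginx.tests
```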

View File

@ -772,7 +772,7 @@ nameValuePair "some" 6
<title>Modifying each value of an attribute set</title>
<programlisting><![CDATA[
lib.attrsets.mapAttrs
(name: value: name + "-" value)
(name: value: name + "-" + value)
{ x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
]]></programlisting>

View File

@ -13,6 +13,7 @@ In the following is an example expression using `buildGoModule`, the following a
- `vendorSha256`: is the hash of the output of the intermediate fetcher derivation. `vendorSha256` can also take `null` as an input. When `null` is used as a value, rather than fetching the dependencies and vendoring them, we use the vendoring included within the source repo. If you'd like to not have to update this field on dependency changes, run `go mod vendor` in your source repo and set `vendorSha256 = null;`
- `runVend`: runs the vend command to generate the vendor directory. This is useful if your code depends on C code and `go mod tidy` does not include the needed sources to build.
- `proxyVendor`: Fetches (`go mod download`) and proxies the vendor directory. This is useful if any dependency has case-insensitive conflicts which will produce platform-dependent `vendorSha256` checksums.
```nix
pet = buildGoModule rec {
@ -112,16 +113,6 @@ done
Both `buildGoModule` and `buildGoPackage` can be tweaked to behave slightly differently, if the following attributes are used:
### `buildFlagsArray` and `buildFlags`: {#ex-goBuildFlags-noarray}
These attributes set build flags supported by `go build`. We recommend using `buildFlagsArray`.
```nix
buildFlagsArray = [
"-tags=release"
];
```
### `ldflags` {#var-go-ldflags}
Arguments to pass to the Go linker tool via the `-ldflags` argument of `go build`. The most common use case for this argument is to make the resulting executable aware of its own version. For example:
@ -134,6 +125,21 @@ Arguments to pass to the Go linker tool via the `-ldflags` argument of `go build
];
```
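A sketch of the common pattern (the `main.version` variable path is hypothetical and must match the package's own source):
```nix
ldflags = [
  "-s" "-w"                    # strip symbol table and debug info
  "-X main.version=${version}" # inject the version at link time
];
```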
### `tags` {#var-go-tags}
Arguments to pass to Go via the `-tags` argument of `go build`. For example:
```nix
tags = [
"production"
"sqlite"
];
```
```nix
tags = [ "production" ] ++ lib.optionals withSqlite [ "sqlite" ];
```
### `deleteVendor` {#var-go-deleteVendor}
Removes the pre-existing vendor directory. This should only be used if the dependencies included in the vendor folder are broken or incomplete.
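A sketch of its use, alongside the attributes above, inside a `buildGoModule` call:
```nix
deleteVendor = true;
```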

View File

@ -139,11 +139,9 @@ the whitelist maintainers/scripts/luarocks-packages.csv and updated by running m
[luarocks2nix](https://github.com/nix-community/luarocks) is a tool capable of generating nix derivations from both rockspec and src.rock (and favors the src.rock).
The automation only goes so far, though, and some packages need to be customized.
These customizations go in `pkgs/development/lua-modules/overrides.nix`.
For instance if the rockspec defines `external_dependencies`, these need to be manually added in its rockspec file then it won't work.
For instance, if the rockspec defines `external_dependencies`, these need to be manually added to `overrides.nix`.
You can try converting luarocks packages to nix packages with the command `nix-shell -p luarocks-nix` and then `luarocks nix PKG_NAME`.
Nix relies on luarocks to install Lua packages; basically it runs:
`luarocks make --deps-mode=none --tree $out`
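A sketch of that conversion workflow (the package name is illustrative):
```ShellSession
$ nix-shell -p luarocks-nix
[nix-shell]$ luarocks nix luaexpat
```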
#### Packaging a library manually {#packaging-a-library-manually}
@ -161,8 +159,8 @@ are not packaged for luarocks. You can see a few examples at `pkgs/top-level/lua
### Lua interpreters {#lua-interpreters}
Versions 5.1, 5.2 and 5.3 of the lua interpreter are available as
respectively `lua5_1`, `lua5_2` and `lua5_3`. Luajit is available too.
Versions 5.1, 5.2, 5.3 and 5.4 of the Lua interpreter are available as
`lua5_1`, `lua5_2`, `lua5_3` and `lua5_4`, respectively. LuaJIT is available too.
The Nix expressions for the interpreters can be found in `pkgs/development/interpreters/lua-5`.
#### Attributes on lua interpreters packages {#attributes-on-lua-interpreters-packages}

View File

@ -39,7 +39,7 @@ To add a package from NPM to nixpkgs:
1. Modify `pkgs/development/node-packages/node-packages.json` to add, update
or remove package entries to have it included in `nodePackages` and
`nodePackages_latest`.
2. Run the script: `(cd pkgs/development/node-packages && ./generate.sh)`.
2. Run the script: `cd pkgs/development/node-packages && ./generate.sh`.
3. Build your new package to test your changes:
`cd /path/to/nixpkgs && nix-build -A nodePackages.<new-or-updated-package>`.
To build against the latest stable Current Node.js version (e.g. 14.x):

View File

@ -129,7 +129,15 @@ rustPlatform.buildRustPackage rec {
```
This will retrieve the dependencies using fixed-output derivations from
the specified lockfile.
the specified lockfile. Note that setting `cargoLock.lockFile` doesn't
add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still required
to build a Rust package. A simple fix is to use:
```nix
postPatch = ''
cp ${./Cargo.lock} Cargo.lock
'';
```
The output hash of each dependency that uses a git source must be
specified in the `outputHashes` attribute. For example:
@ -144,7 +152,7 @@ rustPlatform.buildRustPackage rec {
outputHashes = {
"finalfusion-0.14.0" = "17f4bsdzpcshwh74w5z119xjy2if6l2wgyjy56v621skr2r8y904";
};
}
};
# ...
}

View File

@ -309,7 +309,7 @@ Sample output2:
## Adding new plugins to nixpkgs {#adding-new-plugins-to-nixpkgs}
Nix expressions for Vim plugins are stored in [pkgs/misc/vim-plugins](/pkgs/misc/vim-plugins). For the vast majority of plugins, Nix expressions are automatically generated by running [`./update.py`](/pkgs/misc/vim-plugins/update.py). This creates a [generated.nix](/pkgs/misc/vim-plugins/generated.nix) file based on the plugins listed in [vim-plugin-names](/pkgs/misc/vim-plugins/vim-plugin-names). Plugins are listed in alphabetical order in `vim-plugin-names` using the format `[github username]/[repository]`. For example https://github.com/scrooloose/nerdtree becomes `scrooloose/nerdtree`.
Nix expressions for Vim plugins are stored in [pkgs/misc/vim-plugins](/pkgs/misc/vim-plugins). For the vast majority of plugins, Nix expressions are automatically generated by running [`./update.py`](/pkgs/misc/vim-plugins/update.py). This creates a [generated.nix](/pkgs/misc/vim-plugins/generated.nix) file based on the plugins listed in [vim-plugin-names](/pkgs/misc/vim-plugins/vim-plugin-names). Plugins are listed in alphabetical order in `vim-plugin-names` using the format `[github username]/[repository]@[gitref]`. For example https://github.com/scrooloose/nerdtree becomes `scrooloose/nerdtree`.
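For reference, a sketch of such entries (the second line's `@[gitref]` suffix is illustrative; the updater defaults the branch to `master` when the suffix is omitted):
```
scrooloose/nerdtree
neovim/nvim-lspconfig@master
```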
Some plugins require overrides in order to function properly. Overrides are placed in [overrides.nix](/pkgs/misc/vim-plugins/overrides.nix). Overrides are most often required when a plugin requires some dependencies, or extra steps are required during the build process. For example `deoplete-fish` requires both `deoplete-nvim` and `vim-fish`, and so the following override was added:

View File

@ -5,7 +5,7 @@ let
inherit (builtins) head tail length;
inherit (lib.trivial) and;
inherit (lib.strings) concatStringsSep sanitizeDerivationName;
inherit (lib.lists) fold concatMap concatLists;
inherit (lib.lists) fold foldr concatMap concatLists;
in
rec {
@ -152,8 +152,8 @@ rec {
=> { a = [ 2 3 ]; }
*/
foldAttrs = op: nul: list_of_attrs:
fold (n: a:
fold (name: o:
foldr (n: a:
foldr (name: o:
o // { ${name} = op n.${name} (a.${name} or nul); }
) a (attrNames n)
) {} list_of_attrs;
@ -455,7 +455,7 @@ rec {
=> true
*/
matchAttrs = pattern: attrs: assert isAttrs pattern;
fold and true (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
foldr and true (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
let pat = head values; val = head (tail values); in
if length values == 1 then false
else if isAttrs pat then isAttrs val && matchAttrs pat val

View File

@ -77,11 +77,11 @@ rec {
# Output : are reqs satisfied? It's asserted.
checkReqs = attrSet: argList: condList:
(
fold lib.and true
foldr lib.and true
(map (x: let name = (head x); in
((checkFlag attrSet name) ->
(fold lib.and true
(foldr lib.and true
(map (y: let val=(getValue attrSet argList y); in
(val!=null) && (val!=false))
(tail x))))) condList));
@ -177,7 +177,7 @@ rec {
# merge attributes with custom function handling the case that the attribute
# exists in both sets
mergeAttrsWithFunc = f: set1: set2:
fold (n: set: if set ? ${n}
foldr (n: set: if set ? ${n}
then setAttr set n (f set.${n} set2.${n})
else set )
(set2 // set1) (attrNames set2);
@ -196,7 +196,7 @@ rec {
mergeAttrsNoOverride = { mergeLists ? ["buildInputs" "propagatedBuildInputs"],
overrideSnd ? [ "buildPhase" ]
}: attrs1: attrs2:
fold (n: set:
foldr (n: set:
setAttr set n ( if set ? ${n}
then # merge
if elem n mergeLists # attribute contains list, merge them by concatenating
@ -224,7 +224,7 @@ rec {
mergeAttrBy2 = { mergeAttrBy = lib.mergeAttrs; }
// (maybeAttr "mergeAttrBy" {} x)
// (maybeAttr "mergeAttrBy" {} y); in
fold lib.mergeAttrs {} [
foldr lib.mergeAttrs {} [
x y
(mapAttrs ( a: v: # merge special names using given functions
if x ? ${a}

View File

@ -1,44 +1,56 @@
{ lib }:
let
spdx = lic: lic // {
url = "https://spdx.org/licenses/${lic.spdxId}.html";
lib.mapAttrs (lname: lset: let
defaultLicense = rec {
shortName = lname;
free = true; # Most of our licenses are Free, explicitly declare unfree additions as such!
deprecated = false;
};
in
lib.mapAttrs (n: v: v // { shortName = n; }) ({
mkLicense = licenseDeclaration: let
applyDefaults = license: defaultLicense // license;
applySpdx = license:
if license ? spdxId
then license // { url = "https://spdx.org/licenses/${license.spdxId}.html"; }
else license;
applyRedistributable = license: { redistributable = license.free; } // license;
in lib.pipe licenseDeclaration [
applyDefaults
applySpdx
applyRedistributable
];
in mkLicense lset) ({
/* License identifiers from spdx.org where possible.
* If you cannot find your license here, then look for a similar license or
* add it to this list. The URL mentioned above is a good source for inspiration.
*/
abstyles = spdx {
abstyles = {
spdxId = "Abstyles";
fullName = "Abstyles License";
};
afl20 = spdx {
afl20 = {
spdxId = "AFL-2.0";
fullName = "Academic Free License v2.0";
};
afl21 = spdx {
afl21 = {
spdxId = "AFL-2.1";
fullName = "Academic Free License v2.1";
};
afl3 = spdx {
afl3 = {
spdxId = "AFL-3.0";
fullName = "Academic Free License v3.0";
};
agpl3Only = spdx {
agpl3Only = {
spdxId = "AGPL-3.0-only";
fullName = "GNU Affero General Public License v3.0 only";
};
agpl3Plus = spdx {
agpl3Plus = {
spdxId = "AGPL-3.0-or-later";
fullName = "GNU Affero General Public License v3.0 or later";
};
@ -55,7 +67,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
apsl20 = spdx {
apsl20 = {
spdxId = "APSL-2.0";
fullName = "Apple Public Source License 2.0";
};
@ -65,72 +77,72 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://www.freedesktop.org/wiki/Arphic_Public_License/";
};
artistic1 = spdx {
artistic1 = {
spdxId = "Artistic-1.0";
fullName = "Artistic License 1.0";
};
artistic2 = spdx {
artistic2 = {
spdxId = "Artistic-2.0";
fullName = "Artistic License 2.0";
};
asl20 = spdx {
asl20 = {
spdxId = "Apache-2.0";
fullName = "Apache License 2.0";
};
boost = spdx {
boost = {
spdxId = "BSL-1.0";
fullName = "Boost Software License 1.0";
};
beerware = spdx {
beerware = {
spdxId = "Beerware";
fullName = "Beerware License";
};
blueOak100 = spdx {
blueOak100 = {
spdxId = "BlueOak-1.0.0";
fullName = "Blue Oak Model License 1.0.0";
};
bsd0 = spdx {
bsd0 = {
spdxId = "0BSD";
fullName = "BSD Zero Clause License";
};
bsd1 = spdx {
bsd1 = {
spdxId = "BSD-1-Clause";
fullName = "BSD 1-Clause License";
};
bsd2 = spdx {
bsd2 = {
spdxId = "BSD-2-Clause";
fullName = ''BSD 2-clause "Simplified" License'';
};
bsd2Patent = spdx {
bsd2Patent = {
spdxId = "BSD-2-Clause-Patent";
fullName = "BSD-2-Clause Plus Patent License";
};
bsd3 = spdx {
bsd3 = {
spdxId = "BSD-3-Clause";
fullName = ''BSD 3-clause "New" or "Revised" License'';
};
bsdOriginal = spdx {
bsdOriginal = {
spdxId = "BSD-4-Clause";
fullName = ''BSD 4-clause "Original" or "Old" License'';
};
bsdOriginalUC = spdx {
bsdOriginalUC = {
spdxId = "BSD-4-Clause-UC";
fullName = "BSD 4-Clause University of California-Specific";
};
bsdProtection = spdx {
bsdProtection = {
spdxId = "BSD-Protection";
fullName = "BSD Protection License";
};
@ -141,119 +153,119 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
clArtistic = spdx {
clArtistic = {
spdxId = "ClArtistic";
fullName = "Clarified Artistic License";
};
cc0 = spdx {
cc0 = {
spdxId = "CC0-1.0";
fullName = "Creative Commons Zero v1.0 Universal";
};
cc-by-nc-sa-20 = spdx {
cc-by-nc-sa-20 = {
spdxId = "CC-BY-NC-SA-2.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 2.0";
free = false;
};
cc-by-nc-sa-25 = spdx {
cc-by-nc-sa-25 = {
spdxId = "CC-BY-NC-SA-2.5";
fullName = "Creative Commons Attribution Non Commercial Share Alike 2.5";
free = false;
};
cc-by-nc-sa-30 = spdx {
cc-by-nc-sa-30 = {
spdxId = "CC-BY-NC-SA-3.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 3.0";
free = false;
};
cc-by-nc-sa-40 = spdx {
cc-by-nc-sa-40 = {
spdxId = "CC-BY-NC-SA-4.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 4.0";
free = false;
};
cc-by-nc-30 = spdx {
cc-by-nc-30 = {
spdxId = "CC-BY-NC-3.0";
fullName = "Creative Commons Attribution Non Commercial 3.0 Unported";
free = false;
};
cc-by-nc-40 = spdx {
cc-by-nc-40 = {
spdxId = "CC-BY-NC-4.0";
fullName = "Creative Commons Attribution Non Commercial 4.0 International";
free = false;
};
cc-by-nd-30 = spdx {
cc-by-nd-30 = {
spdxId = "CC-BY-ND-3.0";
fullName = "Creative Commons Attribution-No Derivative Works v3.00";
free = false;
};
cc-by-sa-25 = spdx {
cc-by-sa-25 = {
spdxId = "CC-BY-SA-2.5";
fullName = "Creative Commons Attribution Share Alike 2.5";
};
cc-by-30 = spdx {
cc-by-30 = {
spdxId = "CC-BY-3.0";
fullName = "Creative Commons Attribution 3.0";
};
cc-by-sa-30 = spdx {
cc-by-sa-30 = {
spdxId = "CC-BY-SA-3.0";
fullName = "Creative Commons Attribution Share Alike 3.0";
};
cc-by-40 = spdx {
cc-by-40 = {
spdxId = "CC-BY-4.0";
fullName = "Creative Commons Attribution 4.0";
};
cc-by-sa-40 = spdx {
cc-by-sa-40 = {
spdxId = "CC-BY-SA-4.0";
fullName = "Creative Commons Attribution Share Alike 4.0";
};
cddl = spdx {
cddl = {
spdxId = "CDDL-1.0";
fullName = "Common Development and Distribution License 1.0";
};
cecill20 = spdx {
cecill20 = {
spdxId = "CECILL-2.0";
fullName = "CeCILL Free Software License Agreement v2.0";
};
cecill-b = spdx {
cecill-b = {
spdxId = "CECILL-B";
fullName = "CeCILL-B Free Software License Agreement";
};
cecill-c = spdx {
cecill-c = {
spdxId = "CECILL-C";
fullName = "CeCILL-C Free Software License Agreement";
};
cpal10 = spdx {
cpal10 = {
spdxId = "CPAL-1.0";
fullName = "Common Public Attribution License 1.0";
};
cpl10 = spdx {
cpl10 = {
spdxId = "CPL-1.0";
fullName = "Common Public License 1.0";
};
curl = spdx {
curl = {
spdxId = "curl";
fullName = "curl License";
};
doc = spdx {
doc = {
spdxId = "DOC";
fullName = "DOC License";
};
@ -264,12 +276,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
efl10 = spdx {
efl10 = {
spdxId = "EFL-1.0";
fullName = "Eiffel Forum License v1.0";
};
efl20 = spdx {
efl20 = {
spdxId = "EFL-2.0";
fullName = "Eiffel Forum License v2.0";
};
@ -280,12 +292,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
epl10 = spdx {
epl10 = {
spdxId = "EPL-1.0";
fullName = "Eclipse Public License 1.0";
};
epl20 = spdx {
epl20 = {
spdxId = "EPL-2.0";
fullName = "Eclipse Public License 2.0";
};
@ -296,42 +308,42 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
eupl11 = spdx {
eupl11 = {
spdxId = "EUPL-1.1";
fullName = "European Union Public License 1.1";
};
eupl12 = spdx {
eupl12 = {
spdxId = "EUPL-1.2";
fullName = "European Union Public License 1.2";
};
fdl11Only = spdx {
fdl11Only = {
spdxId = "GFDL-1.1-only";
fullName = "GNU Free Documentation License v1.1 only";
};
fdl11Plus = spdx {
fdl11Plus = {
spdxId = "GFDL-1.1-or-later";
fullName = "GNU Free Documentation License v1.1 or later";
};
fdl12Only = spdx {
fdl12Only = {
spdxId = "GFDL-1.2-only";
fullName = "GNU Free Documentation License v1.2 only";
};
fdl12Plus = spdx {
fdl12Plus = {
spdxId = "GFDL-1.2-or-later";
fullName = "GNU Free Documentation License v1.2 or later";
};
fdl13Only = spdx {
fdl13Only = {
spdxId = "GFDL-1.3-only";
fullName = "GNU Free Documentation License v1.3 only";
};
fdl13Plus = spdx {
fdl13Plus = {
spdxId = "GFDL-1.3-or-later";
fullName = "GNU Free Documentation License v1.3 or later";
};
@ -346,7 +358,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
fullName = "Unspecified free software license";
};
ftl = spdx {
ftl = {
spdxId = "FTL";
fullName = "Freetype Project License";
};
@ -362,22 +374,22 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
gpl1Only = spdx {
gpl1Only = {
spdxId = "GPL-1.0-only";
fullName = "GNU General Public License v1.0 only";
};
gpl1Plus = spdx {
gpl1Plus = {
spdxId = "GPL-1.0-or-later";
fullName = "GNU General Public License v1.0 or later";
};
gpl2Only = spdx {
gpl2Only = {
spdxId = "GPL-2.0-only";
fullName = "GNU General Public License v2.0 only";
};
gpl2Classpath = spdx {
gpl2Classpath = {
spdxId = "GPL-2.0-with-classpath-exception";
fullName = "GNU General Public License v2.0 only (with Classpath exception)";
};
@ -392,17 +404,17 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://www.mysql.com/about/legal/licensing/foss-exception";
};
gpl2Plus = spdx {
gpl2Plus = {
spdxId = "GPL-2.0-or-later";
fullName = "GNU General Public License v2.0 or later";
};
gpl3Only = spdx {
gpl3Only = {
spdxId = "GPL-3.0-only";
fullName = "GNU General Public License v3.0 only";
};
gpl3Plus = spdx {
gpl3Plus = {
spdxId = "GPL-3.0-or-later";
fullName = "GNU General Public License v3.0 or later";
};
@ -412,12 +424,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://fedoraproject.org/wiki/Licensing/GPL_Classpath_Exception";
};
hpnd = spdx {
hpnd = {
spdxId = "HPND";
fullName = "Historic Permission Notice and Disclaimer";
};
hpndSellVariant = spdx {
hpndSellVariant = {
fullName = "Historical Permission Notice and Disclaimer - sell variant";
spdxId = "HPND-sell-variant";
};
@ -428,12 +440,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://old.calculate-linux.org/packages/licenses/iASL";
};
ijg = spdx {
ijg = {
spdxId = "IJG";
fullName = "Independent JPEG Group License";
};
imagemagick = spdx {
imagemagick = {
fullName = "ImageMagick License";
spdxId = "imagemagick";
};
@ -450,17 +462,17 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
ipa = spdx {
ipa = {
spdxId = "IPA";
fullName = "IPA Font License";
};
ipl10 = spdx {
ipl10 = {
spdxId = "IPL-1.0";
fullName = "IBM Public License v1.0";
};
isc = spdx {
isc = {
spdxId = "ISC";
fullName = "ISC License";
};
@ -478,52 +490,52 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
lgpl2Only = spdx {
lgpl2Only = {
spdxId = "LGPL-2.0-only";
fullName = "GNU Library General Public License v2 only";
};
lgpl2Plus = spdx {
lgpl2Plus = {
spdxId = "LGPL-2.0-or-later";
fullName = "GNU Library General Public License v2 or later";
};
lgpl21Only = spdx {
lgpl21Only = {
spdxId = "LGPL-2.1-only";
fullName = "GNU Lesser General Public License v2.1 only";
};
lgpl21Plus = spdx {
lgpl21Plus = {
spdxId = "LGPL-2.1-or-later";
fullName = "GNU Lesser General Public License v2.1 or later";
};
lgpl3Only = spdx {
lgpl3Only = {
spdxId = "LGPL-3.0-only";
fullName = "GNU Lesser General Public License v3.0 only";
};
lgpl3Plus = spdx {
lgpl3Plus = {
spdxId = "LGPL-3.0-or-later";
fullName = "GNU Lesser General Public License v3.0 or later";
};
lgpllr = spdx {
lgpllr = {
spdxId = "LGPLLR";
fullName = "Lesser General Public License For Linguistic Resources";
};
libpng = spdx {
libpng = {
spdxId = "Libpng";
fullName = "libpng License";
};
libpng2 = spdx {
libpng2 = {
spdxId = "libpng-2.0"; # Used since libpng 1.6.36.
fullName = "PNG Reference Library version 2";
};
libtiff = spdx {
libtiff = {
spdxId = "libtiff";
fullName = "libtiff License";
};
@ -533,22 +545,22 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://opensource.franz.com/preamble.html";
};
llvm-exception = spdx {
llvm-exception = {
spdxId = "LLVM-exception";
fullName = "LLVM Exception"; # LLVM exceptions to the Apache 2.0 License
};
lppl12 = spdx {
lppl12 = {
spdxId = "LPPL-1.2";
fullName = "LaTeX Project Public License v1.2";
};
lppl13c = spdx {
lppl13c = {
spdxId = "LPPL-1.3c";
fullName = "LaTeX Project Public License v1.3c";
};
lpl-102 = spdx {
lpl-102 = {
spdxId = "LPL-1.02";
fullName = "Lucent Public License v1.02";
};
@ -560,43 +572,43 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
# spdx.org does not (yet) differentiate between the X11 and Expat versions
# for details see https://en.wikipedia.org/wiki/MIT_License#Various_versions
mit = spdx {
mit = {
spdxId = "MIT";
fullName = "MIT License";
};
mpl10 = spdx {
mpl10 = {
spdxId = "MPL-1.0";
fullName = "Mozilla Public License 1.0";
};
mpl11 = spdx {
mpl11 = {
spdxId = "MPL-1.1";
fullName = "Mozilla Public License 1.1";
};
mpl20 = spdx {
mpl20 = {
spdxId = "MPL-2.0";
fullName = "Mozilla Public License 2.0";
};
mspl = spdx {
mspl = {
spdxId = "MS-PL";
fullName = "Microsoft Public License";
};
nasa13 = spdx {
nasa13 = {
spdxId = "NASA-1.3";
fullName = "NASA Open Source Agreement 1.3";
free = false;
};
ncsa = spdx {
ncsa = {
spdxId = "NCSA";
fullName = "University of Illinois/NCSA Open Source License";
};
nposl3 = spdx {
nposl3 = {
spdxId = "NPOSL-3.0";
fullName = "Non-Profit Open Software License 3.0";
};
@ -613,53 +625,53 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
odbl = spdx {
odbl = {
spdxId = "ODbL-1.0";
fullName = "Open Data Commons Open Database License v1.0";
};
ofl = spdx {
ofl = {
spdxId = "OFL-1.1";
fullName = "SIL Open Font License 1.1";
};
openldap = spdx {
openldap = {
spdxId = "OLDAP-2.8";
fullName = "Open LDAP Public License v2.8";
};
openssl = spdx {
openssl = {
spdxId = "OpenSSL";
fullName = "OpenSSL License";
};
osl2 = spdx {
osl2 = {
spdxId = "OSL-2.0";
fullName = "Open Software License 2.0";
};
osl21 = spdx {
osl21 = {
spdxId = "OSL-2.1";
fullName = "Open Software License 2.1";
};
osl3 = spdx {
osl3 = {
spdxId = "OSL-3.0";
fullName = "Open Software License 3.0";
};
parity70 = spdx {
parity70 = {
spdxId = "Parity-7.0.0";
fullName = "Parity Public License 7.0.0";
url = "https://paritylicense.com/versions/7.0.0.html";
};
php301 = spdx {
php301 = {
spdxId = "PHP-3.01";
fullName = "PHP License v3.01";
};
postgresql = spdx {
postgresql = {
spdxId = "PostgreSQL";
fullName = "PostgreSQL License";
};
@ -670,7 +682,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
psfl = spdx {
psfl = {
spdxId = "Python-2.0";
fullName = "Python Software Foundation License version 2";
url = "https://docs.python.org/license.html";
@ -691,12 +703,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://prosperitylicense.com/versions/3.0.0.html";
};
qhull = spdx {
qhull = {
spdxId = "Qhull";
fullName = "Qhull License";
};
qpl = spdx {
qpl = {
spdxId = "QPL-1.0";
fullName = "Q Public License 1.0";
};
@ -706,22 +718,22 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://qwt.sourceforge.io/qwtlicense.html";
};
ruby = spdx {
ruby = {
spdxId = "Ruby";
fullName = "Ruby License";
};
sendmail = spdx {
sendmail = {
spdxId = "Sendmail";
fullName = "Sendmail License";
};
sgi-b-20 = spdx {
sgi-b-20 = {
spdxId = "SGI-B-2.0";
fullName = "SGI Free Software License B v2.0";
};
sleepycat = spdx {
sleepycat = {
spdxId = "Sleepycat";
fullName = "Sleepycat License";
};
@ -737,6 +749,10 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
fullName = "Server Side Public License";
url = "https://www.mongodb.com/licensing/server-side-public-license";
free = false;
# NOTE Debatable.
# The license is a slightly modified AGPL, but it is still considered
# unfree by the OSI for what seem like political reasons.
redistributable = true; # Definitely redistributable though, it's an AGPL derivative
};
stk = {
@ -745,7 +761,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://github.com/thestk/stk/blob/master/LICENSE";
};
tcltk = spdx {
tcltk = {
spdxId = "TCL";
fullName = "TCL/TK License";
};
@ -763,25 +779,27 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
unfreeRedistributable = {
fullName = "Unfree redistributable";
free = false;
redistributable = true;
};
unfreeRedistributableFirmware = {
fullName = "Unfree redistributable firmware";
redistributable = true;
# Note: we currently consider these "free" for inclusion in the
# channel and NixOS images.
};
unicode-dfs-2015 = spdx {
unicode-dfs-2015 = {
spdxId = "Unicode-DFS-2015";
fullName = "Unicode License Agreement - Data Files and Software (2015)";
};
unicode-dfs-2016 = spdx {
unicode-dfs-2016 = {
spdxId = "Unicode-DFS-2016";
fullName = "Unicode License Agreement - Data Files and Software (2016)";
};
unlicense = spdx {
unlicense = {
spdxId = "Unlicense";
fullName = "The Unlicense";
};
@ -791,7 +809,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://oss.oracle.com/licenses/upl/";
};
vim = spdx {
vim = {
spdxId = "Vim";
fullName = "Vim License";
};
@ -802,17 +820,17 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
free = false;
};
vsl10 = spdx {
vsl10 = {
spdxId = "VSL-1.0";
fullName = "Vovida Software License v1.0";
};
watcom = spdx {
watcom = {
spdxId = "Watcom-1.0";
fullName = "Sybase Open Watcom Public License 1.0";
};
w3c = spdx {
w3c = {
spdxId = "W3C";
fullName = "W3C Software Notice and License";
};
@ -822,12 +840,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "https://fedoraproject.org/wiki/Licensing:Wadalab?rd=Licensing/Wadalab";
};
wtfpl = spdx {
wtfpl = {
spdxId = "WTFPL";
fullName = "Do What The F*ck You Want To Public License";
};
wxWindows = spdx {
wxWindows = {
spdxId = "wxWindows";
fullName = "wxWindows Library Licence, Version 3.1";
};
@ -837,68 +855,68 @@ lib.mapAttrs (n: v: v // { shortName = n; }) ({
url = "http://mcj.sourceforge.net/authors.html#xfig"; # https is broken
};
zlib = spdx {
zlib = {
spdxId = "Zlib";
fullName = "zlib License";
};
zpl20 = spdx {
zpl20 = {
spdxId = "ZPL-2.0";
fullName = "Zope Public License 2.0";
};
zpl21 = spdx {
zpl21 = {
spdxId = "ZPL-2.1";
fullName = "Zope Public License 2.1";
};
} // {
# TODO: remove legacy aliases
agpl3 = spdx {
agpl3 = {
spdxId = "AGPL-3.0";
fullName = "GNU Affero General Public License v3.0";
deprecated = true;
};
fdl11 = spdx {
fdl11 = {
spdxId = "GFDL-1.1";
fullName = "GNU Free Documentation License v1.1";
deprecated = true;
};
fdl12 = spdx {
fdl12 = {
spdxId = "GFDL-1.2";
fullName = "GNU Free Documentation License v1.2";
deprecated = true;
};
fdl13 = spdx {
fdl13 = {
spdxId = "GFDL-1.3";
fullName = "GNU Free Documentation License v1.3";
deprecated = true;
};
gpl1 = spdx {
gpl1 = {
spdxId = "GPL-1.0";
fullName = "GNU General Public License v1.0";
deprecated = true;
};
gpl2 = spdx {
gpl2 = {
spdxId = "GPL-2.0";
fullName = "GNU General Public License v2.0";
deprecated = true;
};
gpl3 = spdx {
gpl3 = {
spdxId = "GPL-3.0";
fullName = "GNU General Public License v3.0";
deprecated = true;
};
lgpl2 = spdx {
lgpl2 = {
spdxId = "LGPL-2.0";
fullName = "GNU Library General Public License v2";
deprecated = true;
};
lgpl21 = spdx {
lgpl21 = {
spdxId = "LGPL-2.1";
fullName = "GNU Lesser General Public License v2.1";
deprecated = true;
};
lgpl3 = spdx {
lgpl3 = {
spdxId = "LGPL-3.0";
fullName = "GNU Lesser General Public License v3.0";
deprecated = true;

View File

@ -11,6 +11,7 @@ let
filter
foldl'
head
tail
isAttrs
isBool
isDerivation
@ -144,7 +145,7 @@ rec {
if def.value != first.value then
throw "The option `${showOption loc}' has conflicting definition values:${showDefs [ first def ]}"
else
first) (head defs) defs).value;
first) (head defs) (tail defs)).value;
/* Extracts values of all "value" keys of the given list.

View File

@ -26,22 +26,22 @@ let
# Linux
"aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux"
"armv7l-linux" "i686-linux" "mipsel-linux" "powerpc64-linux"
"powerpc64le-linux" "riscv32-linux" "riscv64-linux" "x86_64-linux"
"m68k-linux" "s390-linux"
"armv7l-linux" "i686-linux" "m68k-linux" "mipsel-linux"
"powerpc64-linux" "powerpc64le-linux" "riscv32-linux"
"riscv64-linux" "s390-linux" "x86_64-linux"
# MMIXware
"mmix-mmixware"
# NetBSD
"aarch64-netbsd" "armv6l-netbsd" "armv7a-netbsd" "armv7l-netbsd"
"i686-netbsd" "mipsel-netbsd" "powerpc-netbsd" "riscv32-netbsd"
"riscv64-netbsd" "x86_64-netbsd"
"i686-netbsd" "m68k-netbsd" "mipsel-netbsd" "powerpc-netbsd"
"riscv32-netbsd" "riscv64-netbsd" "x86_64-netbsd"
# none
"aarch64-none" "arm-none" "armv6l-none" "avr-none" "i686-none" "msp430-none"
"or1k-none" "powerpc-none" "riscv32-none" "riscv64-none" "vc4-none" "m68k-none"
"s390-none" "x86_64-none"
"aarch64-none" "arm-none" "armv6l-none" "avr-none" "i686-none"
"msp430-none" "or1k-none" "m68k-none" "powerpc-none"
"riscv32-none" "riscv64-none" "s390-none" "vc4-none" "x86_64-none"
# OpenBSD
"i686-openbsd" "x86_64-openbsd"

View File

@ -127,9 +127,10 @@ rec {
# GNU build systems assume that older NetBSD architectures are using a.out.
gnuNetBSDDefaultExecFormat = cpu:
if (cpu.family == "x86" && cpu.bits == 32) ||
(cpu.family == "arm" && cpu.bits == 32) ||
(cpu.family == "sparc" && cpu.bits == 32)
if (cpu.family == "arm" && cpu.bits == 32) ||
(cpu.family == "sparc" && cpu.bits == 32) ||
(cpu.family == "m68k" && cpu.bits == 32) ||
(cpu.family == "x86" && cpu.bits == 32)
then execFormats.aout
else execFormats.elf;

View File

@ -315,6 +315,12 @@ rec {
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# >=5.12 fails with:
# drivers/net/ethernet/micrel/ks8851_common.o: in function `ks8851_probe_common':
# ks8851_common.c:(.text+0x179c): undefined reference to `__this_module'
# See: https://lore.kernel.org/netdev/20210116164828.40545-1-marex@denx.de/T/
KS8851_MLL y
'';
};
gcc = {

View File

@ -132,6 +132,16 @@ runTests {
expected = [ 1 1 0 ];
};
testFunctionArgsFunctor = {
expr = functionArgs { __functor = self: { a, b }: null; };
expected = { a = false; b = false; };
};
testFunctionArgsSetFunctionArgs = {
expr = functionArgs (setFunctionArgs (args: args.x) { x = false; });
expected = { x = false; };
};
# STRINGS
testConcatMapStrings = {

View File

@ -29,7 +29,7 @@ with lib.systems.doubles; lib.runTests {
testgnu = mseteq gnu (linux /* ++ kfreebsd ++ ... */);
testillumos = mseteq illumos [ "x86_64-solaris" ];
testlinux = mseteq linux [ "aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux" "armv7l-linux" "i686-linux" "mipsel-linux" "riscv32-linux" "riscv64-linux" "x86_64-linux" "powerpc64-linux" "powerpc64le-linux" "m68k-linux" "s390-linux" ];
testnetbsd = mseteq netbsd [ "aarch64-netbsd" "armv6l-netbsd" "armv7a-netbsd" "armv7l-netbsd" "i686-netbsd" "mipsel-netbsd" "powerpc-netbsd" "riscv32-netbsd" "riscv64-netbsd" "x86_64-netbsd" ];
testnetbsd = mseteq netbsd [ "aarch64-netbsd" "armv6l-netbsd" "armv7a-netbsd" "armv7l-netbsd" "i686-netbsd" "m68k-netbsd" "mipsel-netbsd" "powerpc-netbsd" "riscv32-netbsd" "riscv64-netbsd" "x86_64-netbsd" ];
testopenbsd = mseteq openbsd [ "i686-openbsd" "x86_64-openbsd" ];
testwindows = mseteq windows [ "i686-cygwin" "x86_64-cygwin" "i686-windows" "x86_64-windows" ];
testunix = mseteq unix (linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos ++ cygwin ++ redox);

View File

@ -308,7 +308,7 @@ rec {
info = msg: builtins.trace "INFO: ${msg}";
showWarnings = warnings: res: lib.fold (w: x: warn w x) res warnings;
showWarnings = warnings: res: lib.foldr (w: x: warn w x) res warnings;
## Function annotations
@ -334,7 +334,10 @@ rec {
has the same return type and semantics as builtins.functionArgs.
setFunctionArgs : (a → b) → Map String Bool → (a → b).
*/
functionArgs = f: f.__functionArgs or (builtins.functionArgs f);
functionArgs = f:
if f ? __functor
then f.__functionArgs or (lib.functionArgs (f.__functor f))
else builtins.functionArgs f;
/* Check whether something is a function or something
annotated with function args.

View File

@ -390,6 +390,12 @@
githubId = 1318982;
name = "Anders Claesson";
};
akho = {
name = "Alexander Khodyrev";
email = "a@akho.name";
github = "akho";
githubId = 104951;
};
akru = {
email = "mail@akru.me";
github = "akru";
@ -1412,6 +1418,12 @@
githubId = 10221570;
name = "Bo Bakker";
};
bobby285271 = {
name = "Bobby Rong";
email = "rjl931189261@126.com";
github = "bobby285271";
githubId = 20080233;
};
bobvanderlinden = {
email = "bobvanderlinden@gmail.com";
github = "bobvanderlinden";
@ -1905,6 +1917,12 @@
githubId = 811527;
name = "Christopher Jefferson";
};
chrispickard = {
email = "chrispickard9@gmail.com";
github = "chrispickard";
githubId = 1438690;
name = "Chris Pickard";
};
chrisrosset = {
email = "chris@rosset.org.uk";
github = "chrisrosset";
@ -2271,6 +2289,12 @@
fingerprint = "1C4E F4FE 7F8E D8B7 1E88 CCDF BAB1 D15F B7B4 D4CE";
}];
};
d-xo = {
email = "hi@d-xo.org";
github = "d-xo";
githubId = 6689924;
name = "David Terry";
};
dadada = {
name = "dadada";
email = "dadada@dadada.li";
@ -3225,6 +3249,12 @@
fingerprint = "2D37 1AD2 7E2B BC77 97E1 B759 6C79 278F 3FCD CC02";
}];
};
ereslibre = {
email = "ereslibre@ereslibre.es";
github = "ereslibre";
githubId = 8706;
name = "Rafael Fernández López";
};
ericbmerritt = {
email = "eric@afiniate.com";
github = "ericbmerritt";
@ -3995,6 +4025,16 @@
fingerprint = "5214 2D39 A7CE F8FA 872B CA7F DE62 E1E2 A614 5556";
}];
};
gpanders = {
name = "Gregory Anders";
email = "greg@gpanders.com";
github = "gpanders";
githubId = 8965202;
keys = [{
longkeyid = "rsa2048/0x56E93C2FB6B08BDB";
fingerprint = "B9D5 0EDF E95E ECD0 C135 00A9 56E9 3C2F B6B0 8BDB";
}];
};
gpyh = {
email = "yacine.hmito@gmail.com";
github = "yacinehmito";
@ -4255,6 +4295,12 @@
githubId = 131599;
name = "Martin Weinelt";
};
hexagonal-sun = {
email = "dev@mattleach.net";
github = "hexagonal-sun";
githubId = 222664;
name = "Matthew Leach";
};
hh = {
email = "hh@m-labs.hk";
github = "HarryMakes";
@ -4535,6 +4581,12 @@
githubId = 592849;
name = "Ilya Kolpakov";
};
ilyakooo0 = {
name = "Ilya Kostyuchenko";
email = "ilyakooo0@gmail.com";
github = "ilyakooo0";
githubId = 6209627;
};
imalison = {
email = "IvanMalison@gmail.com";
github = "IvanMalison";
@ -4933,6 +4985,12 @@
fingerprint = "7EB1 C02A B62B B464 6D7C E4AE D1D0 9DE1 69EA 19A0";
}];
};
jgart = {
email = "jgart@dismail.de";
github = "jgarte";
githubId = 47760695;
name = "Jorge Gomez";
};
jgeerds = {
email = "jascha@geerds.org";
github = "jgeerds";
@ -6664,6 +6722,12 @@
githubId = 35892750;
name = "Maxine Aubrey";
};
maxhille = {
email = "mh@lambdasoup.com";
github = "maxhille";
githubId = 693447;
name = "Max Hille";
};
maxhbr = {
email = "nixos@maxhbr.dev";
github = "maxhbr";
@ -6919,6 +6983,12 @@
fingerprint = "3DEE 1C55 6E1C 3DC5 54F5 875A 003F 2096 411B 5F92";
}];
};
michaeladler = {
email = "therisen06@gmail.com";
github = "michaeladler";
githubId = 1575834;
name = "Michael Adler";
};
michaelpj = {
email = "michaelpj@gmail.com";
github = "michaelpj";
@ -7465,6 +7535,12 @@
email = "natedevv@gmail.com";
name = "Nathan Moore";
};
nathanruiz = {
email = "nathanruiz@protonmail.com";
github = "nathanruiz";
githubId = 18604892;
name = "Nathan Ruiz";
};
nathan-gs = {
email = "nathan@nathan.gs";
github = "nathan-gs";
@ -7820,6 +7896,12 @@
githubId = 1839979;
name = "Niklas Thörne";
};
nukaduka = {
email = "ksgokte@gmail.com";
github = "NukaDuka";
githubId = 22592293;
name = "Kartik Gokte";
};
nullx76 = {
email = "nix@xirion.net";
github = "NULLx76";
@ -8510,6 +8592,12 @@
github = "polygon";
githubId = 51489;
};
polykernel = {
email = "81340136+polykernel@users.noreply.github.com";
github = "polykernel";
githubId = 81340136;
name = "polykernel";
};
polyrod = {
email = "dc1mdp@gmail.com";
github = "polyrod";
@ -9592,6 +9680,12 @@
githubId = 1567527;
name = "Sebastian Hyberts";
};
sebtm = {
email = "mail@sebastian-sellmeier.de";
github = "sebtm";
githubId = 17243347;
name = "Sebastian Sellmeier";
};
sellout = {
email = "greg@technomadic.org";
github = "sellout";
@ -10988,6 +11082,12 @@
fingerprint = "E631 8869 586F 99B4 F6E6 D785 5942 58F0 389D 2802";
}];
};
twitchyliquid64 = {
name = "Tom";
email = "twitchyliquid64@ciphersink.net";
github = "twitchyliquid64";
githubId = 6328589;
};
typetetris = {
email = "ericwolf42@mail.com";
github = "typetetris";
@ -11310,10 +11410,6 @@
githubId = 3413119;
name = "Vonfry";
};
vozz = {
email = "oliver.huntuk@gmail.com";
name = "Oliver Hunt";
};
vq = {
email = "vq@erq.se";
name = "Daniel Nilsson";
@ -11644,12 +11740,6 @@
githubId = 1962985;
name = "Vincenzo Mantova";
};
xwvvvvwx = {
email = "davidterry@posteo.de";
github = "xwvvvvwx";
githubId = 6689924;
name = "David Terry";
};
xzfc = {
email = "xzfcpw@gmail.com";
github = "xzfc";
@ -11674,6 +11764,12 @@
githubId = 3705333;
name = "Dmitry V.";
};
yayayayaka = {
email = "nixpkgs@uwu.is";
github = "yayayayaka";
githubId = 73759599;
name = "Lara A.";
};
yegortimoshenko = {
email = "yegortimoshenko@riseup.net";
github = "yegortimoshenko";

View File

@ -30,9 +30,10 @@ EOF
# clear environment here to avoid things like allowing broken builds in
sort -iu "$tmpfile" >> "$broken_config"
env -i maintainers/scripts/haskell/regenerate-hackage-packages.sh
env -i maintainers/scripts/haskell/regenerate-transitive-broken-packages.sh
env -i maintainers/scripts/haskell/regenerate-hackage-packages.sh
clear="env -u HOME -u NIXPKGS_CONFIG"
$clear maintainers/scripts/haskell/regenerate-hackage-packages.sh
$clear maintainers/scripts/haskell/regenerate-transitive-broken-packages.sh
$clear maintainers/scripts/haskell/regenerate-hackage-packages.sh
if [[ "${1:-}" == "--do-commit" ]]; then
git add $broken_config

View File

@ -0,0 +1,21 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p nix curl gnused -I nixpkgs=.
# On Hackage every package description shows a category "Distributions" which
# lists a "NixOS" version.
# This script uploads a csv to hackage which will update the displayed versions
# based on the current versions in nixpkgs. This happens with a simple http
# request.
# For authorization you just need to have any valid hackage account. This
# script uses the `username` and `password-command` field from your
# ~/.cabal/config file.
# e.g. username: maralorn
# password-command: pass hackage.haskell.org (this can be any command, but not an arbitrary shell expression. Like cabal, we only read the first output line and ignore the rest.)
# Those fields are specified under `upload` on the `cabal` man page.
package_list="$(nix-build -A haskell.package-list)/nixos-hackage-packages.csv"
username=$(grep "^username:" ~/.cabal/config | sed "s/^username: //")
password_command=$(grep "^password-command:" ~/.cabal/config | sed "s/^password-command: //")
curl -u "$username:$($password_command | head -n1)" --digest -H "Content-type: text/csv" -T "$package_list" http://hackage.haskell.org/distro/NixOS/packages.csv

View File

@ -1,87 +1,89 @@
# nix name, luarocks name, server, version,luaversion,maintainers
alt-getopt,,,,,arobyn
ansicolors,,,,,
argparse,,,,,
basexx,,,,,
binaryheap,,,,,vcunat
bit32,,,,lua5_1,lblasc
busted,,,,,
cassowary,,,,,marsam alerque
cjson,lua-cjson,,,,
compat53,,,,,vcunat
cosmo,,,,,marsam
coxpcall,,,1.17.0-1,,
cqueues,,,,,vcunat
cyrussasl,,,,,
digestif,,,,lua5_3,
dkjson,,,,,
fifo,,,,,
http,,,,,vcunat
inspect,,,,,
ldbus,,http://luarocks.org/dev,,,
ldoc,,,,,
lgi,,,,,
linenoise,,,,,
ljsyscall,,,,lua5_1,lblasc
lpeg,,,,,vyp
lpeg_patterns,,,,,
lpeglabel,,,,,
lpty,,,,,
lrexlib-gnu,,,,,
lrexlib-pcre,,,,,vyp
lrexlib-posix,,,,,
ltermbox,,,,,
lua-cmsgpack,,,,,
lua-iconv,,,,,
lua-lsp,,http://luarocks.org/dev,,,
lua-messagepack,,,,,
lua-resty-http,,,,,
lua-resty-jwt,,,,,
lua-resty-openidc,,,,,
lua-resty-openssl,,,,,
lua-resty-session,,,,,
lua-term,,,,,
lua-toml,,,,,
lua-zlib,,,,,koral
lua_cliargs,,,,,
luabitop,,,,,
luacheck,,,,,
luacov,,,,,
luadbi,,,,,
luadbi-mysql,,,,,
luadbi-postgresql,,,,,
luadbi-sqlite3,,,,,
luadoc,,,,,
luaepnf,,,,,
luaevent,,,,,
luaexpat,,,1.3.0-1,,arobyn flosse
luaffi,,http://luarocks.org/dev,,,
luafilesystem,,,1.7.0-2,,flosse
lualogging,,,,,
luaossl,,,,lua5_1,
luaposix,,,,,vyp lblasc
luarepl,,,,,
luasec,,,,,flosse
luasocket,,,,,
luasql-sqlite3,,,,,vyp
luassert,,,,,
luasystem,,,,,
luautf8,,,,,pstn
luazip,,,,,
lua-yajl,,,,,pstn
luuid,,,,,
luv,,,,,
lyaml,,,,,lblasc
markdown,,,,,
mediator_lua,,,,,
mpack,,,,,
moonscript,,,,,arobyn
nvim-client,,,,,
penlight,,,,,
rapidjson,,,,,
readline,,,,,
say,,,,,
std__debug,std._debug,,,,
std_normalize,std.normalize,,,,
stdlib,,,,,vyp
vstruct,,,,,
name,server,version,luaversion,maintainers
alt-getopt,,,,arobyn
ansicolors,,,,
bit32,,5.3.0-1,lua5_1,lblasc
argparse,,,,
basexx,,,,
binaryheap,,,,vcunat
busted,,,,
cassowary,,,,marsam alerque
compat53,,0.7-1,,vcunat
cosmo,,,,marsam
coxpcall,,1.17.0-1,,
cqueues,,,,vcunat
cyrussasl,,,,
digestif,,0.2-1,lua5_3,
dkjson,,,,
fifo,,,,
gitsigns.nvim,,,lua5_1,
http,,0.3-0,,vcunat
inspect,,,,
ldbus,http://luarocks.org/dev,,,
ldoc,,,,
lgi,,,,
linenoise,,,,
ljsyscall,,,lua5_1,lblasc
lpeg,,,,vyp
lpeg_patterns,,,,
lpeglabel,,,,
lpty,,,,
lrexlib-gnu,,,,
lrexlib-pcre,,,,vyp
lrexlib-posix,,,,
ltermbox,,,,
lua-cjson,,,,
lua-cmsgpack,,,,
lua-iconv,,,,
lua-lsp,http://luarocks.org/dev,,,
lua-messagepack,,,,
lua-resty-http,,,,
lua-resty-jwt,,,,
lua-resty-openidc,,,,
lua-resty-openssl,,,,
lua-resty-session,,,,
lua-term,,,,
lua-toml,,,,
lua-zlib,,,,koral
lua_cliargs,,,,
luabitop,,,,
luacheck,,,,
luacov,,,,
luadbi,,,,
luadbi-mysql,,,,
luadbi-postgresql,,,,
luadbi-sqlite3,,,,
luadoc,,,,
luaepnf,,,,
luaevent,,,,
luaexpat,,1.3.0-1,,arobyn flosse
luaffi,http://luarocks.org/dev,,,
luafilesystem,,1.7.0-2,,flosse
lualogging,,,,
luaossl,,,lua5_1,
luaposix,,34.1.1-1,,vyp lblasc
luarepl,,,,
luasec,,,,flosse
luasocket,,,,
luasql-sqlite3,,,,vyp
luassert,,,,
luasystem,,,,
luautf8,,,,pstn
luazip,,,,
lua-yajl,,,,pstn
luuid,,,,
luv,,1.30.0-0,,
lyaml,,,,lblasc
markdown,,,,
mediator_lua,,,,
mpack,,,,
moonscript,,,,arobyn
nvim-client,,,,
penlight,,,,
plenary.nvim,,,lua5_1,
rapidjson,,,,
readline,,,,
say,,,,
std._debug,,,,
std.normalize,,,,
stdlib,,,,vyp
vstruct,,,,


View File

@ -28,6 +28,7 @@ from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union, Any, Callable
from urllib.parse import urljoin, urlparse
from tempfile import NamedTemporaryFile
from dataclasses import dataclass
import git
@ -82,6 +83,13 @@ def make_request(url: str) -> urllib.request.Request:
headers["Authorization"] = f"token {token}"
return urllib.request.Request(url, headers=headers)
@dataclass
class PluginDesc:
owner: str
repo: str
branch: str
alias: str
class Repo:
def __init__(
@ -201,15 +209,39 @@ class Editor:
deprecated: Optional[Path] = None,
cache_file: Optional[str] = None,
):
log.debug("get_plugins:", get_plugins)
self.name = name
self.root = root
self.get_plugins = get_plugins
self.generate_nix = generate_nix
self._generate_nix = generate_nix
self.default_in = default_in or root.joinpath(f"{name}-plugin-names")
self.default_out = default_out or root.joinpath("generated.nix")
self.deprecated = deprecated or root.joinpath("deprecated.json")
self.cache_file = cache_file or f"{name}-plugin-cache.json"
def get_current_plugins(self):
"""To fill the cache"""
return get_current_plugins(self)
def load_plugin_spec(self, plugin_file) -> List[PluginDesc]:
return load_plugin_spec(plugin_file)
def generate_nix(self, plugins, outfile):
'''Returns nothing for now, writes directly to outfile'''
self._generate_nix(plugins, outfile)
def get_update(self, input_file: str, outfile: str, proc: int):
return get_update(input_file, outfile, proc, editor=self)
@property
def attr_path(self):
return self.name + "Plugins"
def rewrite_input(self, *args, **kwargs):
return rewrite_input(*args, **kwargs)
class CleanEnvironment(object):
def __enter__(self) -> None:
@ -228,7 +260,9 @@ class CleanEnvironment(object):
def get_current_plugins(editor: Editor) -> List[Plugin]:
with CleanEnvironment():
out = subprocess.check_output(["nix", "eval", "--json", editor.get_plugins])
cmd = ["nix", "eval", "--json", editor.get_plugins]
log.debug("Running command %s", cmd)
out = subprocess.check_output(cmd)
data = json.loads(out)
plugins = []
for name, attr in data.items():
@ -244,12 +278,13 @@ def prefetch_plugin(
alias: Optional[str],
cache: "Optional[Cache]" = None,
) -> Tuple[Plugin, Dict[str, str]]:
log.info("Prefetching plugin %s", repo_name)
log.info(f"Fetching last commit for plugin {user}/{repo_name}@{branch}")
repo = Repo(user, repo_name, branch, alias)
commit, date = repo.latest_commit()
has_submodules = repo.has_submodules()
cached_plugin = cache[commit] if cache else None
if cached_plugin is not None:
log.debug("Cache hit !")
cached_plugin.name = alias or repo_name
cached_plugin.date = date
return cached_plugin, repo.redirect
@ -306,8 +341,7 @@ def check_results(
sys.exit(1)
def parse_plugin_line(line: str) -> Tuple[str, str, str, Optional[str]]:
def parse_plugin_line(line: str) -> PluginDesc:
branch = "master"
alias = None
name, repo = line.split("/")
@ -317,15 +351,15 @@ def parse_plugin_line(line: str) -> Tuple[str, str, str, Optional[str]]:
if "@" in repo:
repo, branch = repo.split("@")
return (name.strip(), repo.strip(), branch.strip(), alias)
return PluginDesc(name.strip(), repo.strip(), branch.strip(), alias)
def load_plugin_spec(plugin_file: str) -> List[Tuple[str, str, str, Optional[str]]]:
def load_plugin_spec(plugin_file: str) -> List[PluginDesc]:
plugins = []
with open(plugin_file) as f:
for line in f:
plugin = parse_plugin_line(line)
if not plugin[0]:
if not plugin.owner:
msg = f"Invalid repository {line}, must be in the format owner/repo[ as alias]"
print(msg, file=sys.stderr)
sys.exit(1)
@ -387,12 +421,11 @@ class Cache:
def prefetch(
args: Tuple[str, str, str, Optional[str]], cache: Cache
args: PluginDesc, cache: Cache
) -> Tuple[str, str, Union[Exception, Plugin], dict]:
assert len(args) == 4
owner, repo, branch, alias = args
owner, repo = args.owner, args.repo
try:
plugin, redirect = prefetch_plugin(owner, repo, branch, alias, cache)
plugin, redirect = prefetch_plugin(owner, repo, args.branch, args.alias, cache)
cache[plugin.commit] = plugin
return (owner, repo, plugin, redirect)
except Exception as e:
@ -433,7 +466,7 @@ def rewrite_input(
with open(input_file, "w") as f:
f.writelines(lines)
# TODO move to Editor ?
def parse_args(editor: Editor):
parser = argparse.ArgumentParser(
description=(
@ -446,7 +479,7 @@ def parse_args(editor: Editor):
dest="add_plugins",
default=[],
action="append",
help=f"Plugin to add to {editor.name}Plugins from Github in the form owner/repo",
help=f"Plugin to add to {editor.attr_path} from Github in the form owner/repo",
)
parser.add_argument(
"--input-names",
@ -493,11 +526,11 @@ def commit(repo: git.Repo, message: str, files: List[Path]) -> None:
def get_update(input_file: str, outfile: str, proc: int, editor: Editor):
cache: Cache = Cache(get_current_plugins(editor), editor.cache_file)
cache: Cache = Cache(editor.get_current_plugins(), editor.cache_file)
_prefetch = functools.partial(prefetch, cache=cache)
def update() -> dict:
plugin_names = load_plugin_spec(input_file)
plugin_names = editor.load_plugin_spec(input_file)
try:
pool = Pool(processes=proc)
@ -522,33 +555,33 @@ def update_plugins(editor: Editor):
log.info("Start updating plugins")
nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True)
update = get_update(args.input_file, args.outfile, args.proc, editor)
update = editor.get_update(args.input_file, args.outfile, args.proc)
redirects = update()
rewrite_input(args.input_file, editor.deprecated, redirects)
editor.rewrite_input(args.input_file, editor.deprecated, redirects)
autocommit = not args.no_commit
if autocommit:
commit(nixpkgs_repo, f"{editor.name}Plugins: update", [args.outfile])
commit(nixpkgs_repo, f"{editor.attr_path}: update", [args.outfile])
if redirects:
update()
if autocommit:
commit(
nixpkgs_repo,
f"{editor.name}Plugins: resolve github repository redirects",
f"{editor.attr_path}: resolve github repository redirects",
[args.outfile, args.input_file, editor.deprecated],
)
for plugin_line in args.add_plugins:
rewrite_input(args.input_file, editor.deprecated, append=(plugin_line + "\n",))
editor.rewrite_input(args.input_file, editor.deprecated, append=(plugin_line + "\n",))
update()
plugin = fetch_plugin_from_pluginline(plugin_line)
if autocommit:
commit(
nixpkgs_repo,
"{editor}Plugins.{name}: init at {version}".format(
"{editor.attr_path}.{name}: init at {version}".format(
editor=editor.name, name=plugin.normalized_name, version=plugin.version
),
[args.outfile, args.input_file],

View File

@ -1,136 +1,179 @@
#!/usr/bin/env nix-shell
#!nix-shell update-luarocks-shell.nix -i bash
#!nix-shell -p nix-prefetch-git luarocks-nix python3 python3Packages.GitPython nix -i python3
# You'll likely want to use
# ``
# nixpkgs $ maintainers/scripts/update-luarocks-packages pkgs/development/lua-modules/generated-packages.nix
# ``
# to update all libraries in that folder.
# to debug, redirect stderr to stdout with 2>&1
# format:
# $ nix run nixpkgs.python3Packages.black -c black update.py
# type-check:
# $ nix run nixpkgs.python3Packages.mypy -c mypy update.py
# linted:
# $ nix run nixpkgs.python3Packages.flake8 -c flake8 --ignore E501,E265,E402 update.py
# stop the script upon C-C
set -eu -o pipefail
import inspect
import os
import tempfile
import shutil
from dataclasses import dataclass
import subprocess
import csv
import logging
CSV_FILE="maintainers/scripts/luarocks-packages.csv"
from typing import List
from pathlib import Path
LOG_LEVELS = {
logging.getLevelName(level): level for level in [
logging.DEBUG, logging.INFO, logging.WARN, logging.ERROR ]
}
log = logging.getLogger()
log.addHandler(logging.StreamHandler())
ROOT = Path(os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))).parent.parent
from pluginupdate import Editor, parse_args, update_plugins, PluginDesc, CleanEnvironment
PKG_LIST="maintainers/scripts/luarocks-packages.csv"
TMP_FILE="$(mktemp)"
# Set in the update-luarocks-shell.nix
NIXPKGS_PATH="$LUAROCKS_NIXPKGS_PATH"
export LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua"
GENERATED_NIXFILE="pkgs/development/lua-modules/generated-packages.nix"
LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua"
# 10 is a pretty arbitrary number of simultaneous jobs, but it is generally
# impolite to hit a webserver with *too* many simultaneous connections :)
PARALLEL_JOBS=1
exit_trap() {
local lc="$BASH_COMMAND" rc=$?
test $rc -eq 0 || echo -e "*** error $rc: $lc.\nGenerated temporary file in $TMP_FILE" >&2
}
print_help() {
echo "Usage: $0 <GENERATED_FILE>"
echo "(most likely pkgs/development/lua-modules/generated-packages.nix)"
echo ""
echo " -c <CSV_FILE> to set the list of luarocks package to generate"
exit 1
}
if [ $# -lt 1 ]; then
print_help
exit 1
fi
trap exit_trap EXIT
while getopts ":hc:" opt; do
case $opt in
h)
print_help
;;
c)
echo "Loading package list from $OPTARG !" >&2
CSV_FILE="$OPTARG"
;;
\?)
echo "Invalid option: -$OPTARG" >&2
;;
esac
shift $((OPTIND - 1))
done
GENERATED_NIXFILE="$1"
HEADER="
/* ${GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
HEADER = """
/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
Regenerate it with:
nixpkgs$ ${0} ${GENERATED_NIXFILE}
nixpkgs$ ./maintainers/scripts/update-luarocks-packages
These packages are manually refined in lua-overrides.nix
You can customize the generated packages in pkgs/development/lua-modules/overrides.nix
*/
{ self, stdenv, lib, fetchurl, fetchgit, pkgs, ... } @ args:
""".format(GENERATED_NIXFILE=GENERATED_NIXFILE)
FOOTER="""
}
/* GENERATED - do not edit this file */
"""
@dataclass
class LuaPlugin:
name: str
version: str
server: str
luaversion: str
maintainers: str
@property
def normalized_name(self) -> str:
return self.name.replace(".", "-")
# rename Editor to LangUpdate/ EcosystemUpdater
class LuaEditor(Editor):
def get_current_plugins(self):
return []
def load_plugin_spec(self, input_file) -> List[PluginDesc]:
luaPackages = []
csvfilename = input_file
log.info("Loading package descriptions from %s", csvfilename)
with open(csvfilename, newline='') as csvfile:
reader = csv.DictReader(csvfile,)
for row in reader:
# name,server,version,luaversion,maintainers
plugin = LuaPlugin(**row)
luaPackages.append(plugin)
return luaPackages
@property
def attr_path(self):
return "luaPackages"
def get_update(self, input_file: str, outfile: str, _: int):
def update() -> dict:
plugin_specs = self.load_plugin_spec(input_file)
self.generate_nix(plugin_specs, outfile)
redirects = []
return redirects
return update
def rewrite_input(self, *args, **kwargs):
# not implemented yet
pass
def generate_nix(
plugins: List[LuaPlugin],
outfilename: str
):
sorted_plugins = sorted(plugins, key=lambda v: v.name.lower())
# plug = {}
# per the manifest at luarocks.org/manifest
def _generate_pkg_nix(plug):
cmd = [ "luarocks", "nix", plug.name]
if plug.server:
cmd.append(f"--only-server={plug.server}")
if plug.maintainers:
cmd.append(f"--maintainers={plug.maintainers}")
if plug.version:
cmd.append(plug.version)
if plug.luaversion:
with CleanEnvironment():
local_pkgs = str(ROOT.resolve())
cmd2 = ["nix-build", "--no-out-link", local_pkgs, "-A", f"{plug.luaversion}"]
log.debug("running %s", cmd2)
lua_drv_path=subprocess.check_output(cmd2, text=True).strip()
cmd.append(f"--lua-dir={lua_drv_path}/bin")
log.debug("running %s", cmd)
output = subprocess.check_output(cmd, text=True)
return output
with tempfile.NamedTemporaryFile("w+") as f:
f.write(HEADER)
f.write("""
{ self, stdenv, lib, fetchurl, fetchgit, ... } @ args:
self: super:
with self;
{
"
""")
FOOTER="
}
/* GENERATED */
"
for plugin in sorted_plugins:
function convert_pkg() {
nix_pkg_name="$1"
lua_pkg_name="$2"
server="$3"
pkg_version="$4"
lua_version="$5"
maintainers="$6"
nix_expr = _generate_pkg_nix(plugin)
f.write(f"{plugin.normalized_name} = {nix_expr}"
)
f.write(FOOTER)
f.flush()
if [ "${nix_pkg_name:0:1}" == "#" ]; then
echo "Skipping comment ${*}" >&2
return
fi
if [ -z "$lua_pkg_name" ]; then
echo "Using nix_name as lua_pkg_name for '$nix_pkg_name'" >&2
lua_pkg_name="$nix_pkg_name"
fi
# if everything went fine, move the generated file to its destination
# using copy since move doesn't work across disks
shutil.copy(f.name, outfilename)
echo "Building expression for $lua_pkg_name (version $pkg_version) from server [$server]" >&2
luarocks_args=(nix)
if [[ -n $server ]]; then
luarocks_args+=("--only-server=$server")
fi
if [[ -n $maintainers ]]; then
luarocks_args+=("--maintainers=$maintainers")
fi
if [[ -n $lua_version ]]; then
lua_drv_path=$(nix-build --no-out-link "$NIXPKGS_PATH" -A "$lua_version")
luarocks_args+=("--lua-dir=$lua_drv_path/bin")
fi
luarocks_args+=("$lua_pkg_name")
if [[ -n $pkg_version ]]; then
luarocks_args+=("$pkg_version")
fi
echo "Running 'luarocks ${luarocks_args[*]}'" >&2
if drv="$nix_pkg_name = $(luarocks "${luarocks_args[@]}")"; then
echo "$drv"
else
echo "Failed to convert $nix_pkg_name" >&2
return 1
fi
}
print(f"updated {outfilename}")
# params needed when called via callPackage
echo "$HEADER" | tee "$TMP_FILE"
def load_plugin_spec():
pass
# Ensure parallel can run our bash function
export -f convert_pkg
export SHELL=bash
# Read each line in the csv file and run convert_pkg for each, in parallel
parallel --group --keep-order --halt now,fail=1 --jobs "$PARALLEL_JOBS" --colsep ',' convert_pkg {} <"$CSV_FILE" | tee -a "$TMP_FILE"
# close the set
echo "$FOOTER" | tee -a "$TMP_FILE"
def main():
cp "$TMP_FILE" "$GENERATED_NIXFILE"
editor = LuaEditor("lua", ROOT, '', generate_nix,
default_in = ROOT.joinpath(PKG_LIST),
default_out = ROOT.joinpath(GENERATED_NIXFILE)
)
args = parse_args(editor)
log.setLevel(LOG_LEVELS[args.debug])
update_plugins(editor)
if __name__ == "__main__":
main()
# vim: set ts=4 sw=4 ft=sh:

View File

@ -6,7 +6,10 @@ set -euf -o pipefail
(
cd pkgs/development/ruby-modules/with-packages
rm -f gemset.nix Gemfile.lock
bundle lock
# Since bundler 2+, the lock command generates a platform-dependent
# Gemfile.lock, hence causing bundix to generate a gemset tied to the
# platform from where it was executed.
BUNDLE_FORCE_RUBY_PLATFORM=1 bundle lock
bundix
mv gemset.nix ../../../top-level/ruby-packages.nix
rm -f Gemfile.lock

View File

@ -114,8 +114,9 @@ with lib.maintainers; {
haskell = {
members = [
maralorn
cdepillabout
expipiplus1
maralorn
sternenseemann
];
scope = "Maintain Haskell packages and infrastructure.";
@ -161,10 +162,19 @@ with lib.maintainers; {
ralith
mjlbach
dandellion
sumnerevans
];
scope = "Maintain the ecosystem around Matrix, a decentralized messenger.";
};
pantheon = {
members = [
davidak
bobby285271
];
scope = "Maintain Pantheon desktop environment and platform.";
};
php = {
members = [
aanderse

View File

@ -12,7 +12,7 @@ let
# E.g. if some `options` came from modules in ${pkgs.customModules}/nix,
# you'd need to include `extraSources = [ pkgs.customModules ]`
prefixesToStrip = map (p: "${toString p}/") ([ ../../.. ] ++ extraSources);
stripAnyPrefixes = lib.flip (lib.fold lib.removePrefix) prefixesToStrip;
stripAnyPrefixes = lib.flip (lib.foldr lib.removePrefix) prefixesToStrip;
optionsDoc = buildPackages.nixosOptionsDoc {
inherit options revision;

View File

@ -0,0 +1,6 @@
# Linking NixOS tests to packages {#sec-linking-nixos-tests-to-packages}
You can link NixOS module tests to the packages that they exercise,
so that the tests can be run automatically during code review when the package gets changed.
This is
[described in the nixpkgs manual](https://nixos.org/manual/nixpkgs/stable/#ssec-nixos-tests-linking).
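
For example, a package can declare the NixOS tests that cover it via `passthru.tests` (a minimal sketch; the package and test names are illustrative):

```nix
{ stdenv, nixosTests }:

stdenv.mkDerivation {
  pname = "myservice";
  version = "1.0";
  # ... src, buildInputs, etc.

  # Link the package to its NixOS test, so tooling and reviewers
  # can find and run the test when the package changes.
  passthru.tests = {
    inherit (nixosTests) myservice; # refers to nixos/tests/myservice.nix
  };
}
```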

View File

@ -16,4 +16,5 @@ xlink:href="https://github.com/NixOS/nixpkgs/tree/master/nixos/tests">nixos/test
<xi:include href="../from_md/development/writing-nixos-tests.section.xml" />
<xi:include href="../from_md/development/running-nixos-tests.section.xml" />
<xi:include href="../from_md/development/running-nixos-tests-interactively.section.xml" />
<xi:include href="../from_md/development/linking-nixos-tests-to-packages.section.xml" />
</chapter>

View File

@ -5,7 +5,7 @@ when developing or debugging a test:
```ShellSession
$ nix-build nixos/tests/login.nix -A driverInteractive
$ ./result/bin/nixos-test-driver
$ ./result/bin/nixos-test-driver --interactive
starting VDE switch for network 1
>
```
@ -24,20 +24,11 @@ back into the test driver command line upon its completion. This allows
you to inspect the state of the VMs after the test (e.g. to debug the
test script).
To just start and experiment with the VMs, run:
```ShellSession
$ nix-build nixos/tests/login.nix -A driverInteractive
$ ./result/bin/nixos-run-vms
```
The script `nixos-run-vms` starts the virtual machines defined by the test.
You can re-use the VM states coming from a previous run by setting the
`--keep-vm-state` flag.
```ShellSession
$ ./result/bin/nixos-run-vms --keep-vm-state
$ ./result/bin/nixos-test-driver --interactive --keep-vm-state
```
The machine state is stored in the `$TMPDIR/vm-state-machinename`

View File

@ -0,0 +1,10 @@
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-linking-nixos-tests-to-packages">
<title>Linking NixOS tests to packages</title>
<para>
You can link NixOS module tests to the packages that they exercise,
so that the tests can be run automatically during code review when
the package gets changed. This is
<link xlink:href="https://nixos.org/manual/nixpkgs/stable/#ssec-nixos-tests-linking">described
in the nixpkgs manual</link>.
</para>
</section>

View File

@ -6,7 +6,7 @@
</para>
<programlisting>
$ nix-build nixos/tests/login.nix -A driverInteractive
$ ./result/bin/nixos-test-driver
$ ./result/bin/nixos-test-driver --interactive
starting VDE switch for network 1
&gt;
</programlisting>
@ -25,23 +25,12 @@ starting VDE switch for network 1
completion. This allows you to inspect the state of the VMs after
the test (e.g. to debug the test script).
</para>
<para>
To just start and experiment with the VMs, run:
</para>
<programlisting>
$ nix-build nixos/tests/login.nix -A driverInteractive
$ ./result/bin/nixos-run-vms
</programlisting>
<para>
The script <literal>nixos-run-vms</literal> starts the virtual
machines defined by the test.
</para>
<para>
You can re-use the VM states coming from a previous run by setting
the <literal>--keep-vm-state</literal> flag.
</para>
<programlisting>
$ ./result/bin/nixos-run-vms --keep-vm-state
$ ./result/bin/nixos-test-driver --interactive --keep-vm-state
</programlisting>
<para>
The machine state is stored in the

View File

@ -125,6 +125,52 @@
<link linkend="opt-services.prometheus.exporters.buildkite-agent.enable">services.prometheus.exporters.buildkite-agent</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/prometheus/influxdb_exporter">influxdb-exporter</link>
a Prometheus exporter that exports metrics received on an
InfluxDB compatible endpoint is now available as
<link linkend="opt-services.prometheus.exporters.influxdb.enable">services.prometheus.exporters.influxdb</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/matrix-discord/mx-puppet-discord">mx-puppet-discord</link>,
a discord puppeting bridge for matrix. Available as
<link linkend="opt-services.mx-puppet-discord.enable">services.mx-puppet-discord</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://www.meshcommander.com/meshcentral2/overview">MeshCentral</link>,
a remote administration service (<quote>TeamViewer but
self-hosted and with more features</quote>) is now available
with a package and a module:
<link linkend="opt-services.meshcentral.enable">services.meshcentral.enable</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/Arksine/moonraker">moonraker</link>,
an API web server for Klipper. Available as
<link linkend="opt-services.moonraker.enable">moonraker</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/influxdata/influxdb">influxdb2</link>,
a scalable datastore for metrics, events, and real-time
analytics. Available as
<link linkend="opt-services.influxdb2.enable">services.influxdb2</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://posativ.org/isso/">isso</link>, a
commenting server similar to Disqus. Available as
<link linkend="opt-services.isso.enable">isso</link>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-21.11-incompatibilities">
@ -136,6 +182,13 @@
from 1.0.4 to 3.0.1
</para>
</listitem>
<listitem>
<para>
The <literal>erigon</literal> ethereum node has moved to a new
database format in <literal>2021-05-04</literal>, and requires
a full resync
</para>
</listitem>
<listitem>
<para>
<literal>services.geoip-updater</literal> was broken and has
@ -555,6 +608,101 @@
6.0.0 to 9.0.0
</para>
</listitem>
<listitem>
<para>
<literal>tt-rss</literal> was upgraded to the commit on
2021-06-21, which has breaking changes. If you use
<literal>services.tt-rss.extraConfig</literal> you should
migrate to the <literal>putenv</literal>-style configuration.
See
<link xlink:href="https://community.tt-rss.org/t/rip-config-php-hello-classes-config-php/4337">this
Discourse post</link> in the tt-rss forums for more details.
</para>
</listitem>
<listitem>
<para>
The following Visual Studio Code extensions were renamed to
keep the naming convention uniform.
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<literal>bbenoist.Nix</literal> -&gt;
<literal>bbenoist.nix</literal>
</para>
</listitem>
<listitem>
<para>
<literal>CoenraadS.bracket-pair-colorizer</literal> -&gt;
<literal>coenraads.bracket-pair-colorizer</literal>
</para>
</listitem>
<listitem>
<para>
<literal>golang.Go</literal> -&gt;
<literal>golang.go</literal>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
<literal>services.uptimed</literal> now uses
<literal>/var/lib/uptimed</literal> as its stateDirectory
instead of <literal>/var/spool/uptimed</literal>. Make sure to
move all files to the new directory.
</para>
</listitem>
<listitem>
<para>
Deprecated package aliases in <literal>emacs.pkgs.*</literal>
have been removed. These aliases were remnants of the old
Emacs package infrastructure. We now use exact upstream names
wherever possible.
</para>
</listitem>
<listitem>
<para>
<literal>programs.neovim.runtime</literal> switched to a
<literal>linkFarm</literal> internally, making it impossible
to use wildcards in the <literal>source</literal> argument.
</para>
</listitem>
<listitem>
<para>
The <literal>openrazer</literal> and
<literal>openrazer-daemon</literal> packages as well as the
<literal>hardware.openrazer</literal> module now require users
to be members of the <literal>openrazer</literal> group
instead of <literal>plugdev</literal>. With this change, users
no longer need be granted the entire set of
<literal>plugdev</literal> group permissions, which can
include permissions other than those required by
<literal>openrazer</literal>. This is desirable from a
security point of view. The setting
<link xlink:href="options.html#opt-services.hardware.openrazer.users"><literal>harware.openrazer.users</literal></link>
can be used to add users to the <literal>openrazer</literal>
group.
</para>
</listitem>
<listitem>
<para>
The <literal>yambar</literal> package has been split into
<literal>yambar</literal> and
<literal>yambar-wayland</literal>, corresponding to the xorg
and wayland backend respectively. Please switch to
<literal>yambar-wayland</literal> if you are on wayland.
</para>
</listitem>
<listitem>
<para>
The <literal>services.minio</literal> module gained an
additional option <literal>consoleAddress</literal>, which
configures the address and port the web UI listens on; it
defaults to <literal>:9001</literal>. To be able to access the
web UI, this port needs to be opened in the firewall.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-21.11-notable-changes">
@ -702,6 +850,37 @@
option.
</para>
</listitem>
<listitem>
<para>
The
<link xlink:href="options.html#opt-services.syncoid.enable">services.syncoid.enable</link>
module now properly drops ZFS permissions after usage. Before
it delegated permissions to whole pools instead of datasets
and didn't clean up after execution. You can manually look
this up for your pools by running
<literal>zfs allow your-pool-name</literal> and use
<literal>zfs unallow syncoid your-pool-name</literal> to clean
this up.
</para>
</listitem>
<listitem>
<para>
ZFS: <literal>latestCompatibleLinuxPackages</literal> is now
exported on the zfs package. One can use
<literal>boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;</literal>
to always track the latest compatible kernel with a given
version of zfs.
</para>
</listitem>
<listitem>
<para>
Nginx will use the value of
<literal>sslTrustedCertificate</literal> if provided for a
virtual host, even if <literal>enableACME</literal> is set.
This is useful for providers not using the same certificate to
sign OCSP responses and server certificates.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -39,10 +39,26 @@ pt-services.clipcat.enable).
- [buildkite-agent-metrics](https://github.com/buildkite/buildkite-agent-metrics), a command-line tool for collecting Buildkite agent metrics, now has a Prometheus exporter available as [services.prometheus.exporters.buildkite-agent](#opt-services.prometheus.exporters.buildkite-agent.enable).
- [influxdb-exporter](https://github.com/prometheus/influxdb_exporter), a Prometheus exporter that exports metrics received on an InfluxDB-compatible endpoint, is now available as [services.prometheus.exporters.influxdb](#opt-services.prometheus.exporters.influxdb.enable).
- [mx-puppet-discord](https://github.com/matrix-discord/mx-puppet-discord), a discord puppeting bridge for matrix. Available as [services.mx-puppet-discord](#opt-services.mx-puppet-discord.enable).
- [MeshCentral](https://www.meshcommander.com/meshcentral2/overview), a remote administration service ("TeamViewer but self-hosted and with more features") is now available with a package and a module: [services.meshcentral.enable](#opt-services.meshcentral.enable)
- [moonraker](https://github.com/Arksine/moonraker), an API web server for Klipper.
Available as [moonraker](#opt-services.moonraker.enable).
- [influxdb2](https://github.com/influxdata/influxdb), a scalable datastore for metrics, events, and real-time analytics. Available as [services.influxdb2](#opt-services.influxdb2.enable).
- [isso](https://posativ.org/isso/), a commenting server similar to Disqus.
Available as [isso](#opt-services.isso.enable)
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
- The `staticjinja` package has been upgraded from 1.0.4 to 3.0.1
- The `erigon` ethereum node has moved to a new database format in `2021-05-04`, and requires a full resync
- `services.geoip-updater` was broken and has been replaced by [services.geoipupdate](options.html#opt-services.geoipupdate.enable).
- PHP 7.3 is no longer supported due to upstream not supporting this version for the entire lifecycle of the 21.11 release.
@ -142,6 +158,27 @@ pt-services.clipcat.enable).
- the `mingw-64` package has been upgraded from 6.0.0 to 9.0.0
- `tt-rss` was upgraded to the commit on 2021-06-21, which has breaking changes. If you use `services.tt-rss.extraConfig` you should migrate to the `putenv`-style configuration. See [this Discourse post](https://community.tt-rss.org/t/rip-config-php-hello-classes-config-php/4337) in the tt-rss forums for more details.
- The following Visual Studio Code extensions were renamed to keep the naming convention uniform.
- `bbenoist.Nix` -> `bbenoist.nix`
- `CoenraadS.bracket-pair-colorizer` -> `coenraads.bracket-pair-colorizer`
- `golang.Go` -> `golang.go`
- `services.uptimed` now uses `/var/lib/uptimed` as its stateDirectory instead of `/var/spool/uptimed`. Make sure to move all files to the new directory.
- Deprecated package aliases in `emacs.pkgs.*` have been removed. These aliases were remnants of the old Emacs package infrastructure. We now use exact upstream names wherever possible.
- `programs.neovim.runtime` switched to a `linkFarm` internally, making it impossible to use wildcards in the `source` argument.
- The `openrazer` and `openrazer-daemon` packages as well as the `hardware.openrazer` module now require users to be members of the `openrazer` group instead of `plugdev`. With this change, users no longer need be granted the entire set of `plugdev` group permissions, which can include permissions other than those required by `openrazer`. This is desirable from a security point of view. The setting [`hardware.openrazer.users`](options.html#opt-services.hardware.openrazer.users) can be used to add users to the `openrazer` group.
- The `yambar` package has been split into `yambar` and `yambar-wayland`, corresponding to the xorg and wayland backend respectively. Please switch to `yambar-wayland` if you are on wayland.
- The `services.minio` module gained an additional option `consoleAddress`, which
configures the address and port the web UI listens on; it defaults to `:9001`.
To be able to access the web UI, this port needs to be opened in the firewall.
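
  A configuration sketch using the new option (values illustrative):

  ```nix
  services.minio = {
    enable = true;
    consoleAddress = ":9001";
  };
  # The console port must be opened explicitly to reach the web UI.
  networking.firewall.allowedTCPPorts = [ 9001 ];
  ```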
## Other Notable Changes {#sec-release-21.11-notable-changes}
- The default value of [`services.openssh.logLevel`](options.html#opt-services.openssh.logLevel) has been changed from `"VERBOSE"` to `"INFO"`. This brings NixOS in line with upstream and other Linux distributions, and reduces log spam on servers due to bruteforcing botnets.
@ -183,3 +220,9 @@ pt-services.clipcat.enable).
- NSS modules which should come after `dns` should use mkAfter.
- The [networking.wireless.iwd](options.html#opt-networking.wireless.iwd.enable) module has a new [networking.wireless.iwd.settings](options.html#opt-networking.wireless.iwd.settings) option.
- The [services.syncoid.enable](options.html#opt-services.syncoid.enable) module now properly drops ZFS permissions after usage. Before it delegated permissions to whole pools instead of datasets and didn't clean up after execution. You can manually look this up for your pools by running `zfs allow your-pool-name` and use `zfs unallow syncoid your-pool-name` to clean this up.
- ZFS: `latestCompatibleLinuxPackages` is now exported on the zfs package. One can use `boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;` to always track the latest compatible kernel with a given version of zfs.
- Nginx will use the value of `sslTrustedCertificate` if provided for a virtual host, even if `enableACME` is set. This is useful for providers not using the same certificate to sign OCSP responses and server certificates.
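
  For example (a sketch; the host name and certificate path are illustrative):

  ```nix
  services.nginx.virtualHosts."example.org" = {
    enableACME = true;
    forceSSL = true;
    # Chain used to verify OCSP responses; now honoured even with ACME enabled.
    sslTrustedCertificate = "/var/lib/acme/example.org/chain.pem";
  };
  ```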

101
nixos/lib/test-driver/test-driver.py Normal file → Executable file
View File

@ -24,7 +24,6 @@ import sys
import telnetlib
import tempfile
import time
import traceback
import unicodedata
CHAR_TO_KEY = {
@ -930,29 +929,16 @@ def join_all() -> None:
machine.wait_for_shutdown()
def test_script() -> None:
exec(os.environ["testScript"])
def run_tests() -> None:
def run_tests(interactive: bool = False) -> None:
global machines
tests = os.environ.get("tests", None)
if tests is not None:
with log.nested("running the VM test script"):
try:
exec(tests, globals())
except Exception as e:
eprint("error: ")
traceback.print_exc()
sys.exit(1)
if interactive:
ptpython.repl.embed(globals(), locals())
else:
ptpython.repl.embed(locals(), globals())
# TODO: Collect coverage data
for machine in machines:
if machine.is_up():
machine.execute("sync")
test_script()
# TODO: Collect coverage data
for machine in machines:
if machine.is_up():
machine.execute("sync")
def serial_stdout_on() -> None:
@ -965,6 +951,31 @@ def serial_stdout_off() -> None:
log._print_serial_logs = False
class EnvDefault(argparse.Action):
"""An argpars Action that takes values from the specified
environment variable as the flags default value.
"""
def __init__(self, envvar, required=False, default=None, nargs=None, **kwargs): # type: ignore
if not default and envvar:
if envvar in os.environ:
if nargs is not None and (nargs.isdigit() or nargs in ["*", "+"]):
default = os.environ[envvar].split()
else:
default = os.environ[envvar]
kwargs["help"] = (
kwargs["help"] + f" (default from environment: {default})"
)
if required and default:
required = False
super(EnvDefault, self).__init__(
default=default, required=required, nargs=nargs, **kwargs
)
def __call__(self, parser, namespace, values, option_string=None): # type: ignore
setattr(namespace, self.dest, values)
@contextmanager
def subtest(name: str) -> Iterator[None]:
with log.nested(name):
@ -986,18 +997,52 @@ if __name__ == "__main__":
help="re-use a VM state coming from a previous run",
action="store_true",
)
(cli_args, vm_scripts) = arg_parser.parse_known_args()
arg_parser.add_argument(
"-I",
"--interactive",
help="drop into a python repl and run the tests interactively",
action="store_true",
)
arg_parser.add_argument(
"--start-scripts",
metavar="START-SCRIPT",
action=EnvDefault,
envvar="startScripts",
nargs="*",
help="start scripts for participating virtual machines",
)
arg_parser.add_argument(
"--vlans",
metavar="VLAN",
action=EnvDefault,
envvar="vlans",
nargs="*",
help="vlans to span by the driver",
)
arg_parser.add_argument(
"testscript",
action=EnvDefault,
envvar="testScript",
help="the test script to run",
type=pathlib.Path,
)
args = arg_parser.parse_args()
global test_script
def test_script() -> None:
with log.nested("running the VM test script"):
exec(pathlib.Path(args.testscript).read_text(), globals())
log = Logger()
vlan_nrs = list(dict.fromkeys(os.environ.get("VLANS", "").split()))
vde_sockets = [create_vlan(v) for v in vlan_nrs]
vde_sockets = [create_vlan(v) for v in args.vlans]
for nr, vde_socket, _, _ in vde_sockets:
os.environ["QEMU_VDE_SOCKET_{}".format(nr)] = vde_socket
machines = [
create_machine({"startCommand": s, "keepVmState": cli_args.keep_vm_state})
for s in vm_scripts
create_machine({"startCommand": s, "keepVmState": args.keep_vm_state})
for s in args.start_scripts
]
machine_eval = [
"{0} = machines[{1}]".format(m.name, idx) for idx, m in enumerate(machines)
@ -1017,6 +1062,6 @@ if __name__ == "__main__":
log.close()
tic = time.time()
run_tests()
run_tests(args.interactive)
toc = time.time()
print("test script finished in {:.2f}s".format(toc - tic))

View File

@ -83,7 +83,10 @@ rec {
''
mkdir -p $out
LOGFILE=/dev/null tests='exec(os.environ["testScript"])' ${driver}/bin/nixos-test-driver
# effectively mute the XMLLogger
export LOGFILE=/dev/null
${driver}/bin/nixos-test-driver
'';
passthru = driver.passthru // {
@ -130,9 +133,12 @@ rec {
nodeHostNames = map (c: c.config.system.name) (lib.attrValues nodes);
# TODO: This is an implementation error and needs fixing
# the testing framework cannot legitimately restrict hostnames further
# beyond RFC1035
invalidNodeNames = lib.filter
(node: builtins.match "^[A-z_]([A-z0-9_]+)?$" node == null)
(builtins.attrNames nodes);
nodeHostNames;
testScript' =
# Call the test script with the computed nodes.
@ -146,7 +152,9 @@ rec {
Cannot create machines out of (${lib.concatStringsSep ", " invalidNodeNames})!
All machines are referenced as python variables in the testing framework which will break the
script when special characters are used.
Please stick to alphanumeric chars and underscores as separation.
This is an IMPLEMENTATION ERROR and needs to be fixed. Meanwhile,
please stick to alphanumeric chars and underscores as separation.
''
else lib.warnIf skipLint "Linting is disabled" (runCommand testDriverName
{
@ -161,7 +169,10 @@ rec {
''
mkdir -p $out/bin
vmStartScripts=($(for i in ${toString vms}; do echo $i/bin/run-*-vm; done))
echo -n "$testScript" > $out/test-script
ln -s ${testDriver}/bin/nixos-test-driver $out/bin/nixos-test-driver
${lib.optionalString (!skipLint) ''
PYFLAKES_BUILTINS="$(
echo -n ${lib.escapeShellArg (lib.concatStringsSep "," nodeHostNames)},
@ -169,17 +180,12 @@ rec {
)" ${python3Packages.pyflakes}/bin/pyflakes $out/test-script
''}
ln -s ${testDriver}/bin/nixos-test-driver $out/bin/
vms=($(for i in ${toString vms}; do echo $i/bin/run-*-vm; done))
# set defaults through environment
# see: ./test-driver/test-driver.py argparse implementation
wrapProgram $out/bin/nixos-test-driver \
--add-flags "''${vms[*]}" \
--run "export testScript=\"\$(${coreutils}/bin/cat $out/test-script)\"" \
--set VLANS '${toString vlans}'
ln -s ${testDriver}/bin/nixos-test-driver $out/bin/nixos-run-vms
wrapProgram $out/bin/nixos-run-vms \
--add-flags "''${vms[*]}" \
--set tests 'start_all(); join_all();' \
--set VLANS '${toString vlans}'
--set startScripts "''${vmStartScripts[*]}" \
--set testScript "$out/test-script" \
--set vlans '${toString vlans}'
'');
# Make a full-blown test

View File

@ -396,7 +396,7 @@ let
};
};
idsAreUnique = set: idAttr: !(fold (name: args@{ dup, acc }:
idsAreUnique = set: idAttr: !(foldr (name: args@{ dup, acc }:
let
id = builtins.toString (builtins.getAttr idAttr (builtins.getAttr name set));
exists = builtins.hasAttr id acc;

View File

@ -35,6 +35,14 @@ in {
'';
};
hardware.wirelessRegulatoryDatabase = mkOption {
default = false;
type = types.bool;
description = ''
Load the wireless regulatory database at boot.
'';
};
};
@ -50,6 +58,7 @@ in {
rtl8723bs-firmware
rtl8761b-firmware
rtw88-firmware
rtw89-firmware
zd1211fw
alsa-firmware
sof-firmware
@ -58,6 +67,7 @@ in {
++ optionals (versionOlder config.boot.kernelPackages.kernel.version "4.13") [
rtl8723bs-firmware
];
hardware.wirelessRegulatoryDatabase = true;
})
(mkIf cfg.enableAllFirmware {
assertions = [{
@ -75,5 +85,8 @@ in {
b43FirmwareCutter
] ++ optional (pkgs.stdenv.hostPlatform.isi686 || pkgs.stdenv.hostPlatform.isx86_64) facetimehd-firmware;
})
(mkIf cfg.wirelessRegulatoryDatabase {
hardware.firmware = [ pkgs.wireless-regdb ];
})
];
}

View File

@ -49,7 +49,9 @@ in
{
options = {
hardware.openrazer = {
enable = mkEnableOption "OpenRazer drivers and userspace daemon";
enable = mkEnableOption ''
OpenRazer drivers and userspace daemon.
'';
verboseLogging = mkOption {
type = types.bool;
@ -92,6 +94,15 @@ in
generate a heatmap.
'';
};
users = mkOption {
type = with types; listOf str;
default = [];
description = ''
Usernames to be added to the "openrazer" group, so that they
can start and interact with the OpenRazer userspace daemon.
'';
};
};
};
@ -106,10 +117,12 @@ in
services.udev.packages = [ kernelPackages.openrazer ];
services.dbus.packages = [ dbusServiceFile ];
# A user must be a member of the plugdev group in order to start
# the openrazer-daemon. Therefore we make sure that the plugdev
# group exists.
users.groups.plugdev = {};
# A user must be a member of the openrazer group in order to start
# the openrazer-daemon. Therefore we make sure that the group
# exists.
users.groups.openrazer = {
members = cfg.users;
};
systemd.user.services.openrazer-daemon = {
description = "Daemon to manage razer devices in userspace";

View File

@ -179,28 +179,41 @@ in
You cannot configure both an Intel iGPU and an AMD APU. Pick the one corresponding to your processor.
'';
}
{
assertion = primeEnabled -> pCfg.nvidiaBusId != "" && (pCfg.intelBusId != "" || pCfg.amdgpuBusId != "");
message = ''
When NVIDIA PRIME is enabled, the GPU bus IDs must be configured.
'';
}
{
assertion = offloadCfg.enable -> versionAtLeast nvidia_x11.version "435.21";
message = "NVIDIA PRIME render offload is currently only supported on versions >= 435.21.";
}
{
assertion = !(syncCfg.enable && offloadCfg.enable);
message = "Only one NVIDIA PRIME solution may be used at a time.";
}
{
assertion = !(syncCfg.enable && cfg.powerManagement.finegrained);
message = "Sync precludes powering down the NVIDIA GPU.";
}
{
assertion = cfg.powerManagement.enable -> offloadCfg.enable;
message = "Fine-grained power management requires offload to be enabled.";
}
{
assertion = cfg.powerManagement.enable -> (
builtins.pathExists (cfg.package.out + "/bin/nvidia-sleep.sh") &&
builtins.pathExists (cfg.package.out + "/lib/systemd/system-sleep/nvidia")
);
message = "Required files for driver based power management don't exist.";
}
];
# If Optimus/PRIME is enabled, we:

View File

@ -654,7 +654,11 @@ in
];
fileSystems."/" =
{ fsType = "tmpfs";
# This module is often overlaid onto an existing host config
# that defines `/`. We use mkOverride 60 to override standard
# values, but at the same time leave room for mkForce values
# targeted at the image build.
{ fsType = mkOverride 60 "tmpfs";
options = [ "mode=0755" ];
};
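# As a sketch of what this leaves room for: an image build can still override
# the root filesystem with mkForce, since priority 50 beats mkOverride 60
# (device value illustrative):
#
#   fileSystems."/" = lib.mkForce {
#     fsType = "ext4";
#     device = "/dev/disk/by-label/nixos";
#   };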

View File

@ -30,7 +30,11 @@ with lib;
else [ pkgs.grub2 pkgs.syslinux ]);
fileSystems."/" =
{ fsType = "tmpfs";
# This module is often overlaid onto an existing host config
# that defines `/`. We use mkOverride 60 to override standard
# values, but at the same time leave room for mkForce values
# targeted at the image build.
{ fsType = mkOverride 60 "tmpfs";
options = [ "mode=0755" ];
};

View File

@ -1,7 +1,7 @@
{
x86_64-linux = "/nix/store/qsgz2hhn6mzlzp53a7pwf9z2pq3l5z6h-nix-2.3.14";
i686-linux = "/nix/store/1yw40bj04lykisw2jilq06lir3k9ga4a-nix-2.3.14";
aarch64-linux = "/nix/store/32yzwmynmjxfrkb6y6l55liaqdrgkj4a-nix-2.3.14";
x86_64-darwin = "/nix/store/06j0vi2d13w4l0p3jsigq7lk4x6gkycj-nix-2.3.14";
aarch64-darwin = "/nix/store/77wi7vpbrghw5rgws25w30bwb8yggnk9-nix-2.3.14";
x86_64-linux = "/nix/store/jhbxh1jwjc3hjhzs9y2hifdn0rmnfwaj-nix-2.3.15";
i686-linux = "/nix/store/9pspwnkdrgzma1l4xlv7arhwa56y16di-nix-2.3.15";
aarch64-linux = "/nix/store/72aqi5g7f4fhgvgafbcqwcpqjgnczj48-nix-2.3.15";
x86_64-darwin = "/nix/store/6p6qwp73dgfkqhynmxrzbx1lcfgfpqal-nix-2.3.15";
aarch64-darwin = "/nix/store/dmq2vksdhssgfl822shd0ky3x5x0klh4-nix-2.3.15";
}

View File

@ -258,8 +258,7 @@ in
environment.systemPackages = []
++ optional cfg.man.enable manual.manpages
++ optionals cfg.doc.enable ([ manual.manualHTML nixos-help ]
++ optionals config.services.xserver.enable [ pkgs.nixos-icons ]);
++ optionals cfg.doc.enable [ manual.manualHTML nixos-help ];
services.getty.helpLine = mkIf cfg.doc.enable (
"\nRun 'nixos-help' for the NixOS manual."

View File

@ -178,7 +178,7 @@ in
radvd = 139;
zookeeper = 140;
dnsmasq = 141;
uhub = 142;
#uhub = 142; # unused
yandexdisk = 143;
mxisd = 144; # was once collectd
consul = 145;
@ -187,6 +187,7 @@ in
#seeks = 148; # removed 2020-06-21
prosody = 149;
i2pd = 150;
systemd-coredump = 151;
systemd-network = 152;
systemd-resolve = 153;
systemd-timesync = 154;
@ -347,6 +348,8 @@ in
#mailman = 316; # removed 2019-08-30
zigbee2mqtt = 317;
# shadow = 318; # unused
hqplayer = 319;
moonraker = 320;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -649,6 +652,8 @@ in
#mailman = 316; # removed 2019-08-30
zigbee2mqtt = 317;
shadow = 318;
hqplayer = 319;
moonraker = 320;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View File

@ -39,7 +39,7 @@ let
if c x then true
else lib.traceSeqN 1 x false;
in traceXIfNot isConfig;
merge = args: fold (def: mergeConfig def.value) {};
merge = args: foldr (def: mergeConfig def.value) {};
};
overlayType = mkOptionType {

View File

@ -103,9 +103,10 @@ in
''
NAME=NixOS
ID=nixos
VERSION="${cfg.version} (${cfg.codeName})"
VERSION="${cfg.release} (${cfg.codeName})"
VERSION_CODENAME=${toLower cfg.codeName}
VERSION_ID="${cfg.version}"
VERSION_ID="${cfg.release}"
BUILD_ID="${cfg.version}"
PRETTY_NAME="NixOS ${cfg.release} (${cfg.codeName})"
LOGO="nix-snowflake"
HOME_URL="https://nixos.org/"

View File

@ -236,6 +236,7 @@
./security/doas.nix
./security/systemd-confinement.nix
./security/tpm2.nix
./services/admin/meshcentral.nix
./services/admin/oxidized.nix
./services/admin/salt/master.nix
./services/admin/salt/minion.nix
@ -243,13 +244,16 @@
./services/amqp/rabbitmq.nix
./services/audio/alsa.nix
./services/audio/botamusique.nix
./services/audio/jack.nix
./services/audio/hqplayerd.nix
./services/audio/icecast.nix
./services/audio/jack.nix
./services/audio/jmusicbot.nix
./services/audio/liquidsoap.nix
./services/audio/mpd.nix
./services/audio/mpdscribble.nix
./services/audio/mopidy.nix
./services/audio/networkaudiod.nix
./services/audio/roon-bridge.nix
./services/audio/roon-server.nix
./services/audio/slimserver.nix
./services/audio/snapserver.nix
@ -317,6 +321,7 @@
./services/databases/foundationdb.nix
./services/databases/hbase.nix
./services/databases/influxdb.nix
./services/databases/influxdb2.nix
./services/databases/memcached.nix
./services/databases/monetdb.nix
./services/databases/mongodb.nix
@ -519,6 +524,7 @@
./services/misc/logkeys.nix
./services/misc/leaps.nix
./services/misc/lidarr.nix
./services/misc/libreddit.nix
./services/misc/lifecycled.nix
./services/misc/mame.nix
./services/misc/matrix-appservice-discord.nix
@ -528,8 +534,11 @@
./services/misc/mbpfan.nix
./services/misc/mediatomb.nix
./services/misc/metabase.nix
./services/misc/moonraker.nix
./services/misc/mwlib.nix
./services/misc/mx-puppet-discord.nix
./services/misc/n8n.nix
./services/misc/nitter.nix
./services/misc/nix-daemon.nix
./services/misc/nix-gc.nix
./services/misc/nix-optimise.nix
@ -633,6 +642,7 @@
./services/network-filesystems/glusterfs.nix
./services/network-filesystems/kbfs.nix
./services/network-filesystems/ipfs.nix
./services/network-filesystems/litestream/default.nix
./services/network-filesystems/netatalk.nix
./services/network-filesystems/nfsd.nix
./services/network-filesystems/openafs/client.nix
@ -929,6 +939,7 @@
./services/wayland/cage.nix
./services/video/epgstation/default.nix
./services/video/mirakurun.nix
./services/video/replay-sorcery.nix
./services/web-apps/atlassian/confluence.nix
./services/web-apps/atlassian/crowd.nix
./services/web-apps/atlassian/jira.nix
@ -949,6 +960,7 @@
./services/web-apps/icingaweb2/icingaweb2.nix
./services/web-apps/icingaweb2/module-monitoring.nix
./services/web-apps/ihatemoney
./services/web-apps/isso.nix
./services/web-apps/jirafeau.nix
./services/web-apps/jitsi-meet.nix
./services/web-apps/keycloak.nix
@ -960,6 +972,7 @@
./services/web-apps/moodle.nix
./services/web-apps/nextcloud.nix
./services/web-apps/nexus.nix
./services/web-apps/node-red.nix
./services/web-apps/plantuml-server.nix
./services/web-apps/plausible.nix
./services/web-apps/pgpkeyserver-lite.nix

View File

@ -27,6 +27,7 @@ in
browser = mkOption {
type = types.str;
default = concatStringsSep " " [
''env XDG_CONFIG_HOME="$PREV_CONFIG_HOME"''
''${pkgs.chromium}/bin/chromium''
''--user-data-dir=''${XDG_DATA_HOME:-$HOME/.local/share}/chromium-captive''
''--proxy-server="socks5://$PROXY"''
@ -111,6 +112,7 @@ in
security.wrappers.captive-browser = {
capabilities = "cap_net_raw+p";
source = pkgs.writeShellScript "captive-browser" ''
export PREV_CONFIG_HOME="$XDG_CONFIG_HOME"
export XDG_CONFIG_HOME=${pkgs.writeTextDir "captive-browser.toml" ''
browser = """${cfg.browser}"""
dhcp-dns = """${cfg.dhcp-dns}"""

View File

@ -7,18 +7,7 @@ let
runtime' = filter (f: f.enable) (attrValues cfg.runtime);
# taken from the etc module
runtime = pkgs.stdenvNoCC.mkDerivation {
name = "runtime";
builder = ../system/etc/make-etc.sh;
preferLocalBuild = true;
allowSubstitutes = false;
sources = map (x: x.source) runtime';
targets = map (x: x.target) runtime';
};
runtime = pkgs.linkFarm "neovim-runtime" (map (x: { name = x.target; path = x.source; }) runtime');
in {
options.programs.neovim = {

View File

@ -14,7 +14,7 @@ let
''
#! ${pkgs.runtimeShell} -e
export DISPLAY="$(systemctl --user show-environment | ${pkgs.gnused}/bin/sed 's/^DISPLAY=\(.*\)/\1/; t; d')"
exec ${askPassword}
exec ${askPassword} "$@"
'';
knownHosts = map (h: getAttr h cfg.knownHosts) (attrNames cfg.knownHosts);

View File

@ -10,8 +10,5 @@ in {
config = mkIf cfg.enable {
security.wrappers.udevil.source = "${lib.getBin pkgs.udevil}/bin/udevil";
systemd.packages = [ pkgs.udevil ];
systemd.services."devmon@".wantedBy = [ "multi-user.target" ];
};
}

View File

@ -278,7 +278,10 @@ in
fi
'';
environment.etc.zinputrc.source = ./zinputrc;
# Bug in nix flakes:
# If we use `.source` here, the path is garbage collected even though we point to it with a symlink;
# see https://github.com/NixOS/nixpkgs/issues/132732
environment.etc.zinputrc.text = builtins.readFile ./zinputrc;
environment.systemPackages =
let

View File

@ -21,15 +21,51 @@ let
# The Group can vary depending on what the user has specified in
# security.acme.certs.<cert>.group on some of the services.
commonServiceConfig = {
Type = "oneshot";
User = "acme";
Group = mkDefault "acme";
UMask = 0022;
StateDirectoryMode = 750;
ProtectSystem = "full";
PrivateTmp = true;
Type = "oneshot";
User = "acme";
Group = mkDefault "acme";
UMask = 0022;
StateDirectoryMode = 750;
ProtectSystem = "strict";
ReadWritePaths = [
"/var/lib/acme"
];
PrivateTmp = true;
WorkingDirectory = "/tmp";
WorkingDirectory = "/tmp";
CapabilityBoundingSet = [ "" ];
DevicePolicy = "closed";
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
PrivateDevices = true;
ProtectClock = true;
ProtectHome = true;
ProtectHostname = true;
ProtectControlGroups = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectProc = "invisible";
ProcSubset = "pid";
RemoveIPC = true;
RestrictAddressFamilies = [
"AF_INET"
"AF_INET6"
];
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter = [
# 1. allow a reasonable set of syscalls
"@system-service"
# 2. and deny unreasonable ones
"~@privileged @resources"
# 3. then allow the required subset within denied groups
"@chown"
];
};
# In order to avoid race conditions creating the CA for selfsigned certs,

View File

@ -406,7 +406,7 @@ let
${let oath = config.security.pam.oath; in optionalString cfg.oathAuth
"auth requisite ${pkgs.oathToolkit}/lib/security/pam_oath.so window=${toString oath.window} usersfile=${toString oath.usersFile} digits=${toString oath.digits}"}
${let yubi = config.security.pam.yubico; in optionalString cfg.yubicoAuth
"auth ${yubi.control} ${pkgs.yubico-pam}/lib/security/pam_yubico.so mode=${toString yubi.mode} ${optionalString (yubi.mode == "client") "id=${toString yubi.id}"} ${optionalString yubi.debug "debug"}"}
"auth ${yubi.control} ${pkgs.yubico-pam}/lib/security/pam_yubico.so mode=${toString yubi.mode} ${optionalString (yubi.challengeResponsePath != null) "chalresp_path=${yubi.challengeResponsePath}"} ${optionalString (yubi.mode == "client") "id=${toString yubi.id}"} ${optionalString yubi.debug "debug"}"}
${optionalString cfg.fprintAuth
"auth sufficient ${pkgs.fprintd}/lib/security/pam_fprintd.so"}
'' +
@ -822,6 +822,16 @@ in
Challenge-Response configurations. See the man-page ykpamcfg(1) for further
details on how to configure offline Challenge-Response validation.
More information can be found <link
xlink:href="https://developers.yubico.com/yubico-pam/Authentication_Using_Challenge-Response.html">here</link>.
'';
};
challengeResponsePath = mkOption {
default = null;
type = types.nullOr types.path;
description = ''
If not null, the path used by the yubico PAM module where the expected challenge response is stored.
More information can be found <link
xlink:href="https://developers.yubico.com/yubico-pam/Authentication_Using_Challenge-Response.html">here</link>.
'';
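A usage sketch combining the new option with the existing ones (the state path is an assumption; the expected responses are created with ykpamcfg(1)):

```nix
security.pam.yubico = {
  enable = true;
  mode = "challenge-response";
  # Directory holding the per-user expected challenge responses (illustrative).
  challengeResponsePath = "/var/lib/yubico";
};
```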

View File

@ -96,8 +96,10 @@ in
users.users.polkituser = {
description = "PolKit daemon";
uid = config.ids.uids.polkituser;
group = "polkituser";
};
users.groups.polkituser = {};
};
}

View File

@ -0,0 +1,53 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.meshcentral;
configFormat = pkgs.formats.json {};
configFile = configFormat.generate "meshcentral-config.json" cfg.settings;
in with lib; {
options.services.meshcentral = with types; {
enable = mkEnableOption "MeshCentral computer management server";
package = mkOption {
description = "MeshCentral package to use. Replacing this may be necessary to add dependencies for extra functionality.";
type = types.package;
default = pkgs.meshcentral;
defaultText = "pkgs.meshcentral";
};
settings = mkOption {
description = ''
Settings for MeshCentral. Refer to upstream documentation for details:
<itemizedlist>
<listitem><para><link xlink:href="https://github.com/Ylianst/MeshCentral/blob/master/meshcentral-config-schema.json">JSON Schema definition</link></para></listitem>
<listitem><para><link xlink:href="https://github.com/Ylianst/MeshCentral/blob/master/sample-config.json">simple sample configuration</link></para></listitem>
<listitem><para><link xlink:href="https://github.com/Ylianst/MeshCentral/blob/master/sample-config-advanced.json">complex sample configuration</link></para></listitem>
<listitem><para><link xlink:href="https://www.meshcommander.com/meshcentral2">Old homepage) with documentation link</link></para></listitem>
</itemizedlist>
'';
type = types.submodule {
freeformType = configFormat.type;
};
example = {
settings = {
WANonly = true;
Cert = "meshcentral.example.com";
TlsOffload = "10.0.0.2,fd42::2";
Port = 4430;
};
domains."".certUrl = "https://meshcentral.example.com/";
};
};
};
config = mkIf cfg.enable {
services.meshcentral.settings.settings.autoBackup.backupPath = lib.mkDefault "/var/lib/meshcentral/backups";
systemd.services.meshcentral = {
wantedBy = ["multi-user.target"];
serviceConfig = {
ExecStart = "${cfg.package}/bin/meshcentral --datapath /var/lib/meshcentral --configfile ${configFile}";
DynamicUser = true;
StateDirectory = "meshcentral";
CacheDirectory = "meshcentral";
};
};
};
meta.maintainers = [ maintainers.lheckemann ];
}

View File

@ -0,0 +1,129 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.hqplayerd;
pkg = pkgs.hqplayerd;
# XXX: This is hard-coded in the distributed binary, don't try to change it.
stateDir = "/var/lib/hqplayer";
configDir = "/etc/hqplayer";
in
{
options = {
services.hqplayerd = {
enable = mkEnableOption "HQPlayer Embedded";
licenseFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Path to the HQPlayer license key file.
Without this, the service will run in trial mode and restart every 30
minutes.
'';
};
auth = {
username = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Username used for HQPlayer's WebUI.
Without this you will need to manually create the credentials after
first start by going to http://your.ip:8088/auth
'';
};
password = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Password used for HQPlayer's WebUI.
Without this you will need to manually create the credentials after
first start by going to http://your.ip:8088/auth
'';
};
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Open TCP port 8088 in the firewall for the server.
'';
};
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = (cfg.auth.username != null -> cfg.auth.password != null)
&& (cfg.auth.password != null -> cfg.auth.username != null);
message = "You must set either both services.hqplayer.auth.username and password, or neither.";
}
];
environment = {
etc = {
"hqplayer/hqplayerd4-key.xml" = mkIf (cfg.licenseFile != null) { source = cfg.licenseFile; };
"modules-load.d/taudio2.conf".source = "${pkg}/etc/modules-load.d/taudio2.conf";
};
systemPackages = [ pkg ];
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ 8088 ];
};
services.udev.packages = [ pkg ];
systemd = {
tmpfiles.rules = [
"d ${configDir} 0755 hqplayer hqplayer - -"
"d ${stateDir} 0755 hqplayer hqplayer - -"
"d ${stateDir}/home 0755 hqplayer hqplayer - -"
];
packages = [ pkg ];
services.hqplayerd = {
wantedBy = [ "multi-user.target" ];
after = [ "systemd-tmpfiles-setup.service" ];
environment.HOME = "${stateDir}/home";
unitConfig.ConditionPathExists = [ configDir stateDir ];
preStart = ''
cp -r "${pkg}/var/lib/hqplayer/web" "${stateDir}"
chmod -R u+wX "${stateDir}/web"
if [ ! -f "${configDir}/hqplayerd.xml" ]; then
echo "creating initial config file"
install -m 0644 "${pkg}/etc/hqplayer/hqplayerd.xml" "${configDir}/hqplayerd.xml"
fi
'' + optionalString (cfg.auth.username != null && cfg.auth.password != null) ''
${pkg}/bin/hqplayerd -s ${cfg.auth.username} ${cfg.auth.password}
'';
};
};
users.groups = {
hqplayer.gid = config.ids.gids.hqplayer;
};
users.users = {
hqplayer = {
description = "hqplayer daemon user";
extraGroups = [ "audio" ];
group = "hqplayer";
uid = config.ids.uids.hqplayer;
};
};
};
}
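A minimal configuration sketch for the module above (credentials and license path illustrative; real secrets should not live in the Nix store):

```nix
services.hqplayerd = {
  enable = true;
  openFirewall = true; # exposes the web UI on TCP 8088
  licenseFile = "/var/secrets/hqplayerd4-key.xml";
  auth = {
    username = "admin";
    password = "change-me";
  };
};
```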

View File

@ -0,0 +1,19 @@
{ config, lib, pkgs, ... }:
with lib;
let
name = "networkaudiod";
cfg = config.services.networkaudiod;
in {
options = {
services.networkaudiod = {
enable = mkEnableOption "Networkaudiod (NAA)";
};
};
config = mkIf cfg.enable {
systemd.packages = [ pkgs.networkaudiod ];
systemd.services.networkaudiod.wantedBy = [ "multi-user.target" ];
};
}

View File

@ -7,28 +7,49 @@ let
cfg = config.services.postgresqlBackup;
postgresqlBackupService = db: dumpCmd:
{
let
compressSuffixes = {
"none" = "";
"gzip" = ".gz";
"zstd" = ".zstd";
};
compressSuffix = getAttr cfg.compression compressSuffixes;
compressCmd = getAttr cfg.compression {
"none" = "cat";
"gzip" = "${pkgs.gzip}/bin/gzip -c";
"zstd" = "${pkgs.zstd}/bin/zstd -c";
};
mkSqlPath = prefix: suffix: "${cfg.location}/${db}${prefix}.sql${suffix}";
curFile = mkSqlPath "" compressSuffix;
prevFile = mkSqlPath ".prev" compressSuffix;
prevFiles = map (mkSqlPath ".prev") (attrValues compressSuffixes);
inProgressFile = mkSqlPath ".in-progress" compressSuffix;
in {
enable = true;
description = "Backup of ${db} database(s)";
requires = [ "postgresql.service" ];
path = [ pkgs.coreutils pkgs.gzip config.services.postgresql.package ];
path = [ pkgs.coreutils config.services.postgresql.package ];
script = ''
set -e -o pipefail
umask 0077 # ensure backup is only readable by postgres user
if [ -e ${cfg.location}/${db}.sql.gz ]; then
mv ${cfg.location}/${db}.sql.gz ${cfg.location}/${db}.prev.sql.gz
if [ -e ${curFile} ]; then
rm -f ${toString prevFiles}
mv ${curFile} ${prevFile}
fi
${dumpCmd} | \
gzip -c > ${cfg.location}/${db}.in-progress.sql.gz
${dumpCmd} \
| ${compressCmd} \
> ${inProgressFile}
mv ${cfg.location}/${db}.in-progress.sql.gz ${cfg.location}/${db}.sql.gz
mv ${inProgressFile} ${curFile}
'';
serviceConfig = {
@ -87,7 +108,7 @@ in {
default = "/var/backup/postgresql";
type = types.path;
description = ''
Location to put the gzipped PostgreSQL database dumps.
Path of directory where the PostgreSQL database dumps will be placed.
'';
};
@ -101,6 +122,14 @@ in {
when no databases were specified.
'';
};
compression = mkOption {
type = types.enum ["none" "gzip" "zstd"];
default = "gzip";
description = ''
The type of compression to use on the generated database dump.
'';
};
};
};
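For instance, switching a backup job to zstd might look like this (database name illustrative):

```nix
services.postgresqlBackup = {
  enable = true;
  databases = [ "nextcloud" ];
  compression = "zstd"; # dumps land in the location directory as <db>.sql.zstd
};
```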

View File

@ -52,7 +52,7 @@ let
use_template = mkOption {
description = "Names of the templates to use for this dataset.";
type = types.listOf (types.enum (attrNames cfg.templates));
default = [];
default = [ ];
};
useTemplate = use_template;
@ -70,116 +70,127 @@ let
processChildrenOnly = process_children_only;
};
# Extract pool names from configured datasets
pools = unique (map (d: head (builtins.match "([^/]+).*" d)) (attrNames cfg.datasets));
# Extract unique dataset names
datasets = unique (attrNames cfg.datasets);
configFile = let
mkValueString = v:
if builtins.isList v then concatStringsSep "," v
else generators.mkValueStringDefault {} v;
# Function to build "zfs allow" and "zfs unallow" commands for the
# filesystems we've delegated permissions to.
buildAllowCommand = zfsAction: permissions: dataset: lib.escapeShellArgs [
# Here we explicitly use the booted system to guarantee the stable API needed by ZFS
"-+/run/booted-system/sw/bin/zfs"
zfsAction
"sanoid"
(concatStringsSep "," permissions)
dataset
];
mkKeyValue = k: v: if v == null then ""
else if k == "processChildrenOnly" then ""
else if k == "useTemplate" then ""
else generators.mkKeyValueDefault { inherit mkValueString; } "=" k v;
in generators.toINI { inherit mkKeyValue; } cfg.settings;
configFile =
let
mkValueString = v:
if builtins.isList v then concatStringsSep "," v
else generators.mkValueStringDefault { } v;
in {
mkKeyValue = k: v:
if v == null then ""
else if k == "processChildrenOnly" then ""
else if k == "useTemplate" then ""
else generators.mkKeyValueDefault { inherit mkValueString; } "=" k v;
in
generators.toINI { inherit mkKeyValue; } cfg.settings;
# Interface
in
{
options.services.sanoid = {
enable = mkEnableOption "Sanoid ZFS snapshotting service";
# Interface
interval = mkOption {
type = types.str;
default = "hourly";
example = "daily";
description = ''
Run sanoid at this interval. The default is to run hourly.
options.services.sanoid = {
enable = mkEnableOption "Sanoid ZFS snapshotting service";
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
interval = mkOption {
type = types.str;
default = "hourly";
example = "daily";
description = ''
Run sanoid at this interval. The default is to run hourly.
datasets = mkOption {
type = types.attrsOf (types.submodule ({config, options, ...}: {
freeformType = datasetSettingsType;
options = commonOptions // datasetOptions;
config.use_template = mkAliasDefinitions (mkDefault options.useTemplate or {});
config.process_children_only = mkAliasDefinitions (mkDefault options.processChildrenOnly or {});
}));
default = {};
description = "Datasets to snapshot.";
};
templates = mkOption {
type = types.attrsOf (types.submodule {
freeformType = datasetSettingsType;
options = commonOptions;
});
default = {};
description = "Templates for datasets.";
};
settings = mkOption {
type = types.attrsOf datasetSettingsType;
description = ''
Free-form settings written directly to the config file. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/blob/master/sanoid.defaults.conf"/>
for allowed values.
'';
};
extraArgs = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--verbose" "--readonly" "--debug" ];
description = ''
Extra arguments to pass to sanoid. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/#sanoid-command-line-options"/>
for allowed options.
'';
};
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
# Implementation
config = mkIf cfg.enable {
services.sanoid.settings = mkMerge [
(mapAttrs' (d: v: nameValuePair ("template_" + d) v) cfg.templates)
(mapAttrs (d: v: v) cfg.datasets)
];
systemd.services.sanoid = {
description = "Sanoid snapshot service";
serviceConfig = {
ExecStartPre = map (pool: lib.escapeShellArgs [
"+/run/booted-system/sw/bin/zfs" "allow"
"sanoid" "snapshot,mount,destroy" pool
]) pools;
ExecStart = lib.escapeShellArgs ([
"${pkgs.sanoid}/bin/sanoid"
"--cron"
"--configdir" (pkgs.writeTextDir "sanoid.conf" configFile)
] ++ cfg.extraArgs);
ExecStopPost = map (pool: lib.escapeShellArgs [
"+/run/booted-system/sw/bin/zfs" "unallow" "sanoid" pool
]) pools;
User = "sanoid";
Group = "sanoid";
DynamicUser = true;
RuntimeDirectory = "sanoid";
CacheDirectory = "sanoid";
};
# Prevents missing snapshots during DST changes
environment.TZ = "UTC";
after = [ "zfs.target" ];
startAt = cfg.interval;
};
datasets = mkOption {
type = types.attrsOf (types.submodule ({ config, options, ... }: {
freeformType = datasetSettingsType;
options = commonOptions // datasetOptions;
config.use_template = mkAliasDefinitions (mkDefault options.useTemplate or { });
config.process_children_only = mkAliasDefinitions (mkDefault options.processChildrenOnly or { });
}));
default = { };
description = "Datasets to snapshot.";
};
meta.maintainers = with maintainers; [ lopsided98 ];
}
templates = mkOption {
type = types.attrsOf (types.submodule {
freeformType = datasetSettingsType;
options = commonOptions;
});
default = { };
description = "Templates for datasets.";
};
settings = mkOption {
type = types.attrsOf datasetSettingsType;
description = ''
Free-form settings written directly to the config file. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/blob/master/sanoid.defaults.conf"/>
for allowed values.
'';
};
extraArgs = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--verbose" "--readonly" "--debug" ];
description = ''
Extra arguments to pass to sanoid. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/#sanoid-command-line-options"/>
for allowed options.
'';
};
};
# Implementation
config = mkIf cfg.enable {
services.sanoid.settings = mkMerge [
(mapAttrs' (d: v: nameValuePair ("template_" + d) v) cfg.templates)
(mapAttrs (d: v: v) cfg.datasets)
];
systemd.services.sanoid = {
description = "Sanoid snapshot service";
serviceConfig = {
ExecStartPre = (map (buildAllowCommand "allow" [ "snapshot" "mount" "destroy" ]) datasets);
ExecStopPost = (map (buildAllowCommand "unallow" [ "snapshot" "mount" "destroy" ]) datasets);
ExecStart = lib.escapeShellArgs ([
"${pkgs.sanoid}/bin/sanoid"
"--cron"
"--configdir"
(pkgs.writeTextDir "sanoid.conf" configFile)
] ++ cfg.extraArgs);
User = "sanoid";
Group = "sanoid";
DynamicUser = true;
RuntimeDirectory = "sanoid";
CacheDirectory = "sanoid";
};
# Prevents missing snapshots during DST changes
environment.TZ = "UTC";
after = [ "zfs.target" ];
startAt = cfg.interval;
};
};
meta.maintainers = with maintainers; [ lopsided98 ];
}
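Review note: a minimal sketch of how the reworked sanoid options compose; pool, dataset, and retention values here are hypothetical.

```nix
{
  services.sanoid = {
    enable = true;
    # Becomes the [template_backup] section of the generated INI.
    templates.backup = {
      hourly = 36;
      daily = 30;
      monthly = 3;
      autoprune = true;
      autosnap = true;
    };
    # The camelCase aliases are rewritten to sanoid's use_template /
    # process_children_only keys by the mkAliasDefinitions calls above.
    datasets."tank/home" = {
      useTemplate = [ "backup" ];
      recursive = true;
    };
  };
}
```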

View File

@ -5,226 +5,243 @@ with lib;
let
cfg = config.services.syncoid;
# Extract the pool name of a local dataset (any dataset not containing "@")
localPoolName = d: optionals (d != null) (
let m = builtins.match "([^/@]+)[^@]*" d; in
optionals (m != null) m);
# Extract local dataset names (so no datasets containing "@")
localDatasetName = d: optionals (d != null) (
let m = builtins.match "([^/@]+[^@]*)" d; in
optionals (m != null) m
);
# Escape as required by: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
escapeUnitName = name:
lib.concatMapStrings (s: if lib.isList s then "-" else s)
(builtins.split "[^a-zA-Z0-9_.\\-]+" name);
in {
(builtins.split "[^a-zA-Z0-9_.\\-]+" name);
# Interface
# Function to build "zfs allow" and "zfs unallow" commands for the
# filesystems we've delegated permissions to.
buildAllowCommand = zfsAction: permissions: dataset: lib.escapeShellArgs [
# Here we explicitly use the booted system to guarantee the stable API needed by ZFS
"-+/run/booted-system/sw/bin/zfs"
zfsAction
cfg.user
(concatStringsSep "," permissions)
dataset
];
in
{
options.services.syncoid = {
enable = mkEnableOption "Syncoid ZFS synchronization service";
# Interface
interval = mkOption {
type = types.str;
default = "hourly";
example = "*-*-* *:15:00";
description = ''
Run syncoid at this interval. The default is to run hourly.
options.services.syncoid = {
enable = mkEnableOption "Syncoid ZFS synchronization service";
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
interval = mkOption {
type = types.str;
default = "hourly";
example = "*-*-* *:15:00";
description = ''
Run syncoid at this interval. The default is to run hourly.
user = mkOption {
type = types.str;
default = "syncoid";
example = "backup";
description = ''
The user for the service. ZFS privilege delegation will be
automatically configured for any local pools used by syncoid if this
option is set to a user other than root. The user will be given the
"hold" and "send" privileges on any pool that has datasets being sent
and the "create", "mount", "receive", and "rollback" privileges on
any pool that has datasets being received.
'';
};
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
group = mkOption {
type = types.str;
default = "syncoid";
example = "backup";
description = "The group for the service.";
};
user = mkOption {
type = types.str;
default = "syncoid";
example = "backup";
description = ''
The user for the service. ZFS privilege delegation will be
automatically configured for any local pools used by syncoid if this
option is set to a user other than root. The user will be given the
"hold" and "send" privileges on any pool that has datasets being sent
and the "create", "mount", "receive", and "rollback" privileges on
any pool that has datasets being received.
'';
};
sshKey = mkOption {
type = types.nullOr types.path;
# Prevent key from being copied to store
apply = mapNullable toString;
default = null;
description = ''
SSH private key file to use to login to the remote system. Can be
overridden in individual commands.
'';
};
group = mkOption {
type = types.str;
default = "syncoid";
example = "backup";
description = "The group for the service.";
};
commonArgs = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--no-sync-snap" ];
description = ''
Arguments to add to every syncoid command, unless disabled for that
command. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/#syncoid-command-line-options"/>
for available options.
'';
};
sshKey = mkOption {
type = types.nullOr types.path;
# Prevent key from being copied to store
apply = mapNullable toString;
default = null;
description = ''
SSH private key file to use to login to the remote system. Can be
overridden in individual commands.
'';
};
service = mkOption {
type = types.attrs;
default = {};
description = ''
Systemd configuration common to all syncoid services.
'';
};
commonArgs = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--no-sync-snap" ];
description = ''
Arguments to add to every syncoid command, unless disabled for that
command. See
<link xlink:href="https://github.com/jimsalterjrs/sanoid/#syncoid-command-line-options"/>
for available options.
'';
};
commands = mkOption {
type = types.attrsOf (types.submodule ({ name, ... }: {
options = {
source = mkOption {
type = types.str;
example = "pool/dataset";
description = ''
Source ZFS dataset. Can be either local or remote. Defaults to
the attribute name.
'';
};
service = mkOption {
type = types.attrs;
default = { };
description = ''
Systemd configuration common to all syncoid services.
'';
};
target = mkOption {
type = types.str;
example = "user@server:pool/dataset";
description = ''
Target ZFS dataset. Can be either local
(<replaceable>pool/dataset</replaceable>) or remote
(<replaceable>user@server:pool/dataset</replaceable>).
'';
};
recursive = mkEnableOption ''the transfer of child datasets'';
sshKey = mkOption {
type = types.nullOr types.path;
# Prevent key from being copied to store
apply = mapNullable toString;
description = ''
SSH private key file to use to login to the remote system.
Defaults to <option>services.syncoid.sshKey</option> option.
'';
};
sendOptions = mkOption {
type = types.separatedString " ";
default = "";
example = "Lc e";
description = ''
Advanced options to pass to zfs send. Options are specified
without their leading dashes and separated by spaces.
'';
};
recvOptions = mkOption {
type = types.separatedString " ";
default = "";
example = "ux recordsize o compression=lz4";
description = ''
Advanced options to pass to zfs recv. Options are specified
without their leading dashes and separated by spaces.
'';
};
useCommonArgs = mkOption {
type = types.bool;
default = true;
description = ''
Whether to add the configured common arguments to this command.
'';
};
service = mkOption {
type = types.attrs;
default = {};
description = ''
Systemd configuration specific to this syncoid service.
'';
};
extraArgs = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--sshport 2222" ];
description = "Extra syncoid arguments for this command.";
};
commands = mkOption {
type = types.attrsOf (types.submodule ({ name, ... }: {
options = {
source = mkOption {
type = types.str;
example = "pool/dataset";
description = ''
Source ZFS dataset. Can be either local or remote. Defaults to
the attribute name.
'';
};
config = {
source = mkDefault name;
sshKey = mkDefault cfg.sshKey;
target = mkOption {
type = types.str;
example = "user@server:pool/dataset";
description = ''
Target ZFS dataset. Can be either local
(<replaceable>pool/dataset</replaceable>) or remote
(<replaceable>user@server:pool/dataset</replaceable>).
'';
};
}));
default = {};
example = literalExample ''
{
"pool/test".target = "root@target:pool/test";
}
'';
description = "Syncoid commands to run.";
recursive = mkEnableOption ''the transfer of child datasets'';
sshKey = mkOption {
type = types.nullOr types.path;
# Prevent key from being copied to store
apply = mapNullable toString;
description = ''
SSH private key file to use to login to the remote system.
Defaults to <option>services.syncoid.sshKey</option> option.
'';
};
sendOptions = mkOption {
type = types.separatedString " ";
default = "";
example = "Lc e";
description = ''
Advanced options to pass to zfs send. Options are specified
without their leading dashes and separated by spaces.
'';
};
recvOptions = mkOption {
type = types.separatedString " ";
default = "";
example = "ux recordsize o compression=lz4";
description = ''
Advanced options to pass to zfs recv. Options are specified
without their leading dashes and separated by spaces.
'';
};
useCommonArgs = mkOption {
type = types.bool;
default = true;
description = ''
Whether to add the configured common arguments to this command.
'';
};
service = mkOption {
type = types.attrs;
default = { };
description = ''
Systemd configuration specific to this syncoid service.
'';
};
extraArgs = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--sshport 2222" ];
description = "Extra syncoid arguments for this command.";
};
};
config = {
source = mkDefault name;
sshKey = mkDefault cfg.sshKey;
};
}));
default = { };
example = literalExample ''
{
"pool/test".target = "root@target:pool/test";
}
'';
description = "Syncoid commands to run.";
};
};
# Implementation
config = mkIf cfg.enable {
users = {
users = mkIf (cfg.user == "syncoid") {
syncoid = {
group = cfg.group;
isSystemUser = true;
# For syncoid to be able to create /var/lib/syncoid/.ssh/
# and to use custom ssh_config or known_hosts.
home = "/var/lib/syncoid";
createHome = false;
};
};
groups = mkIf (cfg.group == "syncoid") {
syncoid = { };
};
};
# Implementation
config = mkIf cfg.enable {
users = {
users = mkIf (cfg.user == "syncoid") {
syncoid = {
group = cfg.group;
isSystemUser = true;
# For syncoid to be able to create /var/lib/syncoid/.ssh/
# and to use custom ssh_config or known_hosts.
home = "/var/lib/syncoid";
createHome = false;
};
};
groups = mkIf (cfg.group == "syncoid") {
syncoid = {};
};
};
systemd.services = mapAttrs' (name: c:
systemd.services = mapAttrs'
(name: c:
nameValuePair "syncoid-${escapeUnitName name}" (mkMerge [
{ description = "Syncoid ZFS synchronization from ${c.source} to ${c.target}";
{
description = "Syncoid ZFS synchronization from ${c.source} to ${c.target}";
after = [ "zfs.target" ];
startAt = cfg.interval;
# syncoid may need zpool to get feature@extensible_dataset
path = [ "/run/booted-system/sw/bin/" ];
serviceConfig = {
ExecStartPre =
map (pool: lib.escapeShellArgs [
"+/run/booted-system/sw/bin/zfs" "allow"
cfg.user "bookmark,hold,send,snapshot,destroy" pool
# Permissions snapshot and destroy are in case --no-sync-snap is not used
]) (localPoolName c.source) ++
map (pool: lib.escapeShellArgs [
"+/run/booted-system/sw/bin/zfs" "allow"
cfg.user "create,mount,receive,rollback" pool
]) (localPoolName c.target);
# Permissions snapshot and destroy are in case --no-sync-snap is not used
(map (buildAllowCommand "allow" [ "bookmark" "hold" "send" "snapshot" "destroy" ]) (localDatasetName c.source)) ++
(map (buildAllowCommand "allow" [ "create" "mount" "receive" "rollback" ]) (localDatasetName c.target));
ExecStopPost =
# Permissions snapshot and destroy are in case --no-sync-snap is not used
(map (buildAllowCommand "unallow" [ "bookmark" "hold" "send" "snapshot" "destroy" ]) (localDatasetName c.source)) ++
(map (buildAllowCommand "unallow" [ "create" "mount" "receive" "rollback" ]) (localDatasetName c.target));
ExecStart = lib.escapeShellArgs ([ "${pkgs.sanoid}/bin/syncoid" ]
++ optionals c.useCommonArgs cfg.commonArgs
++ optional c.recursive "-r"
++ optionals (c.sshKey != null) [ "--sshkey" c.sshKey ]
++ c.extraArgs
++ [ "--sendoptions" c.sendOptions
"--recvoptions" c.recvOptions
"--no-privilege-elevation"
c.source c.target
]);
++ [
"--sendoptions"
c.sendOptions
"--recvoptions"
c.recvOptions
"--no-privilege-elevation"
c.source
c.target
]);
User = cfg.user;
Group = cfg.group;
StateDirectory = [ "syncoid" ];
@ -240,7 +257,7 @@ in {
# systemd-analyze security | grep syncoid-'*'
AmbientCapabilities = "";
CapabilityBoundingSet = "";
DeviceAllow = ["/dev/zfs"];
DeviceAllow = [ "/dev/zfs" ];
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
@ -266,7 +283,7 @@ in {
BindPaths = [ "/dev/zfs" ];
BindReadOnlyPaths = [ builtins.storeDir "/etc" "/run" "/bin/sh" ];
# Avoid useless mounting of RootDirectory= in the own RootDirectory= of ExecStart='s mount namespace.
InaccessiblePaths = ["-+/run/syncoid/${escapeUnitName name}"];
InaccessiblePaths = [ "-+/run/syncoid/${escapeUnitName name}" ];
MountAPIVFS = true;
# Create RootDirectory= in the host's mount namespace.
RuntimeDirectory = [ "syncoid/${escapeUnitName name}" ];
@ -277,8 +294,14 @@ in {
# perf stat -x, 2>perf.log -e 'syscalls:sys_enter_*' syncoid …
# awk >perf.syscalls -F "," '$1 > 0 {sub("syscalls:sys_enter_","",$3); print $3}' perf.log
# systemd-analyze syscall-filter | grep -v -e '#' | sed -e ':loop; /^[^ ]/N; s/\n //; t loop' | grep $(printf ' -e \\<%s\\>' $(cat perf.syscalls)) | cut -f 1 -d ' '
"~@aio" "~@chown" "~@keyring" "~@memlock" "~@privileged"
"~@resources" "~@setuid" "~@sync" "~@timer"
"~@aio"
"~@chown"
"~@keyring"
"~@memlock"
"~@privileged"
"~@resources"
"~@setuid"
"~@timer"
];
SystemCallArchitectures = "native";
# This is for BindPaths= and BindReadOnlyPaths=
@ -288,8 +311,9 @@ in {
}
cfg.service
c.service
])) cfg.commands;
};
]))
cfg.commands;
};
meta.maintainers = with maintainers; [ julm lopsided98 ];
}
meta.maintainers = with maintainers; [ julm lopsided98 ];
}
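Review note: a minimal sketch of the syncoid interface defined above; host names, datasets, and key paths are hypothetical.

```nix
{
  services.syncoid = {
    enable = true;
    interval = "*-*-* *:15:00";
    commonArgs = [ "--no-sync-snap" ];
    commands."tank/home" = {
      # `source` defaults to the attribute name, "tank/home".
      target = "backup@backup.example.com:pool/home";
      sshKey = "/var/lib/syncoid/.ssh/id_ed25519";
      recursive = true;
    };
  };
}
```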

View File

@ -279,7 +279,7 @@ let
src_plan = plan;
tsformat = timestampFormat;
zend_delay = toString sendDelay;
} // fold (a: b: a // b) {} (
} // foldr (a: b: a // b) {} (
map mkDestAttrs (builtins.attrValues destinations)
);
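Review note: `lib.fold` is an alias of `lib.foldr`, so this rename is behavior-preserving. A self-contained sketch of the right fold it performs over `//` (values hypothetical; assumes `<nixpkgs>` is on NIX_PATH):

```nix
let
  inherit (import <nixpkgs> { }) lib;
in
# foldr f z [ a b c ] == f a (f b (f c z)); with `//` the right-hand
# operand wins on key collisions, so the rightmost attrset takes priority.
lib.foldr (a: b: a // b) { } [ { a = 1; } { b = 2; } { a = 3; } ]
# evaluates to { a = 3; b = 2; }
```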

View File

@ -60,6 +60,45 @@ in {
sha256 = "02r440xcdsgi137k5lmmvp0z5w5fmk8g9mysq5pnysq1wl8sj6mw";
};
};
corefile = mkOption {
description = ''
Custom coredns corefile configuration.
See: <link xlink:href="https://coredns.io/manual/toc/#configuration"/>.
'';
type = types.str;
default = ''
.:${toString ports.dns} {
errors
health :${toString ports.health}
kubernetes ${cfg.clusterDomain} in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :${toString ports.metrics}
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}'';
defaultText = ''
.:${toString ports.dns} {
errors
health :${toString ports.health}
kubernetes ''${config.services.kubernetes.addons.dns.clusterDomain} in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :${toString ports.metrics}
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}'';
};
};
config = mkIf cfg.enable {
@ -151,20 +190,7 @@ in {
namespace = "kube-system";
};
data = {
Corefile = ".:${toString ports.dns} {
errors
health :${toString ports.health}
kubernetes ${cfg.clusterDomain} in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :${toString ports.metrics}
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}";
Corefile = cfg.corefile;
};
};
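Review note: with the Corefile exposed as an option, an override might look like this sketch (the port and upstream resolver are hypothetical; leaving the option unset keeps the default shown above):

```nix
{
  services.kubernetes.addons.dns.corefile = ''
    .:10053 {
      errors
      kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
      }
      forward . 1.1.1.1
      cache 30
    }'';
}
```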

View File

@ -189,7 +189,7 @@ in
# manually paste it in place. Just symlink.
# otherwise, create the target file, ready for users to insert the token
mkdir -p $(dirname ${certmgrAPITokenPath})
mkdir -p "$(dirname "${certmgrAPITokenPath}")"
if [ -f "${cfsslAPITokenPath}" ]; then
ln -fs "${cfsslAPITokenPath}" "${certmgrAPITokenPath}"
else

View File

@ -339,6 +339,9 @@ in
<literal>CI_SERVER_URL=&lt;CI server URL&gt;</literal>
<literal>REGISTRATION_TOKEN=&lt;registration secret&gt;</literal>
WARNING: make sure to use a quoted absolute path,
or it will be copied to the Nix store.
'';
};
registrationFlags = mkOption {
@ -523,7 +526,10 @@ in
};
};
config = mkIf cfg.enable {
warnings = optional (cfg.configFile != null) "services.gitlab-runner.`configFile` is deprecated, please use services.gitlab-runner.`services`.";
warnings = (mapAttrsToList
(n: v: "services.gitlab-runner.services.${n}.`registrationConfigFile` points to a file in Nix Store. You should use quoted absolute path to prevent this.")
(filterAttrs (n: v: isStorePath v.registrationConfigFile) cfg.services))
++ optional (cfg.configFile != null) "services.gitlab-runner.`configFile` is deprecated, please use services.gitlab-runner.`services`.";
environment.systemPackages = [ cfg.package ];
systemd.services.gitlab-runner = {
description = "Gitlab Runner";

View File

@ -61,7 +61,7 @@ in {
port = mkOption {
default = 8080;
type = types.int;
type = types.port;
description = ''
Specifies port number on which the jenkins HTTP interface listens.
The default is 8080.

View File

@ -0,0 +1,53 @@
{ config, lib, pkgs, ... }:
with lib;
let
format = pkgs.formats.json { };
cfg = config.services.influxdb2;
configFile = format.generate "config.json" cfg.settings;
in
{
options = {
services.influxdb2 = {
enable = mkEnableOption "the influxdb2 server";
package = mkOption {
default = pkgs.influxdb2;
defaultText = "pkgs.influxdb2";
description = "influxdb2 derivation to use.";
type = types.package;
};
settings = mkOption {
default = { };
description = "configuration options for influxdb2, see https://docs.influxdata.com/influxdb/v2.0/reference/config-options for details.";
type = format.type;
};
};
};
config = mkIf cfg.enable {
assertions = [{
assertion = !(builtins.hasAttr "bolt-path" cfg.settings) && !(builtins.hasAttr "engine-path" cfg.settings);
message = "services.influxdb2.config: bolt-path and engine-path should not be set as they are managed by systemd";
}];
systemd.services.influxdb2 = {
description = "InfluxDB is an open-source, distributed, time series database";
documentation = [ "https://docs.influxdata.com/influxdb/" ];
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
INFLUXD_CONFIG_PATH = "${configFile}";
};
serviceConfig = {
ExecStart = "${cfg.package}/bin/influxd --bolt-path \${STATE_DIRECTORY}/influxd.bolt --engine-path \${STATE_DIRECTORY}/engine";
StateDirectory = "influxdb2";
DynamicUser = true;
CapabilityBoundingSet = "";
SystemCallFilter = "@system-service";
LimitNOFILE = 65536;
KillMode = "control-group";
Restart = "on-failure";
};
};
};
meta.maintainers = with lib.maintainers; [ nickcao ];
}
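Review note: a minimal sketch of using the new module; the settings keys follow the upstream config reference linked in the option description, and the values are hypothetical. `bolt-path` and `engine-path` must stay unset, as the assertion above enforces.

```nix
{
  services.influxdb2 = {
    enable = true;
    settings = {
      http-bind-address = ":8086";
      reporting-disabled = true;
    };
  };
}
```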

View File

@ -272,7 +272,7 @@ in {
}
(mkIf (cfg.bind != null) { bind = cfg.bind; })
(mkIf (cfg.unixSocket != null) { unixsocket = cfg.unixSocket; unixsocketperm = "${toString cfg.unixSocketPerm}"; })
(mkIf (cfg.slaveOf != null) { slaveof = "${cfg.slaveOf.ip} ${cfg.slaveOf.port}"; })
(mkIf (cfg.slaveOf != null) { slaveof = "${cfg.slaveOf.ip} ${toString cfg.slaveOf.port}"; })
(mkIf (cfg.masterAuth != null) { masterauth = cfg.masterAuth; })
(mkIf (cfg.requirePass != null) { requirepass = cfg.requirePass; })
];

View File

@ -53,6 +53,14 @@ let cfg = config.services.victoriametrics; in
-retentionPeriod ${toString cfg.retentionPeriod} \
${lib.escapeShellArgs cfg.extraOptions}
'';
# victoriametrics 1.59 with ~7GB of data seems to eventually panic when merging files and then
# begins restart-looping forever. Set LimitNOFILE= to a large number to work around this issue.
#
# panic: FATAL: unrecoverable error when merging small parts in the partition "/var/lib/victoriametrics/data/small/2021_08":
# cannot open source part for merging: cannot open values file in stream mode:
# cannot open file "/var/lib/victoriametrics/data/small/2021_08/[...]/values.bin":
# open /var/lib/victoriametrics/data/small/2021_08/[...]/values.bin: too many open files
LimitNOFILE = 1048576;
};
wantedBy = [ "multi-user.target" ];

View File

@ -5,8 +5,8 @@
with lib;
{
meta = {
maintainers = with maintainers; [ ];
meta = with lib; {
maintainers = with maintainers; [ ] ++ teams.pantheon.members;
};
###### interface

View File

@ -266,5 +266,7 @@ in
} // mapAttrs' appConfigToINICompatible cfg.appConfig);
};
meta.maintainers = with lib.maintainers; [ ];
meta = with lib; {
maintainers = with maintainers; [ ] ++ teams.pantheon.members;
};
}

View File

@ -27,6 +27,12 @@
"msbc-alt1-rtl"
]
},
{
"name": "BAA 100",
"no-features": [
"hw-volume"
]
},
{
"name": "JBL Endurance RUN BT",
"no-features": [
@ -190,6 +196,35 @@
"msbc-alt1"
]
},
{
"sysname": "Linux",
"release": "~^5\\.12\\.(1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17)($|[^0-9])"
},
{
"sysname": "Linux",
"release": "~^5\\.12\\.",
"no-features": [
"msbc-alt1"
]
},
{
"sysname": "Linux",
"release": "~^5\\.13\\.(1|2)($|[^0-9])"
},
{
"sysname": "Linux",
"release": "~^5\\.13\\.",
"no-features": [
"msbc-alt1"
]
},
{
"sysname": "Linux",
"release": "~^5\\.14\\.",
"no-features": [
"msbc-alt1"
]
},
{
"no-features": []
}

View File

@ -24,5 +24,15 @@
"name": "libpipewire-module-metadata"
}
],
"jack.properties": {}
"jack.properties": {},
"jack.rules": [
{
"matches": [
{}
],
"actions": {
"update-props": {}
}
}
]
}

View File

@ -59,6 +59,7 @@
"with-pulseaudio": [
"with-audio",
"bluez5",
"bluez5-autoswitch",
"logind",
"restore-stream",
"streams-follow-default"

View File

@ -18,8 +18,8 @@ in
"")
];
meta = {
maintainers = with maintainers; [ ];
meta = with lib; {
maintainers = with maintainers; [ ] ++ teams.pantheon.members;
};
###### interface

View File

@ -6,8 +6,8 @@ with lib;
{
meta = {
maintainers = with maintainers; [ ];
meta = with lib; {
maintainers = with maintainers; [ ] ++ teams.pantheon.members;
};
###### interface

View File

@ -17,7 +17,7 @@ in {
enable = mkEnableOption "Haskell documentation server";
port = mkOption {
type = types.int;
type = types.port;
default = 8080;
description = ''
Port number Hoogle will be listening to.

View File

@ -4,7 +4,10 @@ with lib;
let
pkg = pkgs.sane-backends;
pkg = pkgs.sane-backends.override {
scanSnapDriversUnfree = config.hardware.sane.drivers.scanSnap.enable;
scanSnapDriversPackage = config.hardware.sane.drivers.scanSnap.package;
};
sanedConf = pkgs.writeTextFile {
name = "saned.conf";
@ -98,6 +101,28 @@ in
'';
};
hardware.sane.drivers.scanSnap.enable = mkOption {
type = types.bool;
default = false;
example = true;
description = ''
Whether to enable drivers for the Fujitsu ScanSnap scanners.
The driver files are unfree and extracted from the Windows driver image.
'';
};
hardware.sane.drivers.scanSnap.package = mkOption {
type = types.package;
default = pkgs.sane-drivers.epjitsu;
description = ''
Epjitsu driver package to use. Useful if you want to extract the driver files yourself.
The process is described in the <literal>/etc/sane.d/epjitsu.conf</literal> file in
the <literal>sane-backends</literal> package.
'';
};
services.saned.enable = mkOption {
type = types.bool;
default = false;
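Review note: a minimal sketch of enabling the new ScanSnap driver options (assumes unfree packages are allowed, since the driver files are unfree):

```nix
{
  hardware.sane.enable = true;
  # Routes through the sane-backends override above to pull in the
  # unfree epjitsu driver files.
  hardware.sane.drivers.scanSnap.enable = true;
}
```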

View File

@ -220,7 +220,7 @@ with lib;
after = [ "network.target" ];
preStart = ''
mkdir -p /var/spool/nullmailer/{queue,tmp}
mkdir -p /var/spool/nullmailer/{queue,tmp,failed}
rm -f /var/spool/nullmailer/trigger && mkfifo -m 660 /var/spool/nullmailer/trigger
'';

View File

@ -194,7 +194,7 @@ let
# We need to handle the last column specially here, because it's
# open-ended (command + args).
lines = [ labels labelDefaults ] ++ (map (l: init l ++ [""]) masterCf);
in fold foldLine (genList (const 0) (length labels)) lines;
in foldr foldLine (genList (const 0) (length labels)) lines;
# Pad a string with spaces from the right (opposite of fixedWidthString).
pad = width: str: let
@ -203,7 +203,7 @@ let
in str + optionalString (padWidth > 0) padding;
# It's + 2 here, because that's the amount of spacing between columns.
fullWidth = fold (width: acc: acc + width + 2) 0 maxWidths;
fullWidth = foldr (width: acc: acc + width + 2) 0 maxWidths;
formatLine = line: concatStringsSep " " (zipListsWith pad maxWidths line);

View File

@ -522,20 +522,16 @@ in
(umask 027; gitea_setup)
''}
# run migrations/init the database
${gitea}/bin/gitea migrate
# update all hooks' binary paths
HOOKS=$(find ${cfg.repositoryRoot} -mindepth 4 -maxdepth 6 -type f -wholename "*git/hooks/*")
if [ "$HOOKS" ]
then
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea}/bin/gitea,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/env,${pkgs.coreutils}/bin/env,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/bash,${pkgs.bash}/bin/bash,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/perl,${pkgs.perl}/bin/perl,g' $HOOKS
fi
${gitea}/bin/gitea admin regenerate hooks
# update command option in authorized_keys
if [ -r ${cfg.stateDir}/.ssh/authorized_keys ]
then
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea}/bin/gitea,g' ${cfg.stateDir}/.ssh/authorized_keys
${gitea}/bin/gitea admin regenerate keys
fi
'';

View File

@ -78,7 +78,7 @@ in {
port = mkOption {
default = 8123;
type = types.int;
type = types.port;
description = "The port on which to listen.";
};

View File

@ -30,8 +30,7 @@ in
apiSocket = mkOption {
type = types.nullOr types.path;
default = null;
example = "/run/klipper/api";
default = "/run/klipper/api";
description = "Path of the API socket to create.";
};

View File

@ -0,0 +1,66 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.libreddit;
args = concatStringsSep " " ([
"--port ${toString cfg.port}"
"--address ${cfg.address}"
] ++ optional cfg.redirect "--redirect-https");
in
{
options = {
services.libreddit = {
enable = mkEnableOption "Private front-end for Reddit";
address = mkOption {
default = "0.0.0.0";
example = "127.0.0.1";
type = types.str;
description = "The address to listen on";
};
port = mkOption {
default = 8080;
example = 8000;
type = types.port;
description = "The port to listen on";
};
redirect = mkOption {
type = types.bool;
default = false;
description = "Enable the redirecting to HTTPS";
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = "Open ports in the firewall for the libreddit web interface";
};
};
};
config = mkIf cfg.enable {
systemd.services.libreddit = {
description = "Private front-end for Reddit";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
DynamicUser = true;
ExecStart = "${pkgs.libreddit}/bin/libreddit ${args}";
AmbientCapabilities = lib.mkIf (cfg.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
Restart = "on-failure";
RestartSec = "2s";
};
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.port ];
};
};
}
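Review note: a minimal sketch of the new module; address and port are hypothetical.

```nix
{
  services.libreddit = {
    enable = true;
    address = "127.0.0.1";
    port = 8090;
    # Ports below 1024 get CAP_NET_BIND_SERVICE via the mkIf above.
    openFirewall = false;
  };
}
```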

View File

@ -0,0 +1,135 @@
{ config, lib, pkgs, ... }:
with lib;
let
pkg = pkgs.moonraker;
cfg = config.services.moonraker;
format = pkgs.formats.ini {
# https://github.com/NixOS/nixpkgs/pull/121613#issuecomment-885241996
listToValue = l:
if builtins.length l == 1 then generators.mkValueStringDefault {} (head l)
else lib.concatMapStrings (s: "\n ${generators.mkValueStringDefault {} s}") l;
mkKeyValue = generators.mkKeyValueDefault {} ":";
};
in {
options = {
services.moonraker = {
enable = mkEnableOption "Moonraker, an API web server for Klipper";
klipperSocket = mkOption {
type = types.path;
default = config.services.klipper.apiSocket;
description = "Path to Klipper's API socket.";
};
stateDir = mkOption {
type = types.path;
default = "/var/lib/moonraker";
description = "The directory containing the Moonraker databases.";
};
configDir = mkOption {
type = types.path;
default = cfg.stateDir + "/config";
description = ''
The directory containing client-writable configuration files.
Clients will be able to edit files in this directory via the API. This directory must be writable.
'';
};
user = mkOption {
type = types.str;
default = "moonraker";
description = "User account under which Moonraker runs.";
};
group = mkOption {
type = types.str;
default = "moonraker";
description = "Group account under which Moonraker runs.";
};
address = mkOption {
type = types.str;
default = "127.0.0.1";
example = "0.0.0.0";
description = "The IP or host to listen on.";
};
port = mkOption {
type = types.ints.unsigned;
default = 7125;
description = "The port to listen on.";
};
settings = mkOption {
type = format.type;
default = { };
example = {
authorization = {
trusted_clients = [ "10.0.0.0/24" ];
cors_domains = [ "https://app.fluidd.xyz" ];
};
};
description = ''
Configuration for Moonraker. See the <link xlink:href="https://moonraker.readthedocs.io/en/latest/configuration/">documentation</link>
for supported values.
'';
};
};
};
config = mkIf cfg.enable {
warnings = optional (cfg.settings ? update_manager)
''Enabling update_manager is not supported on NixOS and will lead to non-removable warnings in some clients.'';
users.users = optionalAttrs (cfg.user == "moonraker") {
moonraker = {
group = cfg.group;
uid = config.ids.uids.moonraker;
};
};
users.groups = optionalAttrs (cfg.group == "moonraker") {
moonraker.gid = config.ids.gids.moonraker;
};
environment.etc."moonraker.cfg".source = let
forcedConfig = {
server = {
host = cfg.address;
port = cfg.port;
klippy_uds_address = cfg.klipperSocket;
config_path = cfg.configDir;
database_path = "${cfg.stateDir}/database";
};
};
fullConfig = recursiveUpdate cfg.settings forcedConfig;
in format.generate "moonraker.cfg" fullConfig;
systemd.tmpfiles.rules = [
"d '${cfg.stateDir}' - ${cfg.user} ${cfg.group} - -"
"d '${cfg.configDir}' - ${cfg.user} ${cfg.group} - -"
];
systemd.services.moonraker = {
description = "Moonraker, an API web server for Klipper";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ]
++ optional config.services.klipper.enable "klipper.service";
# Moonraker really wants its own config to be writable...
script = ''
cp /etc/moonraker.cfg ${cfg.configDir}/moonraker-temp.cfg
chmod u+w ${cfg.configDir}/moonraker-temp.cfg
exec ${pkg}/bin/moonraker -c ${cfg.configDir}/moonraker-temp.cfg
'';
serviceConfig = {
WorkingDirectory = cfg.stateDir;
Group = cfg.group;
User = cfg.user;
};
};
};
}
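Review note: a minimal sketch of the new module, loosely following its option examples; addresses are hypothetical. The `server` keys are forced by the module's `forcedConfig`, so only free settings belong here.

```nix
{
  services.moonraker = {
    enable = true;
    address = "0.0.0.0";
    settings = {
      authorization = {
        trusted_clients = [ "127.0.0.1" "10.0.0.0/24" ];
        cors_domains = [ "https://app.fluidd.xyz" ];
      };
    };
  };
}
```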

View File

@ -0,0 +1,120 @@
{ config, pkgs, lib, ... }:
with lib;
let
dataDir = "/var/lib/mx-puppet-discord";
registrationFile = "${dataDir}/discord-registration.yaml";
cfg = config.services.mx-puppet-discord;
settingsFormat = pkgs.formats.json {};
settingsFile = settingsFormat.generate "mx-puppet-discord-config.json" cfg.settings;
in {
options = {
services.mx-puppet-discord = {
enable = mkEnableOption ''
mx-puppet-discord, a Discord puppeting bridge for Matrix.
It handles bridging private and group DMs, as well as Guilds (servers)
'';
settings = mkOption rec {
apply = recursiveUpdate default;
inherit (settingsFormat) type;
default = {
bridge.port = 8434;
presence = {
enabled = true;
interval = 500;
};
provisioning.whitelist = [ ];
relay.whitelist = [ ];
# variables are preceded by a colon.
namePatterns = {
user = ":name";
userOverride = ":displayname";
room = ":name";
group = ":name";
};
#defaults to sqlite but can be configured to use postgresql with
#connstring
database.filename = "${dataDir}/mx-puppet-discord/database.db";
logging = {
console = "info";
lineDateFormat = "MMM-D HH:mm:ss.SSS";
};
};
example = literalExample ''
{
bridge = {
bindAddress = "localhost";
domain = "example.com";
homeserverUrl = "https://example.com";
};
provisioning.whitelist = [ "@admin:example.com" ];
relay.whitelist = [ "@.*:example.com" ];
}
'';
description = ''
<filename>config.yaml</filename> configuration as a Nix attribute set.
Configuration options should match those described in
<link xlink:href="https://github.com/matrix-discord/mx-puppet-discord/blob/master/sample.config.yaml">
sample.config.yaml</link>.
'';
};
serviceDependencies = mkOption {
type = with types; listOf str;
default = optional config.services.matrix-synapse.enable "matrix-synapse.service";
description = ''
List of Systemd services to require and wait for when starting the application service.
'';
};
};
};
config = mkIf cfg.enable {
systemd.services.mx-puppet-discord = {
description = ''
mx-puppet-discord is a Discord puppeting bridge for Matrix.
It handles bridging private and group DMs, as well as Guilds (servers).
'';
wantedBy = [ "multi-user.target" ];
wants = [ "network-online.target" ] ++ cfg.serviceDependencies;
after = [ "network-online.target" ] ++ cfg.serviceDependencies;
preStart = ''
# generate the appservice's registration file if absent
if [ ! -f '${registrationFile}' ]; then
${pkgs.mx-puppet-discord}/bin/mx-puppet-discord -r -c ${settingsFile} \
-f ${registrationFile}
fi
'';
serviceConfig = {
Type = "simple";
Restart = "always";
ProtectSystem = "strict";
ProtectHome = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
DynamicUser = true;
PrivateTmp = true;
WorkingDirectory = pkgs.mx-puppet-discord;
StateDirectory = baseNameOf dataDir;
UMask = "0027";
ExecStart = ''
${pkgs.mx-puppet-discord}/bin/mx-puppet-discord -c ${settingsFile}
'';
};
};
};
meta.maintainers = with maintainers; [ govanify ];
}
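Review note: a minimal sketch mirroring the module's own example; domain and user IDs are hypothetical.

```nix
{
  services.mx-puppet-discord = {
    enable = true;
    settings = {
      bridge = {
        bindAddress = "localhost";
        domain = "example.com";
        homeserverUrl = "https://example.com";
      };
      provisioning.whitelist = [ "@admin:example.com" ];
    };
  };
}
```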

View File

@ -0,0 +1,351 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.nitter;
configFile = pkgs.writeText "nitter.conf" ''
${generators.toINI {
# String values need to be quoted
mkKeyValue = generators.mkKeyValueDefault {
mkValueString = v:
if isString v then "\"" + (strings.escape ["\""] (toString v)) + "\""
else generators.mkValueStringDefault {} v;
} " = ";
} (lib.recursiveUpdate {
Server = cfg.server;
Cache = cfg.cache;
Config = cfg.config // { hmacKey = "@hmac@"; };
Preferences = cfg.preferences;
} cfg.settings)}
'';
# `hmac` is a secret used for cryptographic signing of video URLs.
# Generate it on first launch, then copy configuration and replace
# `@hmac@` with this value.
# We are not using sed as it would leak the value in the command line.
preStart = pkgs.writers.writePython3 "nitter-prestart" {} ''
import os
import secrets
state_dir = os.environ.get("STATE_DIRECTORY")
if not os.path.isfile(f"{state_dir}/hmac"):
# Generate hmac on first launch
hmac = secrets.token_hex(32)
with open(f"{state_dir}/hmac", "w") as f:
f.write(hmac)
else:
# Load previously generated hmac
with open(f"{state_dir}/hmac", "r") as f:
hmac = f.read()
configFile = "${configFile}"
with open(configFile, "r") as f_in:
with open(f"{state_dir}/nitter.conf", "w") as f_out:
f_out.write(f_in.read().replace("@hmac@", hmac))
'';
in
{
options = {
services.nitter = {
enable = mkEnableOption "If enabled, start Nitter.";
server = {
address = mkOption {
type = types.str;
default = "0.0.0.0";
example = "127.0.0.1";
description = "The address to listen on.";
};
port = mkOption {
type = types.port;
default = 8080;
example = 8000;
description = "The port to listen on.";
};
https = mkOption {
type = types.bool;
default = false;
description = "Set secure attribute on cookies. Keep it disabled to enable cookies when not using HTTPS.";
};
httpMaxConnections = mkOption {
type = types.int;
default = 100;
description = "Maximum number of HTTP connections.";
};
staticDir = mkOption {
type = types.path;
default = "${pkgs.nitter}/share/nitter/public";
defaultText = "\${pkgs.nitter}/share/nitter/public";
description = "Path to the static files directory.";
};
title = mkOption {
type = types.str;
default = "nitter";
description = "Title of the instance.";
};
hostname = mkOption {
type = types.str;
default = "localhost";
example = "nitter.net";
description = "Hostname of the instance.";
};
};
cache = {
listMinutes = mkOption {
type = types.int;
default = 240;
description = "How long to cache list info (not the tweets, so keep it high).";
};
rssMinutes = mkOption {
type = types.int;
default = 10;
description = "How long to cache RSS queries.";
};
redisHost = mkOption {
type = types.str;
default = "localhost";
description = "Redis host.";
};
redisPort = mkOption {
type = types.port;
default = 6379;
description = "Redis port.";
};
redisConnections = mkOption {
type = types.int;
default = 20;
description = "Redis connection pool size.";
};
redisMaxConnections = mkOption {
type = types.int;
default = 30;
description = ''
Maximum number of connections to Redis.
New connections are opened when none are available, but if the
pool size goes above this, they are closed when released; do not
worry about this unless you receive tons of requests per second.
'';
};
};
config = {
base64Media = mkOption {
type = types.bool;
default = false;
description = "Use base64 encoding for proxied media URLs.";
};
tokenCount = mkOption {
type = types.int;
default = 10;
description = ''
Minimum number of usable tokens.
Tokens are used to authorize API requests, but they expire after
~1 hour, and have a limit of 187 requests. The limit gets reset
every 15 minutes, and the pool is filled up so there is always at
least tokenCount usable tokens. Only increase this if you receive
major bursts all the time.
'';
};
};
preferences = {
replaceTwitter = mkOption {
type = types.str;
default = "";
example = "nitter.net";
description = "Replace Twitter links with links to this instance (blank to disable).";
};
replaceYouTube = mkOption {
type = types.str;
default = "";
example = "piped.kavin.rocks";
description = "Replace YouTube links with links to this instance (blank to disable).";
};
replaceInstagram = mkOption {
type = types.str;
default = "";
description = "Replace Instagram links with links to this instance (blank to disable).";
};
mp4Playback = mkOption {
type = types.bool;
default = true;
description = "Enable MP4 video playback.";
};
hlsPlayback = mkOption {
type = types.bool;
default = false;
description = "Enable HLS video streaming (requires JavaScript).";
};
proxyVideos = mkOption {
type = types.bool;
default = true;
description = "Proxy video streaming through the server (might be slow).";
};
muteVideos = mkOption {
type = types.bool;
default = false;
description = "Mute videos by default.";
};
autoplayGifs = mkOption {
type = types.bool;
default = true;
description = "Autoplay GIFs.";
};
theme = mkOption {
type = types.str;
default = "Nitter";
description = "Instance theme.";
};
infiniteScroll = mkOption {
type = types.bool;
default = false;
description = "Infinite scrolling (requires JavaScript, experimental!).";
};
stickyProfile = mkOption {
type = types.bool;
default = true;
description = "Make profile sidebar stick to top.";
};
bidiSupport = mkOption {
type = types.bool;
default = false;
description = "Support bidirectional text (makes clicking on tweets harder).";
};
hideTweetStats = mkOption {
type = types.bool;
default = false;
description = "Hide tweet stats (replies, retweets, likes).";
};
hideBanner = mkOption {
type = types.bool;
default = false;
description = "Hide profile banner.";
};
hidePins = mkOption {
type = types.bool;
default = false;
description = "Hide pinned tweets.";
};
hideReplies = mkOption {
type = types.bool;
default = false;
description = "Hide tweet replies.";
};
};
settings = mkOption {
type = types.attrs;
default = {};
description = ''
Add settings here to override NixOS module generated settings.
Check the official repository for the available settings:
https://github.com/zedeus/nitter/blob/master/nitter.conf
'';
};
redisCreateLocally = mkOption {
type = types.bool;
default = true;
description = "Configure local Redis server for Nitter.";
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = "Open ports in the firewall for Nitter web interface.";
};
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = !cfg.redisCreateLocally || (cfg.cache.redisHost == "localhost" && cfg.cache.redisPort == 6379);
message = "When services.nitter.redisCreateLocally is enabled, you need to use localhost:6379 as a cache server.";
}
];
systemd.services.nitter = {
description = "Nitter (An alternative Twitter front-end)";
wantedBy = [ "multi-user.target" ];
after = [ "syslog.target" "network.target" ];
serviceConfig = {
DynamicUser = true;
StateDirectory = "nitter";
Environment = [ "NITTER_CONF_FILE=/var/lib/nitter/nitter.conf" ];
# Some parts of Nitter expect `public` folder in working directory,
# see https://github.com/zedeus/nitter/issues/414
WorkingDirectory = "${pkgs.nitter}/share/nitter";
ExecStart = "${pkgs.nitter}/bin/nitter";
ExecStartPre = "${preStart}";
AmbientCapabilities = lib.mkIf (cfg.server.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
Restart = "on-failure";
RestartSec = "5s";
# Hardening
CapabilityBoundingSet = if (cfg.server.port < 1024) then [ "CAP_NET_BIND_SERVICE" ] else [ "" ];
DeviceAllow = [ "" ];
LockPersonality = true;
MemoryDenyWriteExecute = true;
PrivateDevices = true;
# A private user cannot have process capabilities on the host's user
# namespace and thus CAP_NET_BIND_SERVICE has no effect.
PrivateUsers = (cfg.server.port >= 1024);
ProcSubset = "pid";
ProtectClock = true;
ProtectControlGroups = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectProc = "invisible";
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter = [ "@system-service" "~@privileged" "~@resources" ];
UMask = "0077";
};
};
services.redis = lib.mkIf (cfg.redisCreateLocally) {
enable = true;
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.server.port ];
};
};
}
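Review note: a minimal sketch of the new module; the host name and port are hypothetical. `redisCreateLocally` defaults to true, so the cache section can stay at its defaults.

```nix
{
  services.nitter = {
    enable = true;
    server = {
      address = "127.0.0.1";
      port = 8888;
      hostname = "nitter.example.com";
    };
    preferences.theme = "Nitter";
  };
}
```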

View File

@ -458,7 +458,7 @@ in
description = "The flake reference to which <option>from></option> is to be rewritten.";
};
flake = mkOption {
type = types.unspecified;
type = types.nullOr types.attrs;
default = null;
example = literalExample "nixpkgs";
description = ''
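Review note: loosening the type to `nullOr attrs` lets a flake input be passed directly; a hypothetical flake.nix sketch:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }: {
    nixosConfigurations.host = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # Pin the system flake registry's nixpkgs entry to this input.
        { nix.registry.nixpkgs.flake = nixpkgs; }
      ];
    };
  };
}
```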

View File

@ -3,178 +3,110 @@
with lib;
let
cfg = config.services.uhub;
uhubPkg = pkgs.uhub.override { tlsSupport = cfg.enableTLS; };
pluginConfig = ""
+ optionalString cfg.plugins.authSqlite.enable ''
plugin ${uhubPkg.mod_auth_sqlite}/mod_auth_sqlite.so "file=${cfg.plugins.authSqlite.file}"
''
+ optionalString cfg.plugins.logging.enable ''
plugin ${uhubPkg.mod_logging}/mod_logging.so ${if cfg.plugins.logging.syslog then "syslog=true" else "file=${cfg.plugins.logging.file}"}
''
+ optionalString cfg.plugins.welcome.enable ''
plugin ${uhubPkg.mod_welcome}/mod_welcome.so "motd=${pkgs.writeText "motd.txt" cfg.plugins.welcome.motd} rules=${pkgs.writeText "rules.txt" cfg.plugins.welcome.rules}"
''
+ optionalString cfg.plugins.history.enable ''
plugin ${uhubPkg.mod_chat_history}/mod_chat_history.so "history_max=${toString cfg.plugins.history.max} history_default=${toString cfg.plugins.history.default} history_connect=${toString cfg.plugins.history.connect}"
'';
uhubConfigFile = pkgs.writeText "uhub.conf" ''
file_acl=${pkgs.writeText "users.conf" cfg.aclConfig}
file_plugins=${pkgs.writeText "plugins.conf" pluginConfig}
server_bind_addr=${cfg.address}
server_port=${toString cfg.port}
${lib.optionalString cfg.enableTLS "tls_enable=yes"}
${cfg.hubConfig}
'';
in
{
settingsFormat = {
type = with lib.types; attrsOf (oneOf [ bool int str ]);
generate = name: attrs:
pkgs.writeText name (lib.strings.concatStringsSep "\n"
(lib.attrsets.mapAttrsToList
(key: value: "${key}=${builtins.toJSON value}") attrs));
};
in {
options = {
services.uhub = {
services.uhub = mkOption {
default = { };
description = "Uhub ADC hub instances";
type = types.attrsOf (types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the uhub ADC hub.";
};
enable = mkEnableOption "hub instance" // { default = true; };
port = mkOption {
type = types.int;
default = 1511;
description = "TCP port to bind the hub to.";
};
address = mkOption {
type = types.str;
default = "any";
description = "Address to bind the hub to.";
};
enableTLS = mkOption {
type = types.bool;
default = false;
description = "Whether to enable TLS support.";
};
hubConfig = mkOption {
type = types.lines;
default = "";
description = "Contents of uhub configuration file.";
};
aclConfig = mkOption {
type = types.lines;
default = "";
description = "Contents of user ACL configuration file.";
};
plugins = {
authSqlite = {
enable = mkOption {
enableTLS = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the Sqlite authentication database plugin";
description = "Whether to enable TLS support.";
};
file = mkOption {
type = types.path;
example = "/var/db/uhub-users";
description = "Path to user database. Use the uhub-passwd utility to create the database and add/remove users.";
};
};
logging = {
enable = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the logging plugin.";
};
file = mkOption {
type = types.str;
default = "";
description = "Path of log file.";
};
syslog = mkOption {
type = types.bool;
default = false;
description = "If true then the system log is used instead of writing to file.";
};
};
welcome = {
enable = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the welcome plugin.";
};
motd = mkOption {
default = "";
type = types.lines;
settings = mkOption {
inherit (settingsFormat) type;
description = ''
Welcome message displayed to clients after connecting
and with the <literal>!motd</literal> command.
Configuration of uhub.
See https://www.uhub.org/doc/config.php for a list of options.
'';
default = { };
example = {
server_bind_addr = "any";
server_port = 1511;
hub_name = "My Public Hub";
hub_description = "Yet another ADC hub";
max_users = 150;
};
};
rules = mkOption {
default = "";
type = types.lines;
description = ''
Rules message, displayed to clients with the <literal>!rules</literal> command.
'';
};
};
history = {
enable = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the history plugin.";
plugins = mkOption {
description = "Uhub plugin configuration.";
type = with types;
listOf (submodule {
options = {
plugin = mkOption {
type = path;
example = literalExample
"$${pkgs.uhub}/plugins/mod_auth_sqlite.so";
description = "Path to plugin file.";
};
settings = mkOption {
description = "Settings specific to this plugin.";
type = with types; attrsOf str;
example = { file = "/etc/uhub/users.db"; };
};
};
});
default = [ ];
};
max = mkOption {
type = types.int;
default = 200;
description = "The maximum number of messages to keep in history";
};
default = mkOption {
type = types.int;
default = 10;
description = "When !history is provided without arguments, then this default number of messages are returned.";
};
connect = mkOption {
type = types.int;
default = 5;
description = "The number of chat history messages to send when users connect (0 = do not send any history).";
};
};
};
};
});
};
};
config = mkIf cfg.enable {
config = let
hubs = lib.attrsets.filterAttrs (_: cfg: cfg.enable) config.services.uhub;
in {
users = {
users.uhub.uid = config.ids.uids.uhub;
groups.uhub.gid = config.ids.gids.uhub;
};
environment.etc = lib.attrsets.mapAttrs' (name: cfg:
let
settings' = cfg.settings // {
tls_enable = cfg.enableTLS;
file_plugins = pkgs.writeText "uhub-plugins.conf"
(lib.strings.concatStringsSep "\n" (map ({ plugin, settings }:
"plugin ${plugin} ${
toString
(lib.attrsets.mapAttrsToList (key: value: ''"${key}=${value}"'')
settings)
}") cfg.plugins));
};
in {
name = "uhub/${name}.conf";
value.source = settingsFormat.generate "uhub-${name}.conf" settings';
}) hubs;
systemd.services.uhub = {
description = "high performance peer-to-peer hub for the ADC network";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "notify";
ExecStart = "${uhubPkg}/bin/uhub -c ${uhubConfigFile} -u uhub -g uhub -L";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
systemd.services = lib.attrsets.mapAttrs' (name: cfg: {
name = "uhub-${name}";
value = let pkg = pkgs.uhub.override { tlsSupport = cfg.enableTLS; };
in {
description = "high performance peer-to-peer hub for the ADC network";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
reloadIfChanged = true;
serviceConfig = {
Type = "notify";
ExecStart = "${pkg}/bin/uhub -c /etc/uhub/${name}.conf -L";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
DynamicUser = true;
};
};
};
}) hubs;
};
}
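Review note: a minimal sketch of the reworked multi-instance interface, reusing the plugin example from the option docs; the values are hypothetical. Defining an instance enables it, since `enable` now defaults to true.

```nix
{ pkgs, ... }:
{
  services.uhub.public = {
    settings = {
      server_bind_addr = "any";
      server_port = 1511;
      hub_name = "My Public Hub";
      max_users = 150;
    };
    plugins = [{
      plugin = "${pkgs.uhub}/plugins/mod_auth_sqlite.so";
      settings.file = "/etc/uhub/users.db";
    }];
  };
}
```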

View File

@ -102,8 +102,8 @@ in
plugins = mkOption {
type = types.listOf types.package;
default = with pkgs; [ nagiosPluginsOfficial ssmtp mailutils ];
defaultText = "[pkgs.nagiosPluginsOfficial pkgs.ssmtp pkgs.mailutils]";
default = with pkgs; [ monitoring-plugins ssmtp mailutils ];
defaultText = "[pkgs.monitoring-plugins pkgs.ssmtp pkgs.mailutils]";
description = "
Packages to be added to the Nagios <envar>PATH</envar>.
Typically used to add plugins, but can be anything.

View File

@ -33,6 +33,7 @@ let
"domain"
"dovecot"
"fritzbox"
"influxdb"
"json"
"jitsi"
"kea"

View File

@ -0,0 +1,34 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.influxdb;
in
{
port = 9122;
extraOpts = {
sampleExpiry = mkOption {
type = types.str;
default = "5m";
example = "10m";
description = "How long a sample is valid for";
};
udpBindAddress = mkOption {
type = types.str;
default = ":9122";
example = "192.0.2.1:9122";
description = "Address on which to listen for udp packets";
};
};
serviceOpts = {
serviceConfig = {
RuntimeDirectory = "prometheus-influxdb-exporter";
ExecStart = ''
${pkgs.prometheus-influxdb-exporter}/bin/influxdb_exporter \
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
--influxdb.sample-expiry ${cfg.sampleExpiry} ${concatStringsSep " " cfg.extraFlags}
'';
};
};
}
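Review note: a minimal sketch of enabling the new exporter; `enable`, `port`, and `listenAddress` come from the shared exporter options, and the values are hypothetical.

```nix
{
  services.prometheus.exporters.influxdb = {
    enable = true;
    port = 9122;
    sampleExpiry = "10m";
  };
}
```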

View File

@ -0,0 +1,100 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.litestream;
settingsFormat = pkgs.formats.yaml {};
in
{
options.services.litestream = {
enable = mkEnableOption "litestream";
package = mkOption {
description = "Package to use.";
default = pkgs.litestream;
defaultText = "pkgs.litestream";
type = types.package;
};
settings = mkOption {
description = ''
See the <link xlink:href="https://litestream.io/reference/config/">documentation</link>.
'';
type = settingsFormat.type;
example = {
dbs = [
{
path = "/var/lib/db1";
replicas = [
{
url = "s3://mybkt.litestream.io/db1";
}
];
}
];
};
};
environmentFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/run/secrets/litestream";
description = ''
Environment file as defined in <citerefentry>
<refentrytitle>systemd.exec</refentrytitle><manvolnum>5</manvolnum>
</citerefentry>.
Secrets may be passed to the service without adding them to the
world-readable Nix store, by specifying placeholder variables as
the option value in Nix and setting these variables accordingly in the
environment file.
By default, Litestream will perform environment variable expansion
within the config file before reading it. Any references to ''$VAR or
''${VAR} formatted variables will be replaced with their environment
variable values. If no value is set then it will be replaced with an
empty string.
<programlisting>
# Content of the environment file
LITESTREAM_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
LITESTREAM_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx
</programlisting>
Note that this file needs to be available on the host on which
this service is running.
'';
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.etc = {
"litestream.yml" = {
source = settingsFormat.generate "litestream-config.yaml" cfg.settings;
};
};
systemd.services.litestream = {
description = "Litestream";
wantedBy = [ "multi-user.target" ];
after = [ "networking.target" ];
serviceConfig = {
EnvironmentFile = mkIf (cfg.environmentFile != null) cfg.environmentFile;
ExecStart = "${cfg.package}/bin/litestream replicate";
Restart = "always";
User = "litestream";
Group = "litestream";
};
};
users.users.litestream = {
description = "Litestream user";
group = "litestream";
isSystemUser = true;
};
users.groups.litestream = {};
};
meta.doc = ./litestream.xml;
}

View File

@ -0,0 +1,65 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-litestream">
<title>Litestream</title>
<para>
<link xlink:href="https://litestream.io/">Litestream</link> is a standalone streaming
replication tool for SQLite.
</para>
<section xml:id="module-services-litestream-configuration">
<title>Configuration</title>
<para>
The Litestream service is managed by a dedicated user named <literal>litestream</literal>,
which needs permission to access the database file. Here's an example config which grants
the required permissions to access the <link linkend="opt-services.grafana.database.path">
Grafana database</link>:
<programlisting>
{ pkgs, ... }:
{
users.users.litestream.extraGroups = [ "grafana" ];
systemd.services.grafana.serviceConfig.ExecStartPost = "+" + pkgs.writeShellScript "grant-grafana-permissions" ''
timeout=10
while [ ! -f /var/lib/grafana/data/grafana.db ];
do
if [ "$timeout" == 0 ]; then
echo "ERROR: Timeout while waiting for /var/lib/grafana/data/grafana.db."
exit 1
fi
sleep 1
((timeout--))
done
find /var/lib/grafana -type d -exec chmod -v 775 {} \;
find /var/lib/grafana -type f -exec chmod -v 660 {} \;
'';
services.litestream = {
enable = true;
environmentFile = "/run/secrets/litestream";
settings = {
dbs = [
{
path = "/var/lib/grafana/data/grafana.db";
replicas = [{
url = "s3://mybkt.litestream.io/grafana";
}];
}
];
};
};
}
</programlisting>
</para>
</section>
</chapter>

View File

@ -79,7 +79,7 @@ in
systemd.services =
lib.fold ( s : acc : acc //
lib.foldr ( s : acc : acc //
{
"autossh-${s.name}" =
let

Some files were not shown because too many files have changed in this diff.