Merge remote-tracking branch 'upstream/master' into work-on-multi-shellFor

John Ericson 2020-02-22 17:34:08 -05:00
commit 196682b175
10073 changed files with 330797 additions and 225838 deletions

.github/CODEOWNERS

@ -11,10 +11,12 @@
/.github/CODEOWNERS @edolstra
# Libraries
/lib @edolstra @nbp
/lib @edolstra @nbp @infinisil
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/cli.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
/lib/asserts.nix @edolstra @nbp @Profpatsch
# Nixpkgs Internals
/default.nix @nbp
@ -29,10 +31,13 @@
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
# NixOS Internals
/nixos/default.nix @nbp
/nixos/lib/from-env.nix @nbp
/nixos/lib/eval-config.nix @nbp
/nixos/default.nix @nbp @infinisil
/nixos/lib/from-env.nix @nbp @infinisil
/nixos/lib/eval-config.nix @nbp @infinisil
/nixos/doc/manual/configuration/abstractions.xml @nbp
/nixos/doc/manual/configuration/config-file.xml @nbp
/nixos/doc/manual/configuration/config-syntax.xml @nbp
@ -47,22 +52,25 @@
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
# NixOS integration test driver
/nixos/lib/test-driver @tfc
# New NixOS modules
/nixos/modules/module-list.nix @Infinisil
# Python-related code and docs
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
# Haskell
/pkgs/development/compilers/ghc @basvandijk
/pkgs/development/haskell-modules @basvandijk
/pkgs/development/haskell-modules/default.nix @basvandijk
/pkgs/development/haskell-modules/generic-builder.nix @basvandijk
/pkgs/development/haskell-modules/hoogle.nix @basvandijk
/pkgs/development/compilers/ghc @cdepillabout
/pkgs/development/haskell-modules @cdepillabout @infinisil
/pkgs/development/haskell-modules/default.nix @cdepillabout
/pkgs/development/haskell-modules/generic-builder.nix @cdepillabout
/pkgs/development/haskell-modules/hoogle.nix @cdepillabout
# Perl
/pkgs/development/interpreters/perl @volth
@ -79,6 +87,7 @@
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7
/pkgs/build-support/rust @andir
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
@ -130,6 +139,12 @@
/nixos/tests/hardened.nix @joachifm
/pkgs/os-specific/linux/kernel/hardened-config.nix @joachifm
# Network Time Daemons
/pkgs/tools/networking/chrony @thoughtpolice
/pkgs/tools/networking/ntp @thoughtpolice
/pkgs/tools/networking/openntpd @thoughtpolice
/nixos/modules/services/networking/ntp @thoughtpolice
# Dhall
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch
@ -150,3 +165,19 @@
/pkgs/applications/editors/emacs-modes @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# VimPlugins
/pkgs/misc/vim-plugins @jonringer @softinio
# VsCode Extensions
/pkgs/misc/vscode-extensions @jonringer
# Prometheus exporter modules and tests
/nixos/modules/services/monitoring/prometheus/exporters.nix @WilliButz
/nixos/modules/services/monitoring/prometheus/exporters.xml @WilliButz
/nixos/tests/prometheus-exporters.nix @WilliButz
# PHP
/pkgs/development/interpreters/php @etu
/pkgs/top-level/php-packages.nix @etu
/pkgs/build-support/build-pecl.nix @etu


@ -6,9 +6,8 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* [Submit an issue](https://github.com/NixOS/nixpkgs/issues) - assuming one does not already exist.
* Clearly describe the issue including steps to reproduce when it is a bug.
* Include information what version of nixpkgs and Nix are you using (nixos-version or git revision).
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and fill out the template
## Submitting changes
@ -49,6 +48,15 @@ In addition to writing properly formatted commit messages, it's important to inc
For package version upgrades and such a one-line commit message is usually sufficient.
## Backporting changes
To [backport a change into a release branch](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches):
1. Take note of the commit in which the change was introduced into `master`.
2. Check out the target _release branch_, e.g. `release-19.09`. Do not use a _channel branch_ like `nixos-19.09` or `nixpkgs-19.09`.
3. Use `git cherry-pick -x <original commit>`.
4. Open your backport PR. Make sure to select the release branch (e.g. `release-19.09`) as the target branch of the PR, and link to the PR in which the original change was made to `master`.
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#sec-reviewing-contributions).
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).


@ -8,4 +8,4 @@
## Technical details
Please run `nix run nixpkgs.nix-info -c nix-info -m` and paste the result.
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.


@ -26,7 +26,7 @@ If applicable, add screenshots to help explain your problem.
Add any other context about the problem here.
**Metadata**
Please run `nix run nixpkgs.nix-info -c nix-info -m` and paste the result.
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
Maintainer information:
```yaml


@ -1,4 +1,4 @@
<!-- Nixpkgs has a lot of new incoming Pull Requests, but not enough people to review this constant stream. Even if you aren't a committer, we would appreciate reviews of other PRs, especially simple ones like package updates. Just testing the relevant package/service and leaving a comment saying what you tested, how you tested it and whether it worked would be great. List of open PRs: <https://github.com/NixOS/nixpkgs/pulls>, for more about reviewing contributions: <https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#sec-reviewing-contributions>. Reviewing isn't mandatory, but it would help out a lot and reduce the average time-to-merge for all of us. Thanks a lot if you do! -->
<!-- Nixpkgs has a lot of new incoming Pull Requests, but not enough people to review this constant stream. Even if you aren't a committer, we would appreciate reviews of other PRs, especially simple ones like package updates. Just testing the relevant package/service and leaving a comment saying what you tested, how you tested it and whether it worked would be great. List of open PRs: <https://github.com/NixOS/nixpkgs/pulls>, for more about reviewing contributions: <https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#chap-reviewing-contributions>. Reviewing isn't mandatory, but it would help out a lot and reduce the average time-to-merge for all of us. Thanks a lot if you do! -->
###### Motivation for this change
@ -6,18 +6,14 @@
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS)
- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
- Built on platform(s)
- [ ] NixOS
- [ ] macOS
- [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nix-review --run "nix-review wip"`
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)
- [ ] Ensured that relevant documentation is up to date
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
###### Notify maintainers
cc @

.github/stale.yml

@ -0,0 +1,32 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 180
# Number of days of inactivity before a stale issue is closed
daysUntilClose: false
# Issues with these labels will never be considered stale
exemptLabels:
- 1.severity: security
# Label to use when marking an issue as stale
staleLabel: 2.status: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
Thank you for your contributions.
This has been automatically marked as stale because it has had no
activity for 180 days.
If this is still important to you, we ask that you leave a
comment below. Your comment can be as simple as "still important
to me". This lets people see that at least one person still cares
about this. Someone will have to do this at most twice a year if
there is no other activity.
Here are suggestions that might help resolve this more quickly:
1. Search for maintainers and people that previously touched the
related code and @ mention them in a comment.
2. Ask on the [NixOS Discourse](https://discourse.nixos.org/).
3. Ask on the [#nixos channel](irc://irc.freenode.net/#nixos) on
[irc.freenode.net](https://freenode.net).
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false


@ -1 +1 @@
19.09
20.09


@ -1,4 +1,4 @@
Copyright (c) 2003-2019 Eelco Dolstra and the Nixpkgs/NixOS contributors
Copyright (c) 2003-2020 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the


@ -16,7 +16,7 @@
* [NixOS Manual](https://nixos.org/nixos/manual) - how to install, configure, and maintain a purely-functional Linux distribution
* [Nixpkgs Manual](https://nixos.org/nixpkgs/manual/) - contributing to Nixpkgs and using programming-language-specific Nix expressions
* [Nix Package Manager Manual](https://nixos.org/nix/manual) - how to write Nix expresssions (programs), and how to use Nix command line tools
* [Nix Package Manager Manual](https://nixos.org/nix/manual) - how to write Nix expressions (programs), and how to use Nix command line tools
# Community
@ -24,10 +24,11 @@
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)
* [NixOS Weekly](https://weekly.nixos.org/)
* [Community-maintained wiki](https://nixos.wiki/)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Matrix, Telegram, other IRC channels, etc.)
# Other Project Repositories
The sources of all offical Nix-related projects are in the [NixOS
The sources of all official Nix-related projects are in the [NixOS
organization on GitHub](https://github.com/NixOS/). Here are some of
the main ones:
@ -44,16 +45,14 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).
* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 19.03 release](https://hydra.nixos.org/jobset/nixos/release-19.03)
* [Continuous package builds for the NixOS 19.09 release](https://hydra.nixos.org/jobset/nixos/release-19.09)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 19.03 release](https://hydra.nixos.org/job/nixos/release-19.03/tested#tabs-constituents)
* [Tests for the NixOS 19.09 release](https://hydra.nixos.org/job/nixos/release-19.09/tested#tabs-constituents)
Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
met, the Nixpkgs expressions are distributed via [Nix
channels](https://nixos.org/nix/manual/#sec-channels). The channels
are provided via a read-only mirror of the Nixpkgs repository called
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels).
channels](https://nixos.org/nix/manual/#sec-channels).
# Contributing

doc/builders/fetchers.xml

@ -0,0 +1,150 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-pkgs-fetchers">
<title>Fetchers</title>
<para>
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
</para>
<para>
The two fetcher primitives are <function>fetchurl</function> and <function>fetchzip</function>. Both of these have two required arguments, a URL and a hash. The hash is typically <literal>sha256</literal>, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use <literal>sha256</literal>. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
</para>
<programlisting><![CDATA[
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "hello";
src = fetchurl {
url = "http://www.example.org/hello.tar.gz";
sha256 = "1111111111111111111111111111111111111111111111111111";
};
}
]]></programlisting>
<para>
The main difference between <function>fetchurl</function> and <function>fetchzip</function> is in how they store the contents. <function>fetchurl</function> will store the unaltered contents of the URL within the Nix store. <function>fetchzip</function> on the other hand will decompress the archive for you, making files and directories directly accessible in the future. <function>fetchzip</function> can only be used with archives. Despite the name, <function>fetchzip</function> is not limited to .zip files and can also be used with any tarball.
</para>
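<para>
  As an illustrative sketch (the URL and hash below are placeholders), a <function>fetchzip</function> call looks much like <function>fetchurl</function>:
</para>
<programlisting><![CDATA[
src = fetchzip {
  url = "http://www.example.org/hello-1.0.tar.gz";                  # placeholder URL
  sha256 = "1111111111111111111111111111111111111111111111111111"; # placeholder hash
};
]]></programlisting>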
<para>
<function>fetchpatch</function> works very similarly to <function>fetchurl</function>, with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
</para>
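<para>
  An illustrative <function>fetchpatch</function> call might look as follows (the URL and hash are placeholders):
</para>
<programlisting><![CDATA[
patches = [
  (fetchpatch {
    url = "https://example.org/fix-build.patch";                      # placeholder URL
    sha256 = "1111111111111111111111111111111111111111111111111111"; # placeholder hash
  })
];
]]></programlisting>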
<para>
Other fetcher functions allow you to add source code directly from a VCS such as subversion or git. These are mostly straightforward names based on the name of the command used with the VCS system. Because they give you a working repository, they act most like <function>fetchzip</function>.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchsvn</literal>
</term>
<listitem>
<para>
Used with Subversion. Expects <literal>url</literal> to a Subversion directory, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchgit</literal>
</term>
<listitem>
<para>
Used with Git. Expects <literal>url</literal> to a Git repo, <literal>rev</literal>, and <literal>sha256</literal>. <literal>rev</literal> in this case can be the full Git commit id (SHA-1 hash) or a tag name like <literal>refs/tags/v1.0</literal>. A short illustrative snippet follows this list.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchfossil</literal>
</term>
<listitem>
<para>
Used with Fossil. Expects <literal>url</literal> to a Fossil archive, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchcvs</literal>
</term>
<listitem>
<para>
Used with CVS. Expects <literal>cvsRoot</literal>, <literal>tag</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchhg</literal>
</term>
<listitem>
<para>
Used with Mercurial. Expects <literal>url</literal>, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
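<para>
  As a short illustrative snippet (the URL, revision, and hash are placeholders), <function>fetchgit</function> is typically used like this:
</para>
<programlisting><![CDATA[
src = fetchgit {
  url = "https://example.org/project.git";                          # placeholder repository
  rev = "refs/tags/v1.0";                                           # tag or full commit id
  sha256 = "1111111111111111111111111111111111111111111111111111"; # placeholder hash
};
]]></programlisting>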
<para>
A number of fetcher functions wrap part of <function>fetchurl</function> and <function>fetchzip</function>. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchFromGitHub</literal>
</term>
<listitem>
<para>
<function>fetchFromGitHub</function> expects four arguments. <literal>owner</literal> is a string corresponding to the GitHub user or organization that controls this repository. <literal>repo</literal> corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as <literal>owner</literal>/<literal>repo</literal>. <literal>rev</literal> corresponds to the Git commit hash or tag (e.g. <literal>v1.0</literal>) that will be downloaded from Git. Finally, <literal>sha256</literal> corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but <literal>sha256</literal> is currently preferred. An illustrative call is shown after this list.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitLab</literal>
</term>
<listitem>
<para>
This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitiles</literal>
</term>
<listitem>
<para>
This is used with Gitiles repositories. The arguments expected
are similar to fetchgit.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromBitbucket</literal>
</term>
<listitem>
<para>
This is used with Bitbucket repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromSavannah</literal>
</term>
<listitem>
<para>
This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromRepoOrCz</literal>
</term>
<listitem>
<para>
This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
</variablelist>
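<para>
  An illustrative <function>fetchFromGitHub</function> call (owner, repo, rev, and hash are placeholders) looks like this:
</para>
<programlisting><![CDATA[
src = fetchFromGitHub {
  owner = "example-owner";                                          # placeholder owner
  repo = "example-repo";                                            # placeholder repository
  rev = "v1.0";                                                     # tag or commit hash
  sha256 = "1111111111111111111111111111111111111111111111111111"; # placeholder hash
};
]]></programlisting>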
</chapter>

doc/builders/images.xml

@ -0,0 +1,12 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-images">
<title>Images</title>
<para>
This chapter describes tools for creating various types of images.
</para>
<xi:include href="images/appimagetools.xml" />
<xi:include href="images/dockertools.xml" />
<xi:include href="images/ocitools.xml" />
<xi:include href="images/snaptools.xml" />
</chapter>


@ -0,0 +1,102 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-appimageTools">
<title>pkgs.appimageTools</title>
<para>
<varname>pkgs.appimageTools</varname> is a set of functions for extracting and wrapping <link xlink:href="https://appimage.org/">AppImage</link> files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, <literal>pkgs.appimage-run</literal> can be used as well.
</para>
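<para>
  For example, assuming <filename>./some-application.AppImage</filename> is an AppImage you have downloaded, it can be run directly with:
</para>
<screen>
<prompt>$ </prompt>nix-shell -p appimage-run --run "appimage-run ./some-application.AppImage"
</screen>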
<warning>
<para>
The <varname>appimageTools</varname> API is unstable and may be subject to backwards-incompatible changes in the future.
</para>
</warning>
<section xml:id="ssec-pkgs-appimageTools-formats">
<title>AppImage formats</title>
<para>
There are different formats for AppImages, see <link xlink:href="https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format">the specification</link> for details.
</para>
<itemizedlist>
<listitem>
<para>
Type 1 images are ISO 9660 files that are also ELF executables.
</para>
</listitem>
<listitem>
<para>
Type 2 images are ELF executables with an appended filesystem.
</para>
</listitem>
</itemizedlist>
<para>
They can be told apart with <command>file -k</command>:
</para>
<screen>
<prompt>$ </prompt>file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
<prompt>$ </prompt>file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
</screen>
<para>
Note how the type 1 AppImage is described as an <literal>ISO 9660 CD-ROM filesystem</literal>, and the type 2 AppImage is not.
</para>
</section>
<section xml:id="ssec-pkgs-appimageTools-wrapping">
<title>Wrapping</title>
<para>
Depending on the type of AppImage you're wrapping, you'll have to use <varname>wrapType1</varname> or <varname>wrapType2</varname>.
</para>
<programlisting>
appimageTools.wrapType2 { # or wrapType1
name = "patchwork"; <co xml:id='ex-appimageTools-wrapping-1' />
src = fetchurl { <co xml:id='ex-appimageTools-wrapping-2' />
url = https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage;
sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
};
extraPkgs = pkgs: with pkgs; [ ]; <co xml:id='ex-appimageTools-wrapping-3' />
}</programlisting>
<calloutlist>
<callout arearefs='ex-appimageTools-wrapping-1'>
<para>
<varname>name</varname> specifies the name of the resulting image.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-2'>
<para>
<varname>src</varname> specifies the AppImage file to extract.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-3'>
<para>
<varname>extraPkgs</varname> allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:
<itemizedlist>
<listitem>
<para>
Looking through the extracted AppImage files, reading its scripts and running <command>patchelf</command> and <command>ldd</command> on its executables. This can also be done in <command>appimage-run</command>, by setting <command>APPIMAGE_DEBUG_EXEC=bash</command>.
</para>
</listitem>
<listitem>
<para>
Running <command>strace -vfefile</command> on the wrapped executable, looking for libraries that can't be found.
</para>
</listitem>
</itemizedlist>
</para>
</callout>
</calloutlist>
</section>
</section>


@ -0,0 +1,478 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and manipulating Docker images according to the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120"> Docker Image Specification v1.2.0 </link>. Docker itself is not used to perform any of the operations done by these functions.
</para>
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<title>buildImage</title>
<para>
This function is analogous to the <command>docker build</command> command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with <command>docker load</command>.
</para>
<para>
The parameters of <varname>buildImage</varname> with relative example values are described below:
</para>
<example xml:id='ex-dockerTools-buildImage'>
<title>Docker build</title>
<programlisting>
buildImage {
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' />
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' />
fromImage = someBaseImage; <co xml:id='ex-dockerTools-buildImage-3' />
fromImageName = null; <co xml:id='ex-dockerTools-buildImage-4' />
fromImageTag = "latest"; <co xml:id='ex-dockerTools-buildImage-5' />
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' />
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' />
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = { <co xml:id='ex-dockerTools-buildImage-8' />
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = {
"/data" = {};
};
};
}
</programlisting>
</example>
<para>
The above example will build a Docker image <literal>redis/latest</literal> from the given base image. Loading and running this image in Docker results in <literal>redis-server</literal> being started automatically.
</para>
<calloutlist>
<callout arearefs='ex-dockerTools-buildImage-1'>
<para>
<varname>name</varname> specifies the name of the resulting image. This is the only required argument for <varname>buildImage</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-2'>
<para>
<varname>tag</varname> specifies the tag of the resulting image. By default it's <literal>null</literal>, which indicates that the nix output hash will be used as tag.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-3'>
<para>
<varname>fromImage</varname> is the repository tarball containing the base image. It must be a valid Docker image, such as exported by <command>docker save</command>. By default it's <literal>null</literal>, which can be seen as equivalent to <literal>FROM scratch</literal> of a <filename>Dockerfile</filename>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-4'>
<para>
<varname>fromImageName</varname> can be used to further specify the base image within the repository, in case it contains multiple images. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will pick the first image available in the repository.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-5'>
<para>
<varname>fromImageTag</varname> can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will pick the first tag available for the base image.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-6'>
<para>
<varname>contents</varname> is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as <command>ADD contents/ /</command> in a <filename>Dockerfile</filename>. By default it's <literal>null</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'>
<para>
<varname>runAsRoot</varname> is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied <varname>contents</varname> derivation. This can be similarly seen as <command>RUN ...</command> in a <filename>Dockerfile</filename>.
<note>
<para>
Using this parameter requires the <literal>kvm</literal> device to be available.
</para>
</note>
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-8'>
<para>
<varname>config</varname> is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
</callout>
</calloutlist>
<para>
After the new layer has been created, its closure (to which <varname>contents</varname>, <varname>config</varname> and <varname>runAsRoot</varname> contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
</para>
<para>
At the end of the process, only one new single layer will be produced and added to the resulting image.
</para>
<para>
The resulting repository will only list the single image <varname>image/tag</varname>. In the case of <xref linkend='ex-dockerTools-buildImage'/> it would be <varname>redis/latest</varname>.
</para>
<para>
It is possible to inspect the arguments with which an image was built using its <varname>buildArgs</varname> attribute.
</para>
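<para>
  For example, assuming the derivation from <xref linkend='ex-dockerTools-buildImage'/> is bound to <varname>redisImage</varname>, its build arguments could be inspected in a <command>nix repl</command> session:
</para>
<screen>
<prompt>nix-repl&gt; </prompt>redisImage.buildArgs
</screen>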
<note>
<para>
If you see errors similar to <literal>getProtocolByName: does not exist (no such protocol name: tcp)</literal> you may need to add <literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
</para>
</note>
<note>
<para>
If you see errors similar to <literal>Error_Protocol ("certificate has unknown CA",True,UnknownCa)</literal> you may need to add <literal>pkgs.cacert</literal> to <varname>contents</varname>.
</para>
</note>
<example xml:id="example-pkgs-dockerTools-buildImage-creation-date">
<title>Impurely Defining a Docker Layer's Creation Date</title>
<para>
By default <function>buildImage</function> will use a static date of one second past the UNIX Epoch. This allows <function>buildImage</function> to produce binary reproducible images. When listing images with <command>docker images</command>, the newly created images will be listed like this:
</para>
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
]]></screen>
<para>
You can break binary reproducibility but have a sorted, meaningful <literal>CREATED</literal> column by setting <literal>created</literal> to <literal>now</literal>.
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
]]></programlisting>
<para>
and now the Docker CLI will display a reasonable date and sort the images as expected:
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
]]></screen>
however, the produced images will not be binary reproducible.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-dockerTools-buildLayeredImage">
<title>buildLayeredImage</title>
<para>
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. An illustrative invocation follows the parameter list below.
</para>
<variablelist>
<varlistentry>
<term>
<varname>name</varname>
</term>
<listitem>
<para>
The name of the resulting image.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>tag</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Tag of the generated image.
</para>
<para>
<emphasis>Default:</emphasis> the output path's hash
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>contents</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Top level paths in the container. Either a single derivation, or a list of derivations.
</para>
<para>
<emphasis>Default:</emphasis> <literal>[]</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>config</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Run-time configuration of the container. A full list of the options is available in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>{}</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>created</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Date and time the layers were created. Follows the same <literal>now</literal> exception supported by <literal>buildImage</literal>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>1970-01-01T00:00:01Z</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>maxLayers</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Maximum number of layers to create.
</para>
<para>
<emphasis>Default:</emphasis> <literal>100</literal>
</para>
<para>
<emphasis>Maximum:</emphasis> <literal>125</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>extraCommands</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so can create additional directories and files.
</para>
</listitem>
</varlistentry>
</variablelist>
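<para>
  An illustrative invocation combining the parameters described above (the values are only an example):
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
  name = "hello";                   # name of the resulting image
  tag = "latest";                   # optional tag
  contents = [ pkgs.hello ];        # top-level paths in the container
  config.Cmd = [ "/bin/hello" ];    # run-time configuration
  maxLayers = 120;                  # optional layer limit
}
]]></programlisting>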
<section xml:id="dockerTools-buildLayeredImage-arg-contents">
<title>Behavior of <varname>contents</varname> in the final image</title>
<para>
Each path directly listed in <varname>contents</varname> will have a symlink in the root of the image.
</para>
<para>
For example:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
]]></programlisting>
will create symlinks for all the paths in the <literal>hello</literal> package:
<screen><![CDATA[
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
]]></screen>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-config">
<title>Automatic inclusion of <varname>config</varname> references</title>
<para>
The closure of <varname>config</varname> is automatically included in the closure of the final image.
</para>
<para>
This allows you to make very simple Docker images with very little code. This container will start up and run <command>hello</command>:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
]]></programlisting>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-maxLayers">
<title>Adjusting <varname>maxLayers</varname></title>
<para>
Increasing the <varname>maxLayers</varname> increases the number of layers which have a chance to be shared between different images.
</para>
<para>
Modern Docker installations support up to 128 layers; however, older versions support as few as 42.
</para>
<para>
If the produced image will not be extended by other Docker builds, it is safe to set <varname>maxLayers</varname> to <literal>128</literal>. However it will be impossible to extend the image further.
</para>
<para>
The first (<literal>maxLayers-2</literal>) most "popular" paths will have their own individual layers, then layer #<literal>maxLayers-1</literal> will contain all the remaining "unpopular" paths, and finally layer #<literal>maxLayers</literal> will contain the Image configuration.
</para>
<para>
Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.
</para>
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>
<para>
This function is analogous to the <command>docker pull</command> command, in that it can be used to pull a Docker image from a Docker registry. By default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull images.
</para>
<para>
Its parameters are described in the example below:
</para>
<example xml:id='ex-dockerTools-pullImage'>
<title>Docker pull</title>
<programlisting>
pullImage {
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageName = "nix"; <co xml:id='ex-dockerTools-pullImage-3' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-4' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-5' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-6' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-7' />
}
</programlisting>
</example>
<calloutlist>
<callout arearefs='ex-dockerTools-pullImage-1'>
<para>
<varname>imageName</varname> specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. <literal>nixos</literal>). This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageDigest</varname> specifies the digest of the image to be downloaded. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>finalImageName</varname>, if specified, is the name of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to <varname>imageName</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para>
<varname>finalImageTag</varname>, if specified, is the tag of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's <literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-5'>
<para>
<varname>sha256</varname> is the checksum of the whole fetched image. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-6'>
<para>
<varname>os</varname>, if specified, is the operating system of the fetched image. By default it's <literal>linux</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-7'>
<para>
<varname>arch</varname>, if specified, is the cpu architecture of the fetched image. By default it's <literal>x86_64</literal>.
</para>
</callout>
</calloutlist>
<para>
The <literal>nix-prefetch-docker</literal> command can be used to get the required image parameters:
<screen>
<prompt>$ </prompt>nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
</screen>
Since a given <varname>imageName</varname> may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the <option>--os</option> and <option>--arch</option> arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
</screen>
The desired image name and tag can be set using the <option>--final-image-name</option> and <option>--final-image-tag</option> arguments:
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
</screen>
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<title>exportImage</title>
<para>
This function is analogous to the <command>docker export</command> command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with <command>docker import</command>.
</para>
<note>
<para>
Using this function requires the <literal>kvm</literal> device to be available.
</para>
</note>
<para>
The parameters of <varname>exportImage</varname> are the following:
</para>
<example xml:id='ex-dockerTools-exportImage'>
<title>Docker export</title>
<programlisting>
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
</programlisting>
</example>
<para>
The parameters relative to the base image have the same synopsis as described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except that <varname>fromImage</varname> is the only required argument in this case.
</para>
<para>
The <varname>name</varname> argument is the name of the derivation output, which defaults to <varname>fromImage.name</varname>.
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<title>shadowSetup</title>
<para>
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for being used in a <varname>runAsRoot</varname> <xref linkend='ex-dockerTools-buildImage-runAsRoot'/> script for cases like in the example below:
</para>
<example xml:id='ex-dockerTools-shadowSetup'>
<title>Shadow base files</title>
<programlisting>
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
</programlisting>
</example>
<para>
Creating base files like <literal>/etc/passwd</literal> or <literal>/etc/login.defs</literal> is necessary for shadow-utils to manipulate users and groups.
</para>
</section>
</section>


@ -0,0 +1,62 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-ociTools">
<title>pkgs.ociTools</title>
<para>
<varname>pkgs.ociTools</varname> is a set of functions for creating containers according to the <link xlink:href="https://github.com/opencontainers/runtime-spec">OCI container specification v1.0.0</link>. Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
</para>
<section xml:id="ssec-pkgs-ociTools-buildContainer">
<title>buildContainer</title>
<para>
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a <varname>config.json</varname> and a rootfs directory. The Nix store of the container will contain all referenced dependencies of the given command.
</para>
<para>
The parameters of <varname>buildContainer</varname> with an example value are described below:
</para>
<example xml:id='ex-ociTools-buildContainer'>
<title>Build Container</title>
<programlisting>
buildContainer {
args = [ (with pkgs; writeScript "run.sh" ''
#!${bash}/bin/bash
exec ${bash}/bin/bash
'').outPath ]; <co xml:id='ex-ociTools-buildContainer-1' />
mounts = {
"/data" = {
type = "none";
source = "/var/lib/mydata";
options = [ "bind" ];
};
};<co xml:id='ex-ociTools-buildContainer-2' />
readonly = false; <co xml:id='ex-ociTools-buildContainer-3' />
}
</programlisting>
<calloutlist>
<callout arearefs='ex-ociTools-buildContainer-1'>
<para>
<varname>args</varname> specifies a set of arguments to run inside the container. This is the only required argument for <varname>buildContainer</varname>. All referenced packages inside the derivation will be made available inside the container.
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-2'>
<para>
<varname>mounts</varname> specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems is mounted into the container (e.g. procfs, cgroupfs).
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-3'>
<para>
<varname>readonly</varname> makes the container's rootfs read-only if it is set to <literal>true</literal>. The default value is <literal>false</literal>.
</para>
</callout>
</calloutlist>
</example>
</section>
</section>


@ -0,0 +1,59 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-snapTools">
<title>pkgs.snapTools</title>
<para>
<varname>pkgs.snapTools</varname> is a set of functions for creating Snapcraft images. Snap and Snapcraft are not used to perform these operations.
</para>
<section xml:id="ssec-pkgs-snapTools-makeSnap-signature">
<title>The makeSnap Function</title>
<para>
<function>makeSnap</function> takes a single named argument, <parameter>meta</parameter>. This argument mirrors <link xlink:href="https://docs.snapcraft.io/snap-format">the upstream <filename>snap.yaml</filename> format</link> exactly.
</para>
<para>
The <parameter>base</parameter> should not be specified, as <function>makeSnap</function> will forcibly set it.
</para>
<para>
Currently, <function>makeSnap</function> does not support creating GUI stubs.
</para>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-hello">
<title>Build a Hello World Snap</title>
<example xml:id="ex-snapTools-buildSnap-hello">
<title>Making a Hello World Snap</title>
<para>
The following expression packages GNU Hello as a Snapcraft snap.
</para>
<programlisting><xi:include href="./snap/example-hello.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with <command>snap install ./result --dangerous</command>. <command>hello</command> will now be the Snapcraft version of the package.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-firefox">
<title>Build a Graphical Snap</title>
<example xml:id="ex-snapTools-buildSnap-firefox">
<title>Making a Graphical Snap</title>
<para>
Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.
</para>
<programlisting><xi:include href="./snap/example-firefox.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with <command>snap install ./result --dangerous</command>. <command>nix-example-firefox</command> will now be the Snapcraft version of the Firefox package.
</para>
<para>
The specific meaning behind plugs can be looked up in the <link xlink:href="https://docs.snapcraft.io/supported-interfaces">Snapcraft interface documentation</link>.
</para>
</example>
</section>
</section>


@ -0,0 +1,44 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-citrix">
<title>Citrix Workspace</title>
<para>
<note>
<para>
Please note that the <literal>citrix_receiver</literal> package has been deprecated since its development was <link xlink:href="https://docs.citrix.com/en-us/citrix-workspace-app.html">discontinued by upstream</link> and has been replaced by <link xlink:href="https://www.citrix.com/products/workspace-app/">the citrix workspace app</link>.
</para>
</note>
<link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> and <link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link> are remote desktop viewers which provide access to <link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license agreements of the vendor for <link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">Citrix Receiver</link> or <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link> need to be accepted first. Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>. With the archive available in the store the package can be built and installed with Nix.
</para>
<warning>
<title>Caution with <command>nix-shell</command> installs</title>
<para>
It's recommended to install <literal>Citrix Receiver</literal> and/or <literal>Citrix Workspace</literal> using <literal>nix-env -i</literal> or globally to ensure that the <literal>.desktop</literal> files are installed properly into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to open <literal>.ica</literal> files automatically from the browser to start a Citrix connection.
</para>
</warning>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Workspace App</literal> in <literal>nixpkgs</literal> trusts several certificates <link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in <link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>; however, this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package, using <literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_workspace.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>


@ -0,0 +1,24 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="dlib">
<title>DLib</title>
<para>
<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which provides several machine learning algorithms.
</para>
<section xml:id="compiling-without-avx-support">
<title>Compiling without AVX support</title>
<para>
Older CPUs in particular don't support the <link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link> (<abbrev>Advanced Vector Extensions</abbrev>) instructions that DLib uses to optimize its algorithms.
</para>
<para>
On the affected hardware, errors like <literal>Illegal instruction</literal> will occur. In those cases, AVX support needs to be disabled:
<programlisting>self: super: {
dlib = super.dlib.override { avxSupport = false; };
}</programlisting>
</para>
</section>
</section>


@ -0,0 +1,72 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-eclipse">
<title>Eclipse</title>
<para>
The Nix expressions related to the Eclipse platform and IDE are in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse"><filename>pkgs/applications/editors/eclipse</filename></link>.
</para>
<para>
Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list the available Eclipse packages by issuing the command:
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses --description
</screen>
Once an Eclipse variant is installed, it can be run using the <command>eclipse</command> command, as expected. From within Eclipse it is then possible to install plugins in the usual manner, by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
</para>
<para>
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an <emphasis>Eclipse environment</emphasis>. This type of environment is created using the function <varname>eclipseWithPlugins</varname> found inside the <varname>nixpkgs.eclipses</varname> attribute set. This function takes as argument <literal>{ eclipse, plugins ? [], jvmArgs ? [] }</literal> where <varname>eclipse</varname> is one of the Eclipse packages described above, <varname>plugins</varname> is a list of plugin derivations, and <varname>jvmArgs</varname> is a list of arguments given to the JVM running Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [ plugins.color-theme ];
};
}
</screen>
to your Nixpkgs configuration (<filename>~/.config/nixpkgs/config.nix</filename>) and install it by running <command>nix-env -f '&lt;nixpkgs&gt;' -iA myEclipse</command> and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using <varname>eclipseWithPlugins</varname> by running
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses.plugins --description
</screen>
</para>
<para>
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the <varname>buildEclipseUpdateSite</varname> and <varname>buildEclipsePlugin</varname> functions found in the <varname>nixpkgs.eclipses.plugins</varname> attribute set. Use the <varname>buildEclipseUpdateSite</varname> function to install a plugin distributed as an Eclipse update site. This function takes <literal>{ name, src }</literal> as argument where <literal>src</literal> indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the <varname>buildEclipsePlugin</varname> function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument <literal>{ name, srcFeature, srcPlugin }</literal> where <literal>srcFeature</literal> and <literal>srcPlugin</literal> are the feature and plugin JARs, respectively.
</para>
<para>
Expanding the previous example with two plugins using the above functions we have
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [
plugins.color-theme
(plugins.buildEclipsePlugin {
name = "myplugin1-1.0";
srcFeature = fetchurl {
url = "http://…/features/myplugin1.jar";
sha256 = "123…";
};
srcPlugin = fetchurl {
url = "http://…/plugins/myplugin1.jar";
sha256 = "123…";
};
});
(plugins.buildEclipseUpdateSite {
name = "myplugin2-1.0";
src = fetchurl {
stripRoot = false;
url = "http://…/myplugin2.zip";
sha256 = "123…";
};
});
];
};
}
</screen>
</para>
</section>


@ -0,0 +1,17 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-elm">
<title>Elm</title>
<para>
To start a development environment, run <command>nix-shell -p elmPackages.elm elmPackages.elm-format</command>
</para>
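<para>
  Equivalently, a minimal <filename>shell.nix</filename> sketch could provide the same environment:
</para>
<programlisting><![CDATA[
with import <nixpkgs> {};

mkShell {
  # the same tools as in the nix-shell invocation above
  buildInputs = [ elmPackages.elm elmPackages.elm-format ];
}
]]></programlisting>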
<para>
To update Elm compiler, see <filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
</para>
<para>
To package Elm applications, <link xlink:href="https://github.com/hercules-ci/elm2nix#elm2nix">read about elm2nix</link>.
</para>
</section>


@ -0,0 +1,131 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to configure. <varname>emacsWithPackages</varname> allows you to manage packages from ELPA. This means that you will not have to install those packages from within Emacs. For instance, if you wanted to use <literal>company</literal>, <literal>counsel</literal>, <literal>flycheck</literal>, <literal>ivy</literal>, <literal>magit</literal>, <literal>projectile</literal>, and <literal>use-package</literal>, you could use this as a <filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
You can install it like any other package via <command>nix-env -iA myEmacs</command>. However, this will only install those packages; it will not <literal>configure</literal> them for you. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a <filename>default.el</filename> file in <filename>/share/emacs/site-lisp/</filename>. Emacs knows to load this file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly complete Emacs start file. It will be loaded in addition to the user's personal config. You can always disable it by passing <command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). You cannot control these priorities when a package is installed as a dependency. You can override them on a per-package basis by providing all the required dependencies manually, but this is tedious and there is always the possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use <varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these packages will use the haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>

View File

@ -0,0 +1,57 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-ibus-typing-booster">
<title>ibus-engines.typing-booster</title>
<para>
This package is an ibus-based completion method to speed up typing.
</para>
<section xml:id="sec-ibus-typing-booster-activate">
<title>Activating the engine</title>
<para>
IBus needs to be configured accordingly to activate <literal>typing-booster</literal>. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the <link xlink:href="https://mike-fabian.github.io/ibus-typing-booster/documentation.html">upstream docs</link>.
</para>
<para>
On NixOS, you need to explicitly enable <literal>ibus</literal> with the chosen engines before customizing your desktop to use <literal>typing-booster</literal>. This can be achieved using the <literal>ibus</literal> module:
<programlisting>{ pkgs, ... }: {
i18n.inputMethod = {
enabled = "ibus";
ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
};
}</programlisting>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-customize-hunspell">
<title>Using custom hunspell dictionaries</title>
<para>
The IBus engine is based on <literal>hunspell</literal> to support completion in many languages. By default the dictionaries <literal>de-de</literal>, <literal>en-us</literal>, <literal>fr-moderne</literal>, <literal>es-es</literal>, <literal>it-it</literal>, <literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add another dictionary, the package can be overridden like this:
<programlisting>ibus-engines.typing-booster.override {
langs = [ "de-at" "en-gb" ];
}</programlisting>
</para>
<para>
<emphasis>Note: each language passed to <literal>langs</literal> must be an attribute name in <literal>pkgs.hunspellDicts</literal>.</emphasis>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-emoji-picker">
<title>Built-in emoji picker</title>
<para>
The <literal>ibus-engines.typing-booster</literal> package contains a program named <literal>emoji-picker</literal>. To display all emojis correctly, a special font such as <literal>noto-fonts-emoji</literal> is needed:
</para>
<para>
On NixOS it can be installed using the following expression:
<programlisting>{ pkgs, ... }: {
fonts.fonts = with pkgs; [ noto-fonts-emoji ];
}</programlisting>
</para>
</section>
</section>

View File

@ -0,0 +1,24 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-packages">
<title>Packages</title>
<para>
This chapter contains information about how to use and maintain the Nix expressions for a number of specific packages, such as the Linux kernel or X.org.
</para>
<xi:include href="citrix.xml" />
<xi:include href="dlib.xml" />
<xi:include href="eclipse.xml" />
<xi:include href="elm.xml" />
<xi:include href="emacs.xml" />
<xi:include href="ibus.xml" />
<xi:include href="kakoune.xml" />
<xi:include href="linux.xml" />
<xi:include href="locales.xml" />
<xi:include href="nginx.xml" />
<xi:include href="opengl.xml" />
<xi:include href="shell-helpers.xml" />
<xi:include href="steam.xml" />
<xi:include href="urxvt.xml" />
<xi:include href="weechat.xml" />
<xi:include href="xorg.xml" />
</chapter>

View File

@ -0,0 +1,14 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-kakoune">
<title>Kakoune</title>
<para>
Kakoune can be built to autoload plugins:
<programlisting>(kakoune.override {
configure = {
plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
};
})</programlisting>
</para>
</section>

View File

@ -0,0 +1,85 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-linux-kernel">
<title>Linux kernel</title>
<para>
The Nix expressions to build the Linux kernel are in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel"><filename>pkgs/os-specific/linux/kernel</filename></link>.
</para>
<para>
The function that builds the kernel has an argument <varname>kernelPatches</varname> which should be a list of <literal>{name, patch, extraConfig}</literal> attribute sets, where <varname>name</varname> is the name of the patch (which is included in the kernels <varname>meta.description</varname> attribute), <varname>patch</varname> is the patch itself (possibly compressed), and <varname>extraConfig</varname> (optional) is a string specifying extra options to be concatenated to the kernel configuration file (<filename>.config</filename>).
</para>
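  <para>
   For illustration, a single <varname>kernelPatches</varname> entry might look like the following sketch; the patch name, the patch file and the config option are placeholders:
<programlisting>
kernelPatches = [
  {
    # included in the kernel's meta.description
    name = "my-feature";
    # the patch itself, possibly compressed
    patch = ./my-feature.patch;
    # appended to the kernel .config; options are given without the CONFIG_ prefix
    extraConfig = ''
      DEBUG_INFO y
    '';
  }
];
</programlisting>
  </para>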
<para>
The kernel derivation exports an attribute <varname>features</varname> specifying whether optional functionality is or isnt enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the <varname>iwlwifi</varname> feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesnt have to build the external <varname>iwlwifi</varname> package:
<programlisting>
modulesTree = [kernel]
++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
++ ...;
</programlisting>
</para>
<para>
How to add a new (major) version of the Linux kernel to Nixpkgs:
<orderedlist>
<listitem>
<para>
Copy the old Nix expression (e.g. <filename>linux-2.6.21.nix</filename>) to the new one (e.g. <filename>linux-2.6.22.nix</filename>) and update it.
</para>
</listitem>
<listitem>
<para>
Add the new kernel to <filename>all-packages.nix</filename> (e.g., create an attribute <varname>kernel_2_6_22</varname>).
</para>
</listitem>
<listitem>
<para>
Now were going to update the kernel configuration. First unpack the kernel. Then for each supported platform (<literal>i686</literal>, <literal>x86_64</literal>, <literal>uml</literal>) do the following:
<orderedlist>
<listitem>
<para>
Copy the old config (e.g. <filename>config-2.6.21-i686-smp</filename>) to the new one (e.g. <filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
<listitem>
<para>
Copy the config file for this platform (e.g. <filename>config-2.6.22-i686-smp</filename>) to <filename>.config</filename> in the kernel source tree.
</para>
</listitem>
<listitem>
<para>
Run <literal>make oldconfig ARCH=<replaceable>{i386,x86_64,um}</replaceable></literal> and answer all questions. (For the uml configuration, also add <literal>SHELL=bash</literal>.) Make sure to keep the configuration consistent between platforms (i.e. dont enable some feature on <literal>i686</literal> and disable it on <literal>x86_64</literal>).
</para>
</listitem>
<listitem>
<para>
If needed you can also run <literal>make menuconfig</literal>:
<screen>
<prompt>$ </prompt>nix-env -i ncurses
<prompt>$ </prompt>export NIX_CFLAGS_LINK=-lncurses
<prompt>$ </prompt>make menuconfig ARCH=<replaceable>arch</replaceable></screen>
</para>
</listitem>
<listitem>
<para>
Copy <filename>.config</filename> over the new config file (e.g. <filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
</orderedlist>
</para>
</listitem>
<listitem>
<para>
Test building the kernel: <literal>nix-build -A kernel_2_6_22</literal>. If it compiles, ship it! For extra credit, try booting NixOS with it.
</para>
</listitem>
<listitem>
<para>
It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the <varname>linuxPackagesFor</varname> function in <filename>all-packages.nix</filename> (such as the NVIDIA drivers, AUFS, etc.). If the updated packages arent backwards compatible with older kernels, you may need to keep the older versions around.
</para>
</listitem>
</orderedlist>
</para>
</section>

View File

@ -0,0 +1,13 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="locales">
<title>Locales</title>
<para>
To allow the simultaneous use of packages linked against different versions of <literal>glibc</literal> with different locale archive formats, Nixpkgs patches <literal>glibc</literal> to rely on the <literal>LOCALE_ARCHIVE</literal> environment variable.
</para>
<para>
On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is to export the <literal>LOCALE_ARCHIVE</literal> variable pointing to <literal>${glibcLocales}/lib/locale/locale-archive</literal>. The drawback (and the reason this is not the default) is the relatively large size (about a hundred MiB) of the full set of locales. It is possible to build a custom set of locales by overriding the parameters <literal>allLocales</literal> and <literal>locales</literal> of the package.
</para>
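  <para>
   As a sketch, a smaller locale archive can be built by overriding those parameters; the locale names below are only examples:
<programlisting>
glibcLocales.override {
  allLocales = false;
  locales = [ "en_US.UTF-8/UTF-8" "de_DE.UTF-8/UTF-8" ];
}
</programlisting>
  </para>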
</section>

View File

@ -0,0 +1,25 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-nginx">
<title>Nginx</title>
<para>
<link xlink:href="https://nginx.org/">Nginx</link> is a reverse proxy and lightweight webserver.
</para>
<section xml:id="sec-nginx-etag">
<title>ETags on static files served from the Nix store</title>
<para>
HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the <link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><literal>Last-Modified</literal></link> response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the <literal>Last-Modified</literal> header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
</para>
<para>
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the <link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag"><literal>ETag</literal></link> response header. The value of the <literal>ETag</literal> header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an <literal>If-None-Match</literal> header. If the ETag value is unchanged, then the server does not need to resend the content.
</para>
<para>
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of <filename>/nix/store</filename>, the hash in the store path is used as the <literal>ETag</literal> header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.
</para>
</section>
</section>

View File

@ -0,0 +1,9 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-opengl">
<title>OpenGL</title>
<para>
Packages that use OpenGL have the NixOS desktop as their primary target. The current solution for loading the GPU-specific drivers is based on <literal>libglvnd</literal> and looks for the driver implementation in <literal>LD_LIBRARY_PATH</literal>. If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with the Nixpkgs versions of <literal>libglvnd</literal> and <literal>mesa_drivers</literal> in <literal>LD_LIBRARY_PATH</literal>. For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
</para>
</section>

View File

@ -0,0 +1,25 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-shell-helpers">
<title>Interactive shell helpers</title>
<para>
Some packages provide shell integration to make them more useful. But unlike other systems, Nix doesn't have a standard <filename>share</filename> directory location. This is why a number of <command>PACKAGE-share</command> scripts are shipped that print the location of the corresponding shared folder. The current list of such packages is as follows:
<itemizedlist>
<listitem>
<para>
<literal>autojump</literal>: <command>autojump-share</command>
</para>
</listitem>
<listitem>
<para>
<literal>fzf</literal>: <command>fzf-share</command>
</para>
</listitem>
</itemizedlist>
For example, <literal>autojump</literal> can then be used in <filename>.bashrc</filename> like this:
<screen>
source "$(autojump-share)/autojump.bash"
</screen>
</para>
</section>

View File

@ -0,0 +1,131 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called <filename>steam</filename> that on Ubuntu (their target distro) would go to <filename>/usr/bin</filename>. When run for the first time, this script copies some files to the user's home directory, including another script that is ultimately responsible for launching the Steam binary, which is also in <literal>$HOME</literal>.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point there. Similarly for <filename>/usr/bin/python</filename>.
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib</filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in <literal>$HOME</literal> cannot be patched, as it is checked and rewritten by Steam.
</para>
</listitem>
<listitem>
<para>
The Steam binary cannot be patched either; it is also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented <link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>. This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio; this will enable 32-bit ALSA application integration. To use the Steam controller or other Steam-supported controllers such as the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed in NixOS 17.09 and should no longer be needed.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at <link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no Java in the Steam chrootenv by default. If you get a message like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
you will need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect an FHS environment. To do so, add
<programlisting>(pkgs.steam.override {
  nativeOnly = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>

View File

@ -0,0 +1,13 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and developers) of Nixpkgs want to limit and tightly control their exposure to unfree software. At the same time, many users need (or want) to run some specific pieces of proprietary software. Nixpkgs includes some expressions for unfree software packages. By default unfree software cannot be installed and doesnt show up in searches. To allow installing unfree software in a single Nix invocation one can export <literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining the <literal>allowUnfreePredicate</literal> function in the configuration; it takes the <literal>mkDerivation</literal> parameter attrset and returns <literal>true</literal> for unfree packages that should be allowed.
</para>
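  <para>
   As a sketch, such a predicate in <filename>~/.config/nixpkgs/config.nix</filename> could look like this; the package name is only an example, and the predicate simply matches on <literal>pname</literal>:
<programlisting>
{
  allowUnfreePredicate = pkg: builtins.elem (pkg.pname or pkg.name) [ "vscode" ];
}
</programlisting>
  </para>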
</section>

View File

@ -0,0 +1,101 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-urxvt">
<title>Urxvt</title>
<para>
Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
</para>
<section xml:id="sec-urxvt-conf">
<title>Configuring urxvt</title>
<para>
In <literal>nixpkgs</literal>, urxvt is provided by the package
<literal>rxvt-unicode</literal>. It can be configured to include your choice
of plugins, reducing its closure size from the default configuration which
includes all available plugins. To make use of this functionality, use an
overlay or directly install an expression that overrides its configuration,
such as
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
plugins = with availablePlugins; [ perls resize-font vtwheel ];
};
}</programlisting>
If the <literal>configure</literal> function returns an attrset without the
<literal>plugins</literal> attribute, <literal>availablePlugins</literal>
will be used automatically.
</para>
<para>
In order to add plugins but also keep all default plugins installed, it is
possible to use the following method:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
plugins = (builtins.attrValues availablePlugins) ++ [ custom-plugin ];
};
}</programlisting>
</para>
<para>
To get a list of all the plugins available, open the Nix REPL and run
<programlisting>$ nix repl
:l &lt;nixpkgs&gt;
map (p: p.name) pkgs.rxvt-unicode.plugins
</programlisting>
Alternatively, if your shell is bash or zsh and has completion enabled,
simply type <literal>nixpkgs.rxvt-unicode.plugins.&lt;tab&gt;</literal>.
</para>
<para>
In addition to <literal>plugins</literal> the options
<literal>extraDeps</literal> and <literal>perlDeps</literal> can be used
to install extra packages.
<literal>extraDeps</literal> can be used, for example, to provide
<literal>xsel</literal> (a clipboard manager) to the clipboard plugin,
without installing it globally:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
extraDeps = [ xsel ];
};
}</programlisting>
<literal>perlDeps</literal> is a handy way to provide Perl packages to
your custom plugins (in <literal>$HOME/.urxvt/ext</literal>). For example,
if you need <literal>AnyEvent</literal> you can do:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
perlDeps = with perlPackages; [ AnyEvent ];
};
}</programlisting>
</para>
</section>
<section xml:id="sec-urxvt-pkg">
<title>Packaging urxvt plugins</title>
<para>
Urxvt plugins reside in
<literal>pkgs/applications/misc/rxvt-unicode-plugins</literal>.
To add a new plugin, create an expression in a subdirectory and add the
package to the set in
<literal>pkgs/applications/misc/rxvt-unicode-plugins/default.nix</literal>.
</para>
<para>
A plugin can be any kind of derivation; the only requirement is that it always installs Perl scripts in <literal>$out/lib/urxvt/perl</literal>.
Look at existing plugins for examples.
</para>
<para>
If the plugin is itself a Perl package that needs to be imported from
other plugins or scripts, add the following passthru attribute:
<programlisting>passthru.perlPackages = [ "self" ];
</programlisting>
This will make the urxvt wrapper pick up the dependency and set up the perl
path accordingly.
</para>
</section>
</section>

View File

@ -0,0 +1,85 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-weechat">
<title>Weechat</title>
<para>
Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
<programlisting>weechat.override {configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [ python perl ];
};
}</programlisting>
If the <literal>configure</literal> function returns an attrset without the <literal>plugins</literal> attribute, <literal>availablePlugins</literal> will be used automatically.
</para>
<para>
The plugins currently available are <literal>python</literal>, <literal>perl</literal>, <literal>ruby</literal>, <literal>guile</literal>, <literal>tcl</literal> and <literal>lua</literal>.
</para>
<para>
The python and perl plugins allow the addition of extra libraries. For instance, the <literal>inotify.py</literal> script in weechat-scripts requires D-Bus or libnotify, and the <literal>fish.py</literal> script requires pycrypto. To use these scripts, use the plugin's <literal>withPackages</literal> attribute:
<programlisting>weechat.override { configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [
(python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
];
};
}
</programlisting>
</para>
<para>
In order to also keep all default plugins installed, it is possible to use the following method:
<programlisting>weechat.override { configure = { availablePlugins, ... }: {
plugins = builtins.attrValues (availablePlugins // {
python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
});
}; }
</programlisting>
</para>
<para>
WeeChat allows setting defaults on startup using the <literal>--run-command</literal> argument. The <literal>configure</literal> method can be used to pass commands to the program:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add freenode chat.freenode.org
'';
};
}</programlisting>
Further values can be added to the list of commands when running <literal>weechat --run-command "your-commands"</literal>.
</para>
<para>
Additionally it's possible to specify scripts to be loaded when starting <literal>weechat</literal>. These will be loaded before the commands from <literal>init</literal>:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
scripts = with pkgs.weechatScripts; [
weechat-xmpp weechat-matrix-bridge wee-slack
];
init = ''
/set plugins.var.python.jabber.key "val"
'';
};
}</programlisting>
</para>
<para>
In <literal>nixpkgs</literal> there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a <literal>passthru.scripts</literal> attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in <literal>$out/share</literal>. An exemplary derivation looks like this:
<programlisting>{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "exemplary-weechat-script";
src = fetchurl {
url = "https://scripts.tld/your-scripts.tar.gz";
sha256 = "...";
};
passthru.scripts = [ "foo.py" "bar.lua" ];
installPhase = ''
mkdir -p $out/share
cp foo.py $out/share
cp bar.lua $out/share
'';
}</programlisting>
</para>
</section>

View File

@ -0,0 +1,34 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-xorg">
<title>X.org</title>
<para>
The Nix expressions for the X.org packages reside in <filename>pkgs/servers/x11/xorg/default.nix</filename>. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file <filename>pkgs/servers/x11/xorg/overrides.nix</filename>, in which you can override or add to the derivations produced by the generator.
</para>
<para>
The generator is invoked as follows:
<screen>
<prompt>$ </prompt>cd pkgs/servers/x11/xorg
<prompt>$ </prompt>cat tarballs-7.5.list extra.list old.list \
| perl ./generate-expr-from-tarballs.pl
</screen>
For each of the tarballs in the <filename>.list</filename> files, the script downloads it, unpacks it, and searches its <filename>configure.ac</filename> and <filename>*.pc.in</filename> files for dependencies. This information is used to generate <filename>default.nix</filename>. The generator caches downloaded tarballs between runs. Pay close attention to the <literal>NOT FOUND: <replaceable>name</replaceable></literal> messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
</para>
<para>
A file like <filename>tarballs-7.5.list</filename> contains all tarballs in an X.org release. It can be generated like this:
<screen>
<prompt>$ </prompt>export i="mirror://xorg/X11R7.4/src/everything/"
<prompt>$ </prompt>cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
| perl -e 'while (&lt;>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
| sort > tarballs-7.4.list
</screen>
<filename>extra.list</filename> contains libraries that arent part of X.org proper, but are closely related to it, such as <literal>libxcb</literal>. <filename>old.list</filename> contains some packages that were removed from X.org, but are still needed by some people or by other packages (such as <varname>imake</varname>).
</para>
<para>
If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, <varname>patches</varname> or a <varname>postInstall</varname> hook), you should modify <filename>pkgs/servers/x11/xorg/overrides.nix</filename>.
</para>
</section>

10
doc/builders/special.xml Normal file
View File

@ -0,0 +1,10 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-special">
<title>Special builders</title>
<para>
This chapter describes several special builders.
</para>
<xi:include href="special/fhs-environments.xml" />
<xi:include href="special/mkshell.xml" />
</chapter>

View File

@ -0,0 +1,122 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-fhs-environments">
<title>buildFHSUserEnv</title>
<para>
<function>buildFHSUserEnv</function> provides a way to build and run FHS-compatible lightweight sandboxes. It creates an isolated root with a bind-mounted <filename>/nix/store</filename>, so its footprint in terms of disk space is quite small. This allows one to run software which is hard or infeasible to patch for NixOS -- third-party source trees with FHS assumptions, games distributed as tarballs, software with integrity checking and/or external self-updating binaries. It uses the Linux namespaces feature to create temporary lightweight environments, which are destroyed after all child processes exit, without requiring root privileges. Accepted arguments are:
</para>
<variablelist>
<varlistentry>
<term>
<literal>name</literal>
</term>
<listitem>
<para>
Environment name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>targetPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>multiPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the directory structure.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommandsMulti</literal>
</term>
<listitem>
<para>
Like <literal>extraBuildCommands</literal>, but executed only on multilib architectures.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraOutputsToInstall</literal>
</term>
<listitem>
<para>
Additional derivation outputs to be linked for both target and multi-architecture packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraInstallCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the derivation with runner script.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>runScript</literal>
</term>
<listitem>
<para>
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to <literal>bash</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
One can create a simple environment using a <literal>shell.nix</literal> like this:
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]);
runScript = "bash";
}).env
]]></programlisting>
<para>
Running <literal>nix-shell</literal> would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect an FHS structure without hassle: simply change <literal>runScript</literal> to the application path, e.g. <filename>./bin/start.sh</filename> -- relative paths are supported.
</para>
</section>

View File

@ -0,0 +1,24 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-mkShell">
<title>pkgs.mkShell</title>
<para>
<function>pkgs.mkShell</function> is a special kind of derivation that is only useful when combined with <command>nix-shell</command>. It will in fact fail to build when invoked with <command>nix-build</command>.
</para>
<section xml:id="sec-pkgs-mkShell-usage">
<title>Usage</title>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
# this will make all the build inputs from hello and gnutar
# available to the shell environment
inputsFrom = with pkgs; [ hello gnutar ];
buildInputs = [ pkgs.gnumake ];
}
]]></programlisting>
</section>
</section>

View File

@ -0,0 +1,90 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-trivial-builders">
<title>Trivial builders</title>
<para>
Nixpkgs provides a couple of functions that help with building derivations. The most important one, <function>stdenv.mkDerivation</function>, has already been documented above. The following functions wrap <function>stdenv.mkDerivation</function>, making it easier to use in certain cases.
</para>
<variablelist>
<varlistentry xml:id="trivial-builder-runCommand">
<term>
<literal>runCommand</literal>
</term>
<listitem>
<para>
This takes three arguments, <literal>name</literal>, <literal>env</literal>, and <literal>buildCommand</literal>. <literal>name</literal> is just the name that Nix will append to the store path in the same way that <literal>stdenv.mkDerivation</literal> uses its <literal>name</literal> attribute. <literal>env</literal> is an attribute set specifying environment variables that will be set for this derivation. These attributes are then passed to the wrapped <literal>stdenv.mkDerivation</literal>. <literal>buildCommand</literal> specifies the commands that will be run to create this derivation. Note that you will need to create <literal>$out</literal> for Nix to register the command as successful.
</para>
<para>
An example of using <literal>runCommand</literal> is provided below.
</para>
<programlisting>
(import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out
echo I can write data to the Nix store > $out/message
echo I can also run basic commands like:
echo ls
ls
echo whoami
whoami
echo date
date
''
</programlisting>
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-runCommandCC">
<term>
<literal>runCommandCC</literal>
</term>
<listitem>
<para>
This works just like <literal>runCommand</literal>. The only difference is that it also provides a C compiler in <literal>buildCommand</literal>s environment. To minimize your dependencies, you should only use this if you are sure you will need a C compiler as part of running your command.
</para>
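     <para>
      As a sketch (assuming, as in the standard environment, that the C compiler wrapper puts a <command>cc</command> command on the <literal>PATH</literal>):
     </para>
     <programlisting>
(import &lt;nixpkgs&gt; {}).runCommandCC "hello-c" {} ''
  mkdir -p $out/bin
  echo 'int main(void) { return 0; }' > hello.c
  cc hello.c -o $out/bin/hello
''
     </programlisting>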
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-runCommandLocal">
<term>
<literal>runCommandLocal</literal>
</term>
<listitem>
<para>
Variant of <literal>runCommand</literal> that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (&lt;1s execution time). It saves on the network round-trip and can speed up a build.
</para>
<note><para>
This sets <link xlink:href="https://nixos.org/nix/manual/#adv-attr-allowSubstitutes"><literal>allowSubstitutes</literal> to <literal>false</literal></link>, so only use <literal>runCommandLocal</literal> if you are certain the user will always have a builder for the <literal>system</literal> of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the <literal>system</literal> is usually the same as <literal>builtins.currentSystem</literal>.
</para></note>
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-writeText">
<term>
<literal>writeTextFile</literal>, <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, <literal>writeScriptBin</literal>
</term>
<listitem>
<para>
These functions write <literal>text</literal> to the Nix store. This is useful for creating scripts from Nix expressions. <literal>writeTextFile</literal> takes an attribute set with two required attributes, <literal>name</literal> and <literal>text</literal>. <literal>name</literal> corresponds to the name used in the Nix store path. <literal>text</literal> will be the contents of the file. You can also set <literal>executable</literal> to <literal>true</literal> to make the file executable.
</para>
<para>
Many more commands wrap <literal>writeTextFile</literal> including <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, and <literal>writeScriptBin</literal>. These are convenience functions over <literal>writeTextFile</literal>.
</para>
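     <para>
      As a sketch, assuming the Nixpkgs package set is in scope as <literal>pkgs</literal> (the file name and contents are only examples):
     </para>
     <programlisting>
pkgs.writeTextFile {
  name = "my-script";
  executable = true;
  text = ''
    #!/bin/sh
    echo "hello world"
  '';
}
     </programlisting>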
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-symlinkJoin">
<term>
<literal>symlinkJoin</literal>
</term>
<listitem>
<para>
This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, <literal>name</literal>, and <literal>paths</literal>. <literal>name</literal> is the name used in the Nix store path for the created derivation. <literal>paths</literal> is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
</para>
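     <para>
      As a sketch, joining two packages into a single tree of symlinks (the name and package choices are arbitrary):
     </para>
     <programlisting>
pkgs.symlinkJoin {
  name = "my-tools";
  paths = [ pkgs.hello pkgs.gnutar ];
}
     </programlisting>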
</listitem>
</varlistentry>
</variablelist>
</chapter>

File diff suppressed because it is too large

View File

@ -1,544 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-packageconfig">
<title>Global configuration</title>
<para>
Nix comes with certain defaults about what packages can and cannot be
installed, based on a package's metadata. By default, Nix will prevent
installation if any of the following criteria are true:
</para>
<itemizedlist>
<listitem>
<para>
The package is thought to be broken, and has had its
<literal>meta.broken</literal> set to <literal>true</literal>.
</para>
</listitem>
<listitem>
<para>
The package isn't intended to run on the given system, as none of its
<literal>meta.platforms</literal> match the given system.
</para>
</listitem>
<listitem>
<para>
The package's <literal>meta.license</literal> is set to a license which is
considered to be unfree.
</para>
</listitem>
<listitem>
<para>
The package has known security vulnerabilities but has not or can not be
updated for some reason, and a list of issues has been entered in to the
package's <literal>meta.knownVulnerabilities</literal>.
</para>
</listitem>
</itemizedlist>
<para>
Note that all this is checked during evaluation already, and the check
includes any package that is evaluated. In particular, all build-time
dependencies are checked. <literal>nix-env -qa</literal> will (attempt to)
hide any packages that would be refused.
</para>
<para>
Each of these criteria can be altered in the nixpkgs configuration.
</para>
<para>
The nixpkgs configuration for a NixOS system is set in the
<literal>configuration.nix</literal>, as in the following example:
<programlisting>
{
nixpkgs.config = {
allowUnfree = true;
};
}
</programlisting>
However, this does not allow unfree software for individual users. Their
configurations are managed separately.
</para>
<para>
A user's nixpkgs configuration is stored in a user-specific configuration
file located at <filename>~/.config/nixpkgs/config.nix</filename>. For
example:
<programlisting>
{
allowUnfree = true;
}
</programlisting>
</para>
<para>
Note that we are not able to test or build unfree software on Hydra due to
policy. Most unfree licenses prohibit us from either executing or
distributing the software.
</para>
<section xml:id="sec-allow-broken">
<title>Installing broken packages</title>
<para>
There are two ways to try compiling a package which has been marked as
broken.
</para>
<itemizedlist>
<listitem>
<para>
To allow building a broken package once, you can use an
environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_BROKEN=1</programlisting>
</para>
</listitem>
<listitem>
<para>
To permanently allow broken packages to be built, you may add
<literal>allowBroken = true;</literal> to your user's configuration file,
like this:
<programlisting>
{
allowBroken = true;
}
</programlisting>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-allow-unsupported-system">
<title>Installing packages on unsupported systems</title>
<para>
There are also two ways to try compiling a package which has been marked as
unsupported for the given system.
</para>
<itemizedlist>
<listitem>
<para>
To allow building an unsupported package once, you can use an
environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1</programlisting>
</para>
</listitem>
<listitem>
<para>
To permanently allow unsupported packages to be built, you may add
<literal>allowUnsupportedSystem = true;</literal> to your user's
configuration file, like this:
<programlisting>
{
allowUnsupportedSystem = true;
}
</programlisting>
</para>
</listitem>
</itemizedlist>
<para>
The difference between a package being unsupported on some system and being
broken is admittedly a bit fuzzy. If a program <emphasis>ought</emphasis> to
work on a certain platform, but doesn't, the platform should be included in
<literal>meta.platforms</literal>, but marked as broken with e.g.
<literal>meta.broken = !hostPlatform.isWindows</literal>. Of course, this
begs the question of what "ought" means exactly. That is left to the package
maintainer.
</para>
</section>
<section xml:id="sec-allow-unfree">
<title>Installing unfree packages</title>
<para>
There are several ways to tweak how Nix handles a package which has been
marked as unfree.
</para>
<itemizedlist>
<listitem>
<para>
To temporarily allow all unfree packages, you can use an environment
variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNFREE=1</programlisting>
</para>
</listitem>
<listitem>
<para>
It is possible to permanently allow individual unfree packages, while
still blocking unfree packages by default using the
<literal>allowUnfreePredicate</literal> configuration option in the user
configuration file.
</para>
<para>
This option is a function which accepts a package as a parameter, and
returns a boolean. The following example configuration accepts a package
and always returns false:
<programlisting>
{
allowUnfreePredicate = (pkg: false);
}
</programlisting>
</para>
<para>
For a more useful example, try the following. This configuration only
allows unfree packages named flash player and visual studio code:
<programlisting>
{
allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
"flashplayer"
"vscode"
];
}
</programlisting>
</para>
</listitem>
<listitem>
<para>
It is also possible to whitelist and blacklist licenses that are
specifically acceptable or not acceptable, using
<literal>whitelistedLicenses</literal> and
<literal>blacklistedLicenses</literal>, respectively.
</para>
<para>
The following example configuration whitelists the licenses
<literal>amd</literal> and <literal>wtfpl</literal>:
<programlisting>
{
whitelistedLicenses = with stdenv.lib.licenses; [ amd wtfpl ];
}
</programlisting>
</para>
<para>
The following example configuration blacklists the <literal>gpl3</literal>
and <literal>agpl3</literal> licenses:
<programlisting>
{
blacklistedLicenses = with stdenv.lib.licenses; [ agpl3 gpl3 ];
}
</programlisting>
</para>
</listitem>
</itemizedlist>
<para>
A complete list of licenses can be found in the file
<filename>lib/licenses.nix</filename> of the nixpkgs tree.
</para>
</section>
<section xml:id="sec-allow-insecure">
<title>Installing insecure packages</title>
<para>
There are several ways to tweak how Nix handles a package which has been
marked as insecure.
</para>
<itemizedlist>
<listitem>
<para>
To temporarily allow all insecure packages, you can use an environment
variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_INSECURE=1</programlisting>
</para>
</listitem>
<listitem>
<para>
It is possible to permanently allow individual insecure packages, while
still blocking other insecure packages by default using the
<literal>permittedInsecurePackages</literal> configuration option in the
user configuration file.
</para>
<para>
The following example configuration permits the installation of the
hypothetically insecure package <literal>hello</literal>, version
<literal>1.2.3</literal>:
<programlisting>
{
permittedInsecurePackages = [
"hello-1.2.3"
];
}
</programlisting>
</para>
</listitem>
<listitem>
<para>
It is also possible to create a custom policy around which insecure
packages to allow and deny, by overriding the
<literal>allowInsecurePredicate</literal> configuration option.
</para>
<para>
The <literal>allowInsecurePredicate</literal> option is a function which
accepts a package and returns a boolean, much like
<literal>allowUnfreePredicate</literal>.
</para>
<para>
The following configuration example only allows insecure packages with
very short names:
<programlisting>
{
allowInsecurePredicate = pkg: builtins.stringLength (lib.getName pkg) &lt;= 5;
}
</programlisting>
</para>
<para>
Note that <literal>permittedInsecurePackages</literal> is only checked if
<literal>allowInsecurePredicate</literal> is not specified.
</para>
</listitem>
</itemizedlist>
</section>
<!--============================================================-->
<section xml:id="sec-modify-via-packageOverrides">
<title>Modify packages via <literal>packageOverrides</literal></title>
<para>
You can define a function called <varname>packageOverrides</varname> in your
local <filename>~/.config/nixpkgs/config.nix</filename> to override Nix
packages. It must be a function that takes pkgs as an argument and returns a
modified set of packages.
<programlisting>
{
packageOverrides = pkgs: rec {
foo = pkgs.foo.override { ... };
};
}
</programlisting>
</para>
</section>
<section xml:id="sec-declarative-package-management">
<title>Declarative Package Management</title>
<section xml:id="sec-building-environment">
<title>Build an environment</title>
<para>
Using <literal>packageOverrides</literal>, it is possible to manage
packages declaratively. This means that we can list all of our desired
packages within a declarative Nix expression. For example, to have
<literal>aspell</literal>, <literal>bc</literal>,
<literal>ffmpeg</literal>, <literal>coreutils</literal>,
<literal>gdb</literal>, <literal>nixUnstable</literal>,
<literal>emscripten</literal>, <literal>jq</literal>,
<literal>nox</literal>, and <literal>silver-searcher</literal>, we could
use the following in <filename>~/.config/nixpkgs/config.nix</filename>:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
aspell
bc
coreutils
gdb
ffmpeg
nixUnstable
emscripten
jq
nox
silver-searcher
];
};
};
}
</screen>
<para>
To install it into our environment, you can just run <literal>nix-env -iA
nixpkgs.myPackages</literal>. If you want to load the packages to be built
from a working copy of <literal>nixpkgs</literal>, you just run
<literal>nix-env -f . -iA myPackages</literal>. To explore what's been
installed, just look through <filename>~/.nix-profile/</filename>. You can
see that a lot of stuff has been installed. Some of this stuff is useful,
some of it isn't. Let's tell Nixpkgs to only link the stuff that we want:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
aspell
bc
coreutils
gdb
ffmpeg
nixUnstable
emscripten
jq
nox
silver-searcher
];
pathsToLink = [ "/share" "/bin" ];
};
};
}
</screen>
<para>
<literal>pathsToLink</literal> tells Nixpkgs to only link the paths listed,
which gets rid of the extra stuff in the profile. <filename>/bin</filename>
and <filename>/share</filename> are good defaults for a user environment,
getting rid of the clutter. If you are running Nix on macOS, you may
want to add another path as well, <filename>/Applications</filename>, which
makes GUI apps available.
</para>
</section>
<section xml:id="sec-getting-documentation">
<title>Getting documentation</title>
<para>
After building that new environment, look through
<filename>~/.nix-profile</filename> to make sure everything is there that
we wanted. Discerning readers will note that some files are missing. Look
inside <filename>~/.nix-profile/share/man/man1/</filename> to verify this.
There are no man pages for any of the Nix tools! This is because some
packages like Nix have multiple outputs for things like documentation (see
section 4). Let's make Nix install those as well.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
aspell
bc
coreutils
ffmpeg
nixUnstable
emscripten
jq
nox
silver-searcher
];
pathsToLink = [ "/share/man" "/share/doc" "/bin" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
This provides us with some useful documentation for using our packages.
However, if we actually want those manpages to be detected by man, we need
to set up our environment. This can also be managed within Nix expressions.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
];
pathsToLink = [ "/share/man" "/share/doc" "/bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
For this to work fully, you must also have this script sourced when you are
logged in. Try adding something like this to your
<filename>~/.profile</filename> file:
</para>
<screen>
#!/bin/sh
if [ -d $HOME/.nix-profile/etc/profile.d ]; then
for i in $HOME/.nix-profile/etc/profile.d/*.sh; do
if [ -r $i ]; then
. $i
fi
done
fi
</screen>
<para>
Now just run <literal>source $HOME/.profile</literal> and you can start
loading man pages from your environment.
</para>
</section>
<section xml:id="sec-gnu-info-setup">
<title>GNU info setup</title>
<para>
Configuring GNU info is a little bit trickier than man pages. To work
correctly, info needs a database to be generated. This can be done with
some small modifications to our environment scripts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
export INFOPATH=$HOME/.nix-profile/share/info:/nix/var/nix/profiles/default/share/info:/usr/share/info
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
texinfoInteractive
];
pathsToLink = [ "/share/man" "/share/doc" "/share/info" "/bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" "info" ];
postBuild = ''
if [ -x $out/bin/install-info -a -w $out/share/info ]; then
shopt -s nullglob
for i in $out/share/info/*.info $out/share/info/*.info.gz; do
$out/bin/install-info $i $out/share/info/dir
done
fi
'';
};
};
}
</screen>
<para>
<literal>postBuild</literal> tells Nixpkgs to run a command after building
the environment. In this case, <literal>install-info</literal> adds the
installed info pages to <literal>dir</literal> which is GNU info's default
root node. Note that <literal>texinfoInteractive</literal> is added to the
environment to provide the <literal>install-info</literal> command.
</para>
</section>
</section>
</chapter>

View File

@ -1,35 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-contributing">
<title>Contributing to this documentation</title>
<para>
The DocBook sources of the Nixpkgs manual are in the
<filename
xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename>
subdirectory of the Nixpkgs repository.
</para>
<para>
You can quickly check your edits with <command>make</command>:
</para>
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make
</screen>
<para>
If you experience problems, run <command>make debug</command> to help
understand the docbook errors.
</para>
<para>
After making modifications to the manual, it's important to build it before
committing. You can do that as follows:
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make clean
<prompt>[nix-shell]$ </prompt>nix-build .
</screen>
If the build succeeds, the manual will be in
<filename>./result/share/doc/nixpkgs/manual.html</filename>.
</para>
</chapter>

View File

@ -0,0 +1,924 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-conventions">
<title>Coding conventions</title>
<section xml:id="sec-syntax">
<title>Syntax</title>
<itemizedlist>
<listitem>
<para>
Use 2 spaces of indentation per indentation level in Nix expressions, 4 spaces in shell scripts.
</para>
</listitem>
<listitem>
<para>
Do not use tab characters, i.e. configure your editor to use soft tabs. For instance, use <literal>(setq-default indent-tabs-mode nil)</literal> in Emacs. Everybody has different tab settings so its asking for trouble.
</para>
</listitem>
<listitem>
<para>
Use <literal>lowerCamelCase</literal> for variable names, not <literal>UpperCamelCase</literal>. Note, this rule does not apply to package attribute names, which instead follow the rules in <xref linkend="sec-package-naming"/>.
</para>
</listitem>
<listitem>
<para>
Function calls with attribute set arguments are written as
<programlisting>
foo {
  arg = ...;
}
</programlisting>
not
<programlisting>
foo
{
  arg = ...;
}
</programlisting>
Also fine is
<programlisting>
foo { arg = ...; }
</programlisting>
if it's a short call.
</para>
</listitem>
<listitem>
<para>
In attribute sets or lists that span multiple lines, the attribute names or list elements should be aligned:
<programlisting>
# A long list.
list = [
  elem1
  elem2
  elem3
];

# A long attribute set.
attrs = {
  attr1 = short_expr;
  attr2 =
    if true then big_expr else big_expr;
};

# Combined
listOfAttrs = [
  {
    attr1 = 3;
    attr2 = "fff";
  }
  {
    attr1 = 5;
    attr2 = "ggg";
  }
];
</programlisting>
</para>
</listitem>
<listitem>
<para>
Short lists or attribute sets can be written on one line:
<programlisting>
# A short list.
list = [ elem1 elem2 elem3 ];
# A short set.
attrs = { x = 1280; y = 1024; };
</programlisting>
</para>
</listitem>
<listitem>
<para>
Breaking in the middle of a function argument can give hard-to-read code, like
<programlisting>
someFunction { x = 1280;
  y = 1024; } otherArg
  yetAnotherArg
</programlisting>
(especially if the argument is very large, spanning multiple lines).
</para>
<para>
Better:
<programlisting>
someFunction
  { x = 1280; y = 1024; }
  otherArg
  yetAnotherArg
</programlisting>
or
<programlisting>
let res = { x = 1280; y = 1024; };
in someFunction res otherArg yetAnotherArg
</programlisting>
</para>
</listitem>
<listitem>
<para>
The bodies of functions, asserts, and withs are not indented to prevent a lot of superfluous indentation levels, i.e.
<programlisting>
{ arg1, arg2 }:
assert system == "i686-linux";
stdenv.mkDerivation { ...
</programlisting>
not
<programlisting>
{ arg1, arg2 }:
  assert system == "i686-linux";
    stdenv.mkDerivation { ...
</programlisting>
</para>
</listitem>
<listitem>
<para>
Function formal arguments are written as:
<programlisting>
{ arg1, arg2, arg3 }:
</programlisting>
but if they don't fit on one line they're written as:
<programlisting>
{ arg1, arg2, arg3
, arg4, ...
, # Some comment...
  argN
}:
</programlisting>
</para>
</listitem>
<listitem>
<para>
Functions should list their expected arguments as precisely as possible. That is, write
<programlisting>
{ stdenv, fetchurl, perl }: <replaceable>...</replaceable>
</programlisting>
instead of
<programlisting>
args: with args; <replaceable>...</replaceable>
</programlisting>
or
<programlisting>
{ stdenv, fetchurl, perl, ... }: <replaceable>...</replaceable>
</programlisting>
</para>
<para>
For functions that are truly generic in the number of arguments (such as wrappers around <varname>mkDerivation</varname>) that have some required arguments, you should write them using an <literal>@</literal>-pattern:
<programlisting>
{ stdenv, doCoverageAnalysis ? false, ... } @ args:
stdenv.mkDerivation (args // {
<replaceable>...</replaceable> if doCoverageAnalysis then "bla" else "" <replaceable>...</replaceable>
})
</programlisting>
instead of
<programlisting>
args:
args.stdenv.mkDerivation (args // {
<replaceable>...</replaceable> if args ? doCoverageAnalysis &amp;&amp; args.doCoverageAnalysis then "bla" else "" <replaceable>...</replaceable>
})
</programlisting>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-package-naming">
<title>Package naming</title>
<para>
The key words <emphasis>must</emphasis>, <emphasis>must not</emphasis>, <emphasis>required</emphasis>, <emphasis>shall</emphasis>, <emphasis>shall not</emphasis>, <emphasis>should</emphasis>, <emphasis>should not</emphasis>, <emphasis>recommended</emphasis>, <emphasis>may</emphasis>, and <emphasis>optional</emphasis> in this section are to be interpreted as described in <link xlink:href="https://tools.ietf.org/html/rfc2119">RFC 2119</link>. Only <emphasis>emphasized</emphasis> words are to be interpreted in this way.
</para>
<para>
In Nixpkgs, there are generally three different names associated with a package:
<itemizedlist>
<listitem>
<para>
The <varname>name</varname> attribute of the derivation (excluding the version part). This is what most users see, in particular when using <command>nix-env</command>.
</para>
</listitem>
<listitem>
<para>
The variable name used for the instantiated package in <filename>all-packages.nix</filename>, and when passing it as a dependency to other functions. Typically this is called the <emphasis>package attribute name</emphasis>. This is what Nix expression authors see. It can also be used when installing using <command>nix-env -iA</command>.
</para>
</listitem>
<listitem>
<para>
The filename for (the directory containing) the Nix expression.
</para>
</listitem>
</itemizedlist>
Most of the time, these are the same. For instance, the package <literal>e2fsprogs</literal> has a <varname>name</varname> attribute <literal>"e2fsprogs-<replaceable>version</replaceable>"</literal>, is bound to the variable name <varname>e2fsprogs</varname> in <filename>all-packages.nix</filename>, and the Nix expression is in <filename>pkgs/os-specific/linux/e2fsprogs/default.nix</filename>.
</para>
<para>
There are a few naming guidelines:
<itemizedlist>
<listitem>
<para>
The <literal>name</literal> attribute <emphasis>should</emphasis> be identical to the upstream package name.
</para>
</listitem>
<listitem>
<para>
The <literal>name</literal> attribute <emphasis>must not</emphasis> contain uppercase letters — e.g., <literal>"mplayer-1.0rc2"</literal> instead of <literal>"MPlayer-1.0rc2"</literal>.
</para>
</listitem>
<listitem>
<para>
The version part of the <literal>name</literal> attribute <emphasis>must</emphasis> start with a digit (following a dash) — e.g., <literal>"hello-0.3.1rc2"</literal>.
</para>
</listitem>
<listitem>
<para>
If a package is not a release but a commit from a repository, then the version part of the name <emphasis>must</emphasis> be the date of that (fetched) commit. The date <emphasis>must</emphasis> be in <literal>"YYYY-MM-DD"</literal> format. Also append <literal>"unstable"</literal> to the name - e.g., <literal>"pkgname-unstable-2014-09-23"</literal>.
</para>
</listitem>
<listitem>
<para>
Dashes in the package name <emphasis>should</emphasis> be preserved in new variable names, rather than converted to underscores or camel cased — e.g., <varname>http-parser</varname> instead of <varname>http_parser</varname> or <varname>httpParser</varname>. The hyphenated style is preferred in all three package names.
</para>
</listitem>
<listitem>
<para>
If there are multiple versions of a package, this <emphasis>should</emphasis> be reflected in the variable names in <filename>all-packages.nix</filename>, e.g. <varname>json-c-0-9</varname> and <varname>json-c-0-11</varname>. If there is an obvious “default” version, make an attribute like <literal>json-c = json-c-0-9;</literal>. See also <xref linkend="sec-versioning" />.
</para>
</listitem>
</itemizedlist>
</para>
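<para>
As a purely illustrative sketch (the package name and date are hypothetical), a package built from a repository checkout that follows the guidelines above might set its name like this:
</para>
<programlisting>
stdenv.mkDerivation rec {
  version = "unstable-2014-09-23";
  name = "pkgname-${version}";
  # ...
}
</programlisting>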
</section>
<section xml:id="sec-organisation">
<title>File naming and organisation</title>
<para>
Names of files and directories should be in lowercase, with dashes between words — not in camel case. For instance, it should be <filename>all-packages.nix</filename>, not <filename>allPackages.nix</filename> or <filename>AllPackages.nix</filename>.
</para>
<section xml:id="sec-hierarchy">
<title>Hierarchy</title>
<para>
Each package should be stored in its own directory somewhere in the <filename>pkgs/</filename> tree, i.e. in <filename>pkgs/<replaceable>category</replaceable>/<replaceable>subcategory</replaceable>/<replaceable>...</replaceable>/<replaceable>pkgname</replaceable></filename>. Below are some rules for picking the right category for a package. Many packages fall under several categories; what matters is the <emphasis>primary</emphasis> purpose of a package. For example, the <literal>libxml2</literal> package builds both a library and some tools; but it's a library foremost, so it goes under <filename>pkgs/development/libraries</filename>.
</para>
<para>
When in doubt, consider refactoring the <filename>pkgs/</filename> tree, e.g. creating new categories or splitting up an existing category.
</para>
<variablelist>
<varlistentry>
<term>
If it's used to support <emphasis>software development</emphasis>:
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's a <emphasis>library</emphasis> used by other packages:
</term>
<listitem>
<para>
<filename>development/libraries</filename> (e.g. <filename>libxml2</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>compiler</emphasis>:
</term>
<listitem>
<para>
<filename>development/compilers</filename> (e.g. <filename>gcc</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an <emphasis>interpreter</emphasis>:
</term>
<listitem>
<para>
<filename>development/interpreters</filename> (e.g. <filename>guile</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a (set of) development <emphasis>tool(s)</emphasis>:
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's a <emphasis>parser generator</emphasis> (including lexers):
</term>
<listitem>
<para>
<filename>development/tools/parsing</filename> (e.g. <filename>bison</filename>, <filename>flex</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>build manager</emphasis>:
</term>
<listitem>
<para>
<filename>development/tools/build-managers</filename> (e.g. <filename>gnumake</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>development/tools/misc</filename> (e.g. <filename>binutils</filename>)
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>development/misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a (set of) <emphasis>tool(s)</emphasis>:
</term>
<listitem>
<para>
(A tool is a relatively small program, especially one intended to be used non-interactively.)
</para>
<variablelist>
<varlistentry>
<term>
If it's for <emphasis>networking</emphasis>:
</term>
<listitem>
<para>
<filename>tools/networking</filename> (e.g. <filename>wget</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's for <emphasis>text processing</emphasis>:
</term>
<listitem>
<para>
<filename>tools/text</filename> (e.g. <filename>diffutils</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>system utility</emphasis>, i.e., something related or essential to the operation of a system:
</term>
<listitem>
<para>
<filename>tools/system</filename> (e.g. <filename>cron</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an <emphasis>archiver</emphasis> (which may include a compression function):
</term>
<listitem>
<para>
<filename>tools/archivers</filename> (e.g. <filename>zip</filename>, <filename>tar</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>compression</emphasis> program:
</term>
<listitem>
<para>
<filename>tools/compression</filename> (e.g. <filename>gzip</filename>, <filename>bzip2</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>security</emphasis>-related program:
</term>
<listitem>
<para>
<filename>tools/security</filename> (e.g. <filename>nmap</filename>, <filename>gnupg</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>tools/misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>shell</emphasis>:
</term>
<listitem>
<para>
<filename>shells</filename> (e.g. <filename>bash</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>server</emphasis>:
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's a web server:
</term>
<listitem>
<para>
<filename>servers/http</filename> (e.g. <filename>apache-httpd</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an implementation of the X Windowing System:
</term>
<listitem>
<para>
<filename>servers/x11</filename> (e.g. <filename>xorg</filename> — this includes the client libraries and programs)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>servers/misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>desktop environment</emphasis>:
</term>
<listitem>
<para>
<filename>desktops</filename> (e.g. <filename>kde</filename>, <filename>gnome</filename>, <filename>enlightenment</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>window manager</emphasis>:
</term>
<listitem>
<para>
<filename>applications/window-managers</filename> (e.g. <filename>awesome</filename>, <filename>stumpwm</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an <emphasis>application</emphasis>:
</term>
<listitem>
<para>
A (typically large) program with a distinct user interface, primarily used interactively.
</para>
<variablelist>
<varlistentry>
<term>
If it's a <emphasis>version management system</emphasis>:
</term>
<listitem>
<para>
<filename>applications/version-management</filename> (e.g. <filename>subversion</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's for <emphasis>video playback / editing</emphasis>:
</term>
<listitem>
<para>
<filename>applications/video</filename> (e.g. <filename>vlc</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's for <emphasis>graphics viewing / editing</emphasis>:
</term>
<listitem>
<para>
<filename>applications/graphics</filename> (e.g. <filename>gimp</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's for <emphasis>networking</emphasis>:
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's a <emphasis>mailreader</emphasis>:
</term>
<listitem>
<para>
<filename>applications/networking/mailreaders</filename> (e.g. <filename>thunderbird</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>newsreader</emphasis>:
</term>
<listitem>
<para>
<filename>applications/networking/newsreaders</filename> (e.g. <filename>pan</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>web browser</emphasis>:
</term>
<listitem>
<para>
<filename>applications/networking/browsers</filename> (e.g. <filename>firefox</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>applications/networking/misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>applications/misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's <emphasis>data</emphasis> (i.e., does not have straightforward executable semantics):
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's a <emphasis>font</emphasis>:
</term>
<listitem>
<para>
<filename>data/fonts</filename>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an <emphasis>icon theme</emphasis>:
</term>
<listitem>
<para>
<filename>data/icons</filename>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's related to <emphasis>SGML/XML processing</emphasis>:
</term>
<listitem>
<variablelist>
<varlistentry>
<term>
If it's an <emphasis>XML DTD</emphasis>:
</term>
<listitem>
<para>
<filename>data/sgml+xml/schemas/xml-dtd</filename> (e.g. <filename>docbook</filename>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's an <emphasis>XSLT stylesheet</emphasis>:
</term>
<listitem>
<para>
(Okay, these are executable...)
</para>
<para>
<filename>data/sgml+xml/stylesheets/xslt</filename> (e.g. <filename>docbook-xsl</filename>)
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>theme</emphasis> for a <emphasis>desktop environment</emphasis>,
a <emphasis>window manager</emphasis> or a <emphasis>display manager</emphasis>:
</term>
<listitem>
<para>
<filename>data/themes</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</varlistentry>
<varlistentry>
<term>
If it's a <emphasis>game</emphasis>:
</term>
<listitem>
<para>
<filename>games</filename>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Else:
</term>
<listitem>
<para>
<filename>misc</filename>
</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="sec-versioning">
<title>Versioning</title>
<para>
Because every version of a package in Nixpkgs creates a potential maintenance burden, old versions of a package should not be kept unless there is a good reason to do so. For instance, Nixpkgs contains several versions of GCC because other packages don't build with the latest version of GCC. Other examples are having both the latest stable and the latest pre-release version of a package, or keeping several major releases of an application that differ significantly in functionality.
</para>
<para>
If there is only one version of a package, its Nix expression should be named <filename>e2fsprogs/default.nix</filename>. If there are multiple versions, this should be reflected in the filename, e.g. <filename>e2fsprogs/1.41.8.nix</filename> and <filename>e2fsprogs/1.41.9.nix</filename>. The version in the filename should leave out unnecessary detail. For instance, if we keep the latest Firefox 2.0.x and 3.5.x versions in Nixpkgs, they should be named <filename>firefox/2.0.nix</filename> and <filename>firefox/3.5.nix</filename>, respectively (which, at a given point, might contain versions <literal>2.0.0.20</literal> and <literal>3.5.4</literal>). If a version requires many auxiliary files, you can use a subdirectory for each version, e.g. <filename>firefox/2.0/default.nix</filename> and <filename>firefox/3.5/default.nix</filename>.
</para>
<para>
All versions of a package <emphasis>must</emphasis> be included in <filename>all-packages.nix</filename> to make sure that they evaluate correctly.
</para>
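<para>
For illustration only (the attribute names and file paths below are hypothetical), the corresponding entries in <filename>all-packages.nix</filename> might look like:
</para>
<programlisting>
json-c-0-9 = callPackage ../development/libraries/json-c/0.9.nix { };
json-c-0-11 = callPackage ../development/libraries/json-c/0.11.nix { };
json-c = json-c-0-9;
</programlisting>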
</section>
</section>
<section xml:id="sec-sources">
<title>Fetching Sources</title>
<para>
There are multiple ways to fetch a package source in Nixpkgs. The general guideline is that you should package reproducible sources with a high degree of availability. Right now, the only fetcher with mirroring support is <literal>fetchurl</literal>. Note that you should also prefer protocols which have a corresponding proxy environment variable.
</para>
<para>
You can find many source fetch helpers in <literal>pkgs/build-support/fetch*</literal>.
</para>
<para>
In the file <literal>pkgs/top-level/all-packages.nix</literal> you can find fetch helpers with names of the form <literal>fetchFrom*</literal>. Their intention is to provide snapshot fetches using the same API as the version-controlled fetchers from <literal>pkgs/build-support/</literal>. As an example, going from bad to good:
<itemizedlist>
<listitem>
<para>
Bad: Uses <literal>git://</literal> which won't be proxied.
<programlisting>
src = fetchgit {
url = "git://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
</programlisting>
</para>
</listitem>
<listitem>
<para>
Better: This is ok, but an archive fetch will still be faster.
<programlisting>
src = fetchgit {
url = "https://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
</programlisting>
</para>
</listitem>
<listitem>
<para>
Best: Fetches a snapshot archive and you get the rev you want.
<programlisting>
src = fetchFromGitHub {
owner = "NixOS";
repo = "nix";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1i2yxndxb6yc9l6c99pypbd92lfq5aac4klq7y2v93c9qvx2cgpc";
}
</programlisting>
Find the value to put as <literal>sha256</literal> by running <literal>nix run -f '&lt;nixpkgs&gt;' nix-prefetch-github -c nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS nix</literal> or <literal>nix-prefetch-url --unpack https://github.com/NixOS/nix/archive/1f795f9f44607cc5bec70d1300150bfefcef2aae.tar.gz</literal>.
</para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="sec-source-hashes">
<title>Obtaining source hash</title>
<para>
The preferred source hash type is sha256. There are several ways to get it.
</para>
<orderedlist>
<listitem>
<para>
Prefetch URL (with <literal>nix-prefetch-<replaceable>XXX</replaceable> <replaceable>URL</replaceable></literal>, where <replaceable>XXX</replaceable> is one of <literal>url</literal>, <literal>git</literal>, <literal>hg</literal>, <literal>cvs</literal>, <literal>bzr</literal>, <literal>svn</literal>). The hash is printed to stdout.
</para>
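<para>
For example (the URL below is just a placeholder):
</para>
<screen>
<prompt>$ </prompt>nix-prefetch-url https://example.org/foo-1.2.3.tar.gz
</screen>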
</listitem>
<listitem>
<para>
Prefetch by package source (with <literal>nix-prefetch-url '&lt;nixpkgs&gt;' -A <replaceable>PACKAGE</replaceable>.src</literal>, where <replaceable>PACKAGE</replaceable> is the package attribute name). The hash is printed to stdout.
</para>
<para>
This works well when you've upgraded an existing package version and want to find out the new hash, but is useless if the package can't be accessed by attribute or has multiple sources (<literal>.srcs</literal>, architecture-dependent sources, etc.).
</para>
</listitem>
<listitem>
<para>
Upstream provided hash: use it when upstream provides <literal>sha256</literal> or <literal>sha512</literal> (when upstream provides <literal>md5</literal>, don't use it, compute <literal>sha256</literal> instead).
</para>
<para>
A little nuance is that the <literal>nix-prefetch-*</literal> tools produce hashes encoded with <literal>base32</literal>, but upstream usually provides hexadecimal (<literal>base16</literal>) encoding. Fetchers understand both formats. Nixpkgs does not standardize on any one format.
</para>
<para>
You can convert between formats with nix-hash, for example:
<screen>
<prompt>$ </prompt>nix-hash --type sha256 --to-base32 <replaceable>HASH</replaceable>
</screen>
</para>
</listitem>
<listitem>
<para>
Extracting the hash from a local source tarball can be done with <literal>sha256sum</literal>. Use <literal>nix-prefetch-url file:///path/to/tarball </literal> if you want the base32 hash.
</para>
</listitem>
<listitem>
<para>
Fake hash: set a fake hash in the package expression, perform the build, and extract the correct hash from the error Nix prints.
</para>
<para>
For package updates it is enough to change one character of the hash to make it fake. For new packages, you can use <literal>lib.fakeSha256</literal>, <literal>lib.fakeSha512</literal> or any other fake hash.
</para>
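<para>
A minimal sketch of this approach (the URL is a placeholder):
</para>
<programlisting>
src = fetchurl {
  url = "https://example.org/foo-1.2.3.tar.gz";
  # Fake hash (lib must be in scope); the failed build will report the correct hash.
  sha256 = lib.fakeSha256;
};
</programlisting>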
<para>
This is a last-resort method, for when reconstructing the source URL is non-trivial and <literal>nix-prefetch-url -A</literal> isn't applicable (for example, <link xlink:href="https://github.com/NixOS/nixpkgs/blob/d2ab091dd308b99e4912b805a5eb088dd536adb9/pkgs/applications/video/kodi/default.nix#L73"> one of <literal>kodi</literal>'s dependencies</link>). The easiest way then is to replace the hash with a fake one and rebuild. The Nix build will fail and the error message will contain the desired hash.
</para>
<warning>
<para>
This method has security problems. Check below for details.
</para>
</warning>
</listitem>
</orderedlist>
<section xml:id="sec-source-hashes-security">
<title>Obtaining hashes securely</title>
<para>
Let's say a man-in-the-middle (MITM) attacker sits close to your network. Then instead of fetching the source you can fetch malware, and instead of the source hash you get the hash of the malware. Here are security considerations for this scenario:
</para>
<itemizedlist>
<listitem>
<para>
<literal>http://</literal> URLs are not secure to prefetch a hash from;
</para>
</listitem>
<listitem>
<para>
hashes from upstream (in method 3) should be obtained via secure protocol;
</para>
</listitem>
<listitem>
<para>
<literal>https://</literal> URLs are secure in methods 1, 2, 3;
</para>
</listitem>
<listitem>
<para>
<literal>https://</literal> URLs are not secure in method 5. When obtaining hashes with the fake-hash method, TLS checks are disabled, so refetch the source hash from several different networks to exclude a MITM scenario. Alternatively, use the fake-hash method to make Nix error out, but instead of extracting the hash from the error, extract the <literal>https://</literal> URL and prefetch it with method 1.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="sec-patches">
<title>Patches</title>
<para>
Patches available online should be retrieved using <literal>fetchpatch</literal>.
</para>
<para>
<programlisting>
patches = [
  (fetchpatch {
    name = "fix-check-for-using-shared-freetype-lib.patch";
    url = "http://git.ghostscript.com/?p=ghostpdl.git;a=patch;h=8f5d285";
    sha256 = "1f0k043rng7f0rfl9hhb89qzvvksqmkrikmm38p61yfx51l325xr";
  })
];
</programlisting>
</para>
<para>
Otherwise, you can add a <literal>.patch</literal> file to the <literal>nixpkgs</literal> repository. In the interest of keeping our maintenance burden to a minimum, only patches that are unique to <literal>nixpkgs</literal> should be added in this way.
</para>
<para>
<programlisting>
patches = [ ./0001-changes.patch ];
</programlisting>
</para>
<para>
If you do need to create this sort of patch file, one way to do so is with git:
<orderedlist>
<listitem>
<para>
Move to the root directory of the source code you're patching.
<screen>
<prompt>$ </prompt>cd the/program/source</screen>
</para>
</listitem>
<listitem>
<para>
If a git repository is not already present, create one and stage all of the source files.
<screen>
<prompt>$ </prompt>git init
<prompt>$ </prompt>git add .</screen>
</para>
</listitem>
<listitem>
<para>
Edit some files to make whatever changes need to be included in the patch.
</para>
</listitem>
<listitem>
<para>
Use git to create a diff, and pipe the output to a patch file:
<screen>
<prompt>$ </prompt>git diff > nixpkgs/pkgs/the/package/0001-changes.patch</screen>
</para>
</listitem>
</orderedlist>
</para>
</section>
</chapter>

View File

@ -0,0 +1,30 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-contributing">
<title>Contributing to this documentation</title>
<para>
The DocBook sources of the Nixpkgs manual are in the <filename
xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename> subdirectory of the Nixpkgs repository.
</para>
<para>
You can quickly check your edits with <command>make</command>:
</para>
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make
</screen>
<para>
If you experience problems, run <command>make debug</command> to help understand the docbook errors.
</para>
<para>
After making modifications to the manual, it's important to build it before committing. You can do that as follows:
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make clean
<prompt>[nix-shell]$ </prompt>nix-build .
</screen>
If the build succeeds, the manual will be in <filename>./result/share/doc/nixpkgs/manual.html</filename>.
</para>
</chapter>

View File

@ -0,0 +1,152 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-quick-start">
<title>Quick Start to Adding a Package</title>
<para>
To add a package to Nixpkgs:
<orderedlist>
<listitem>
<para>
Checkout the Nixpkgs source tree:
<screen>
<prompt>$ </prompt>git clone https://github.com/NixOS/nixpkgs
<prompt>$ </prompt>cd nixpkgs</screen>
</para>
</listitem>
<listitem>
<para>
Find a good place in the Nixpkgs tree to add the Nix expression for your package. For instance, a library package typically goes into <filename>pkgs/development/libraries/<replaceable>pkgname</replaceable></filename>, while a web browser goes into <filename>pkgs/applications/networking/browsers/<replaceable>pkgname</replaceable></filename>. See <xref linkend="sec-organisation" /> for some hints on the tree organisation. Create a directory for your package, e.g.
<screen>
<prompt>$ </prompt>mkdir pkgs/development/libraries/libfoo</screen>
</para>
</listitem>
<listitem>
<para>
In the package directory, create a Nix expression — a piece of code that describes how to build the package. In this case, it should be a <emphasis>function</emphasis> that is called with the package dependencies as arguments, and returns a build of the package in the Nix store. The expression should usually be called <filename>default.nix</filename>.
<screen>
<prompt>$ </prompt>emacs pkgs/development/libraries/libfoo/default.nix
<prompt>$ </prompt>git add pkgs/development/libraries/libfoo/default.nix</screen>
</para>
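<para>
As a purely illustrative sketch of what such a function can look like (the URL and hash below are placeholders, and real packages will usually need more attributes):
</para>
<programlisting>
{ stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "libfoo";
  version = "1.2.3";

  src = fetchurl {
    # Placeholder URL and hash for illustration only.
    url = "https://example.org/libfoo-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
</programlisting>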
<para>
You can have a look at the existing Nix expressions under <filename>pkgs/</filename> to see how it's done. Here are some good ones:
<itemizedlist>
<listitem>
<para>
GNU Hello: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/hello/default.nix"><filename>pkgs/applications/misc/hello/default.nix</filename></link>. Trivial package, which specifies some <varname>meta</varname> attributes which is good practice.
</para>
</listitem>
<listitem>
<para>
GNU cpio: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/archivers/cpio/default.nix"><filename>pkgs/tools/archivers/cpio/default.nix</filename></link>. Also a simple package. The generic builder in <varname>stdenv</varname> does everything for you. It has no dependencies beyond <varname>stdenv</varname>.
</para>
</listitem>
<listitem>
<para>
GNU Multiple Precision arithmetic library (GMP): <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/gmp/5.1.x.nix"><filename>pkgs/development/libraries/gmp/5.1.x.nix</filename></link>. Also done by the generic builder, but has a dependency on <varname>m4</varname>.
</para>
</listitem>
<listitem>
<para>
Pan, a GTK-based newsreader: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/newsreaders/pan/default.nix"><filename>pkgs/applications/networking/newsreaders/pan/default.nix</filename></link>. Has an optional dependency on <varname>gtkspell</varname>, which is only built if <varname>spellCheck</varname> is <literal>true</literal>.
</para>
</listitem>
<listitem>
<para>
Apache HTTPD: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/http/apache-httpd/2.4.nix"><filename>pkgs/servers/http/apache-httpd/2.4.nix</filename></link>. A bunch of optional features, variable substitutions in the configure flags, a post-install hook, and miscellaneous hackery.
</para>
</listitem>
<listitem>
<para>
Thunderbird: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/mailreaders/thunderbird/default.nix"><filename>pkgs/applications/networking/mailreaders/thunderbird/default.nix</filename></link>. Lots of dependencies.
</para>
</listitem>
<listitem>
<para>
JDiskReport, a Java utility: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/misc/jdiskreport/default.nix"><filename>pkgs/tools/misc/jdiskreport/default.nix</filename></link>. Nixpkgs doesnt have a decent <varname>stdenv</varname> for Java yet so this is pretty ad-hoc.
</para>
</listitem>
<listitem>
<para>
XML::Simple, a Perl module: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link> (search for the <varname>XMLSimple</varname> attribute). Most Perl modules are so simple to build that they are defined directly in <filename>perl-packages.nix</filename>; no need to make a separate file for them.
</para>
</listitem>
<listitem>
<para>
Adobe Reader: <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/adobe-reader/default.nix"><filename>pkgs/applications/misc/adobe-reader/default.nix</filename></link>. Shows how binary-only packages can be supported. In particular the <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/adobe-reader/builder.sh">builder</link> uses <command>patchelf</command> to set the RUNPATH and ELF interpreter of the executables so that the right libraries are found at runtime.
</para>
</listitem>
</itemizedlist>
</para>
<para>
Some notes:
<itemizedlist>
<listitem>
<para>
All <varname linkend="chap-meta">meta</varname> attributes are optional, but it's still a good idea to provide at least the <varname>description</varname>, <varname>homepage</varname> and <varname linkend="sec-meta-license">license</varname> (a sketch of such a block follows this list).
</para>
</listitem>
<listitem>
<para>
You can use <command>nix-prefetch-url</command> <replaceable>url</replaceable> to get the SHA-256 hash of source distributions. Similar commands, such as <command>nix-prefetch-git</command> and <command>nix-prefetch-hg</command>, are available in the <literal>nix-prefetch-scripts</literal> package.
</para>
</listitem>
<listitem>
<para>
A list of schemes for <literal>mirror://</literal> URLs can be found in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/fetchurl/mirrors.nix"><filename>pkgs/build-support/fetchurl/mirrors.nix</filename></link>.
</para>
</listitem>
</itemizedlist>
</para>
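<para>
As a hedged illustration (all values below are placeholders), such a <varname>meta</varname> block might look like:
</para>
<programlisting>
meta = with stdenv.lib; {
  description = "A library that does foo";
  homepage = "https://example.org/libfoo";
  license = licenses.gpl3;
  maintainers = [ maintainers.yourGitHubHandle ]; # placeholder handle
};
</programlisting>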
<para>
The exact syntax and semantics of the Nix expression language, including the built-in functions, are described in the Nix manual in the <link
xlink:href="http://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter on writing Nix expressions</link>.
</para>
</listitem>
<listitem>
<para>
Add a call to the function defined in the previous step to <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/all-packages.nix"><filename>pkgs/top-level/all-packages.nix</filename></link> with some descriptive name for the variable, e.g. <varname>libfoo</varname>.
<screen>
<prompt>$ </prompt>emacs pkgs/top-level/all-packages.nix</screen>
</para>
<para>
The attributes in that file are sorted by category (like “Development / Libraries”) that more-or-less correspond to the directory structure of Nixpkgs, and then by attribute name.
</para>
</listitem>
<listitem>
<para>
To test whether the package builds, run the following command from the root of the nixpkgs source tree:
<screen>
<prompt>$ </prompt>nix-build -A libfoo</screen>
where <varname>libfoo</varname> should be the variable name defined in the previous step. You may want to add the flag <option>-K</option> to keep the temporary build directory in case something fails. If the build succeeds, a symlink <filename>./result</filename> to the package in the Nix store is created.
</para>
</listitem>
<listitem>
<para>
If you want to install the package into your profile (optional), do
<screen>
<prompt>$ </prompt>nix-env -f . -iA libfoo</screen>
</para>
</listitem>
<listitem>
<para>
Optionally commit the new package and open a pull request <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls">to nixpkgs</link>, or use <link
xlink:href="https://discourse.nixos.org/t/about-the-patches-category/477"> the Patches category</link> on Discourse for sending a patch without a GitHub account.
</para>
</listitem>
</orderedlist>
</para>
</chapter>

View File

@ -0,0 +1,536 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="chap-reviewing-contributions">
<title>Reviewing contributions</title>
<warning>
<para>
The following section is a draft, and the policy for reviewing is still being discussed in issues such as <link
xlink:href="https://github.com/NixOS/nixpkgs/issues/11166">#11166 </link> and <link
xlink:href="https://github.com/NixOS/nixpkgs/issues/20836">#20836 </link>.
</para>
</warning>
<para>
The Nixpkgs project receives a fairly high number of contributions via GitHub pull requests. Reviewing and approving these is an important task and a way to contribute to the project.
</para>
<para>
The high change rate of Nixpkgs makes any pull request that remains open for too long subject to conflicts that will require extra work from the submitter or the merger. Reviewing pull requests in a timely manner and being responsive to the comments is the key to avoid this issue. GitHub provides sort filters that can be used to see the <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most recently</link> and the <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-asc">least recently</link> updated pull requests. We highly encourage looking at <link xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+review%3Anone+status%3Asuccess+-label%3A%222.status%3A+work-in-progress%22+no%3Aproject+no%3Aassignee+no%3Amilestone"> this list of ready to merge, unreviewed pull requests</link>.
</para>
<para>
When reviewing a pull request, please always be nice and polite. Controversial changes can lead to controversial opinions, but it is important to respect every community member and their work.
</para>
<para>
GitHub provides reactions as a simple and quick way to provide feedback to pull requests or any comments. The thumb-down reaction should be used with care and if possible accompanied with some explanation so the submitter has directions to improve their contribution.
</para>
<para>
Pull request reviews should include a list of what has been reviewed in a comment, so other reviewers and mergers can know the state of the review.
</para>
<para>
All the review template samples provided in this section are generic and meant as examples. Their usage is optional and the reviewer is free to adapt them to their liking.
</para>
<section xml:id="reviewing-contributions-package-updates">
<title>Package updates</title>
<para>
A package update is the most trivial and common type of pull request. These pull requests mainly consist of updating the version part of the package name and the source hash.
</para>
<para>
It can happen that non-trivial updates include patches or more complex changes.
</para>
<para>
Reviewing process:
</para>
<itemizedlist>
<listitem>
<para>
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: package (update)</literal> and any topic label that fits the updated package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the package versioning fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the commit text fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the package maintainers are notified.
</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link> will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the meta field information is correct.
</para>
<itemizedlist>
<listitem>
<para>
License can change with version updates, so it should be checked to match the upstream license.
</para>
</listitem>
<listitem>
<para>
If the package has no maintainer, a maintainer must be set. This can be the update submitter or a community member that agrees to take over maintainership of the package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the code contains no typos.
</para>
</listitem>
<listitem>
<para>
Building the package locally.
</para>
<itemizedlist>
<listitem>
<para>
Pull requests are often targeted at the master or staging branch, and building the pull request locally when it is submitted can trigger many source builds.
</para>
<para>
It is possible to rebase the changes on nixos-unstable or nixpkgs-unstable for easier review by running the following commands from a nixpkgs clone.
<screen>
<prompt>$ </prompt>git fetch origin nixos-unstable <co xml:id='reviewing-rebase-2' />
<prompt>$ </prompt>git fetch origin pull/PRNUMBER/head <co xml:id='reviewing-rebase-3' />
<prompt>$ </prompt>git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
xml:id='reviewing-rebase-4' />
</screen>
<calloutlist>
<callout arearefs='reviewing-rebase-2'>
<para>
Fetching the nixos-unstable branch.
</para>
</callout>
<callout arearefs='reviewing-rebase-3'>
<para>
Fetching the pull request changes, <varname>PRNUMBER</varname> is the number at the end of the pull request title and <varname>BASEBRANCH</varname> the base branch of the pull request.
</para>
</callout>
<callout arearefs='reviewing-rebase-4'>
<para>
Rebasing the pull request changes to the nixos-unstable branch.
</para>
</callout>
</calloutlist>
</para>
</listitem>
<listitem>
<para>
The <link xlink:href="https://github.com/Mic92/nixpkgs-review">nixpkgs-review</link> tool can be used to review a pull request content in a single command. <varname>PRNUMBER</varname> should be replaced by the number at the end of the pull request title. You can also provide the full github pull request url.
</para>
<screen>
<prompt>$ </prompt>nix-shell -p nixpkgs-review --run "nixpkgs-review pr PRNUMBER"
</screen>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Running every binary.
</para>
</listitem>
</itemizedlist>
<example xml:id="reviewing-contributions-sample-package-update">
<title>Sample template for a package update review</title>
<screen>
##### Reviewed points
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] all depending packages build
##### Possible improvements
##### Comments
</screen>
</example>
</section>
<section xml:id="reviewing-contributions-new-packages">
<title>New packages</title>
<para>
New packages are a common type of pull request. These pull requests consist of adding a new Nix expression for a package.
</para>
<para>
Reviewing process:
</para>
<itemizedlist>
<listitem>
<para>
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: package (new)</literal> and any topic label that fits the new package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the package versioning fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the commit name fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the meta field contains correct information.
</para>
<itemizedlist>
<listitem>
<para>
The license must be checked to match the upstream license.
</para>
</listitem>
<listitem>
<para>
Platforms should be set or the package will not get binary substitutes.
</para>
</listitem>
<listitem>
<para>
A maintainer must be set. This can be the package submitter or a community member that agrees to take over maintainership of the package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the code contains no typos.
</para>
</listitem>
<listitem>
<para>
Ensure that the package source is fetched appropriately.
</para>
<itemizedlist>
<listitem>
<para>
Mirror URLs should be used when available.
</para>
</listitem>
<listitem>
<para>
The most appropriate function should be used (e.g. packages from GitHub should use <literal>fetchFromGitHub</literal>).
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Building the package locally.
</para>
</listitem>
<listitem>
<para>
Running every binary.
</para>
</listitem>
</itemizedlist>
<example xml:id="reviewing-contributions-sample-new-package">
<title>Sample template for a new package review</title>
<screen>
##### Reviewed points
- [ ] package path fits guidelines
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] `meta.description` is set and fits guidelines
- [ ] `meta.license` fits upstream license
- [ ] `meta.platforms` is set
- [ ] `meta.maintainers` is set
- [ ] build time only dependencies are declared in `nativeBuildInputs`
- [ ] source is fetched using the appropriate function
- [ ] phases are respected
- [ ] patches that are remotely available are fetched with `fetchpatch`
##### Possible improvements
##### Comments
</screen>
</example>
</section>
<section xml:id="reviewing-contributions-module-updates">
<title>Module updates</title>
<para>
Module updates are submissions that change modules in some way. These often contain changes to existing options or introduce new ones.
</para>
<para>
Reviewing process
</para>
<itemizedlist>
<listitem>
<para>
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: module (update)</literal> and any topic label that fits the module.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the module maintainers are notified.
</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link> will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the module tests, if any, are succeeding.
</para>
</listitem>
<listitem>
<para>
Ensure that the introduced options are correct.
</para>
<itemizedlist>
<listitem>
<para>
Type should be appropriate (string-related types differ in their merging capabilities; the <literal>optionSet</literal> and <literal>string</literal> types are deprecated).
</para>
</listitem>
<listitem>
<para>
Description, default and example should be provided.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that option changes are backward compatible.
</para>
<itemizedlist>
<listitem>
<para>
The <literal>mkRenamedOptionModule</literal> and <literal>mkAliasOptionModule</literal> functions provide a way to make option changes backward compatible (see the sketch after this list).
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that removed options are declared with <literal>mkRemovedOptionModule</literal>.
</para>
</listitem>
<listitem>
<para>
Ensure that changes that are not backward compatible are mentioned in release notes.
</para>
</listitem>
<listitem>
<para>
Ensure that documentation affected by the change is updated.
</para>
</listitem>
</itemizedlist>
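<para>
As a hedged illustration (the option paths below are hypothetical), such a rename can be declared in a module like this:
</para>
<programlisting>
imports = [
  # mkRenamedOptionModule comes from lib (often brought into scope with `with lib;`)
  (mkRenamedOptionModule [ "services" "foo" "enableBar" ] [ "services" "foo" "bar" "enable" ])
];
</programlisting>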
<example xml:id="reviewing-contributions-sample-module-update">
<title>Sample template for a module update review</title>
<screen>
##### Reviewed points
- [ ] changes are backward compatible
- [ ] removed options are declared with `mkRemovedOptionModule`
- [ ] changes that are not backward compatible are documented in release notes
- [ ] module tests succeed on ARCHITECTURE
- [ ] options types are appropriate
- [ ] options description is set
- [ ] options example is provided
- [ ] documentation affected by the changes is updated
##### Possible improvements
##### Comments
</screen>
</example>
</section>
<section xml:id="reviewing-contributions-new-modules">
<title>New modules</title>
<para>
New module submissions introduce a new module to NixOS.
</para>
<itemizedlist>
<listitem>
<para>
Add labels to the pull request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: module (new)</literal> and any topic label that fits the module.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the module tests, if any, are succeeding.
</para>
</listitem>
<listitem>
<para>
Ensure that the introduced options are correct.
</para>
<itemizedlist>
<listitem>
<para>
Type should be appropriate (string-related types differ in their merging capabilities; the <literal>optionSet</literal> and <literal>string</literal> types are deprecated).
</para>
</listitem>
<listitem>
<para>
Description, default and example should be provided.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the module's <literal>meta</literal> field is present.
</para>
<itemizedlist>
<listitem>
<para>
Maintainers should be declared in <literal>meta.maintainers</literal>.
</para>
</listitem>
<listitem>
<para>
Module documentation should be declared with <literal>meta.doc</literal>.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Ensure that the module respects other modules' functionality.
</para>
<itemizedlist>
<listitem>
<para>
For example, enabling a module should not open firewall ports by default.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<example xml:id="reviewing-contributions-sample-new-module">
<title>Sample template for a new module review</title>
<screen>
##### Reviewed points
- [ ] module path fits the guidelines
- [ ] module tests succeed on ARCHITECTURE
- [ ] options have appropriate types
- [ ] options have default
- [ ] options have example
- [ ] options have descriptions
- [ ] No unneeded package is added to environment.systemPackages
- [ ] meta.maintainers is set
- [ ] module documentation is declared in meta.doc
##### Possible improvements
##### Comments
</screen>
</example>
</section>
<section xml:id="reviewing-contributions-other-submissions">
<title>Other submissions</title>
<para>
Other types of submissions require different reviewing steps.
</para>
<para>
If you consider yourself to have enough knowledge and experience in a topic and would like to become a long-term reviewer for related submissions, please contact the current reviewers for that topic. They will give you information about the reviewing process. The main reviewers for a topic can be hard to find as there is no list, but checking past pull requests to see who reviewed or git-blaming the code to see who committed to that topic can give some hints.
</para>
<para>
Container system, boot system and library changes are some examples of the pull requests fitting this category.
</para>
</section>
<section xml:id="reviewing-contributions--merging-pull-requests">
<title>Merging pull requests</title>
<para>
It is possible for community members that have enough knowledge and experience on a specific topic to contribute by merging pull requests.
</para>
<para>
TODO: add the procedure to request merging rights.
</para>
<!--
The following paragraph about how to deal with unactive contributors is just a
proposition and should be modified to what the community agrees to be the right
policy.
<para>Please note that contributors with commit rights unactive for more than
three months will have their commit rights revoked.</para>
-->
<para>
In case a contributor definitively leaves the Nix community, they should create an issue or post on <link
xlink:href="https://discourse.nixos.org">Discourse</link> with references to the packages and modules they maintain so the maintainership can be taken over by other contributors.
</para>
</section>
</chapter>

View File

@ -0,0 +1,431 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-submitting-changes">
<title>Submitting changes</title>
<section xml:id="submitting-changes-making-patches">
<title>Making patches</title>
<itemizedlist>
<listitem>
<para>
Read <link xlink:href="https://nixos.org/nixpkgs/manual/">Manual (How to write packages for Nix)</link>.
</para>
</listitem>
<listitem>
<para>
Fork <link xlink:href="https://github.com/nixos/nixpkgs/">the Nixpkgs repository</link> on GitHub.
</para>
</listitem>
<listitem>
<para>
Create a branch for your future fix.
<itemizedlist>
<listitem>
<para>
You can make a branch from a commit of your local <command>nixos-version</command>. That will help you avoid additional local compilation, because you will receive packages from the binary cache. For example:
<screen>
<prompt>$ </prompt>nixos-version --hash
0998212
<prompt>$ </prompt>git checkout 0998212
<prompt>$ </prompt>git checkout -b 'fix/pkg-name-update'
</screen>
</para>
</listitem>
<listitem>
<para>
Please avoid working directly on the <command>master</command> branch.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
Make commits of logical units.
</para>
</listitem>
<listitem>
<para>
If you removed packages or made some major NixOS changes, write about it in the release notes for the next stable release. For example <command>nixos/doc/manual/release-notes/rl-2003.xml</command>.
</para>
</listitem>
<listitem>
<para>
Check for unnecessary whitespace with <command>git diff --check</command> before committing.
</para>
</listitem>
<listitem>
<para>
Format the commit message in the following way:
</para>
<programlisting>
(pkg-name | nixos/&lt;module>): (from -> to | init at version | refactor | etc)

Additional information.
</programlisting>
<itemizedlist>
<listitem>
<para>
Examples:
<itemizedlist>
<listitem>
<para>
<command>nginx: init at 2.0.1</command>
</para>
</listitem>
<listitem>
<para>
<command>firefox: 54.0.1 -> 55.0</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/hydra: add bazBaz option</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/nginx: refactor config generation</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Test your changes. If you work with
<itemizedlist>
<listitem>
<para>
nixpkgs:
<itemizedlist>
<listitem>
<para>
update pkg ->
<itemizedlist>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
add pkg ->
<itemizedlist>
<listitem>
<para>
Make sure it's in <command>pkgs/top-level/all-packages.nix</command>
</para>
</listitem>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
<emphasis>If you don't want to install pkg in your profile</emphasis>.
<itemizedlist>
<listitem>
<para>
<command>nix-build -A pkg-attribute-name &lt;path to your local nixpkgs folder&gt;/default.nix</command> and check the results in the folder <command>result</command>, which will appear in the same directory where you ran <command>nix-build</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
If you did <command>nix-env -i pkg-name</command> you can do <command>nix-env -e pkg-name</command> to uninstall it from your system.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
NixOS and its modules:
<itemizedlist>
<listitem>
<para>
You can add a new module to your NixOS configuration file (usually <command>/etc/nixos/configuration.nix</command>) and run <command>sudo nixos-rebuild test -I nixpkgs=&lt;path to your local nixpkgs folder&gt; --fast</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
If you have commits like <command>pkg-name: oh, forgot to insert whitespace</command>, squash them with <command>git rebase -i</command>; see the example below.
</para>
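<para>
For example, to interactively squash the last two commits into one:
</para>
<screen>
<prompt>$ </prompt>git rebase -i HEAD~2
</screen>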
</listitem>
<listitem>
<para>
<link xlink:href="https://git-scm.com/book/en/v2/Git-Branching-Rebasing">Rebase</link> your branch against current <command>master</command>.
</para>
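<para>
For example, assuming your remote for the upstream NixOS/nixpkgs repository is called <literal>upstream</literal>:
</para>
<screen>
<prompt>$ </prompt>git fetch upstream
<prompt>$ </prompt>git rebase upstream/master
</screen>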
</listitem>
</itemizedlist>
</section>
<section xml:id="submitting-changes-submitting-changes">
<title>Submitting changes</title>
<itemizedlist>
<listitem>
<para>
Push your changes to your fork of nixpkgs.
</para>
</listitem>
<listitem>
<para>
Create the pull request.
</para>
</listitem>
<listitem>
<para>
Follow <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md#submitting-changes">the contribution guidelines</link>.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="submitting-changes-submitting-security-fixes">
<title>Submitting security fixes</title>
<para>
Security fixes are submitted in the same way as other changes and thus the same guidelines apply.
</para>
<para>
If the security fix comes in the form of a patch and a CVE is available, then the name of the patch should be the CVE identifier, so e.g. <literal>CVE-2019-13636.patch</literal> in the case of a patch that is included in the Nixpkgs tree. If a patch is fetched the name needs to be set as well, e.g.:
</para>
<programlisting>
(fetchpatch {
name = "CVE-2019-11068.patch";
url = "https://gitlab.gnome.org/GNOME/libxslt/commit/e03553605b45c88f0b4b2980adfbbb8f6fca2fd6.patch";
sha256 = "0pkpb4837km15zgg6h57bncp66d5lwrlvkr73h0lanywq7zrwhj8";
})
</programlisting>
<para>
If a security fix applies to both master and a stable release then, similar to regular changes, they are preferably delivered via master first and cherry-picked to the release branch.
</para>
<para>
Critical security fixes may bypass the staging branches and be delivered directly to release branches such as <literal>master</literal> and <literal>release-*</literal>.
</para>
</section>
<section xml:id="submitting-changes-pull-request-template">
<title>Pull Request Template</title>
<para>
The pull request template helps determine what steps have been taken for a contribution so far, and will help guide maintainers on the status of a change. The motivation section of the PR should include any extra details the title does not address and link any existing issues related to the pull request.
</para>
<para>
When a PR is created, it will be pre-populated with some checkboxes detailed below:
</para>
<section xml:id="submitting-changes-tested-with-sandbox">
<title>Tested using sandboxing</title>
<para>
When sandbox builds are enabled, Nix sets up an isolated environment for each build process. This removes further hidden dependencies introduced by the build environment, improving reproducibility: during the build, access to the network (outside of <function>fetch*</function> functions) and to files outside the Nix store is blocked. Depending on the operating system, access to other resources is blocked as well (e.g. inter-process communication is isolated on Linux); see <link
xlink:href="https://nixos.org/nix/manual/#conf-sandbox">sandbox</link> in the Nix manual for details.
</para>
<para>
Sandboxing is not enabled by default in Nix due to a small performance hit on each build. In pull requests for <link
xlink:href="https://github.com/NixOS/nixpkgs/">nixpkgs</link> people are asked to test builds with sandboxing enabled (see <literal>Tested using sandboxing</literal> in the pull request template) because <link
xlink:href="https://nixos.org/hydra/">Hydra</link> also builds with sandboxing enabled.
</para>
<para>
Depending on whether you use NixOS or another platform, you can use one of the following methods to enable sandboxing <emphasis role="bold">before</emphasis> building the package:
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Globally enable sandboxing on NixOS</emphasis>: add the following to <filename>configuration.nix</filename>
<screen>nix.useSandbox = true;</screen>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Globally enable sandboxing on non-NixOS platforms</emphasis>: add the following to: <filename>/etc/nix/nix.conf</filename>
<screen>sandbox = true</screen>
</para>
</listitem>
</itemizedlist>
</para>
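<para>
If you only want to verify a single build with sandboxing without changing your global configuration, you can also pass the setting on the command line, e.g. (a sketch; <literal>pkg-attribute-name</literal> is a placeholder, and on a multi-user installation the daemon may ignore the option unless you are a trusted user):
<screen><prompt>$ </prompt>nix-build &lt;path to your local nixpkgs folder&gt; -A pkg-attribute-name --option sandbox true</screen>
</para>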
</section>
<section xml:id="submitting-changes-platform-diversity">
<title>Built on platform(s)</title>
<para>
Many Nix packages are designed to run on multiple platforms. As such, it's important to let the maintainer know which platforms your changes have been tested on. It's not always practical to test a change on all platforms, and is not required for a pull request to be merged. Only check the systems you tested the build on in this section.
</para>
</section>
<section xml:id="submitting-changes-nixos-tests">
<title>Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)</title>
<para>
Packages with automated tests are much more likely to be merged in a timely fashion because they require less manual testing by the maintainer to verify the functionality of the package. If there are existing tests for the package, they should be run to verify your changes do not break the tests. Tests only apply to packages with NixOS modules defined and can only be run on Linux. For more details on writing and running tests, see the <link
xlink:href="https://nixos.org/nixos/manual/index.html#sec-nixos-tests">section in the NixOS manual</link>.
</para>
</section>
<section xml:id="submitting-changes-tested-compilation">
<title>Tested compilation of all pkgs that depend on this change using <command>nixpkgs-review</command></title>
<para>
If you are updating a package's version, you can use <command>nixpkgs-review</command> to make sure all packages that depend on the updated package still compile correctly. The <command>nixpkgs-review</command> utility can look for and build all dependent packages either based on uncommitted changes with the <literal>wip</literal> option or by specifying a GitHub pull request number.
</para>
<para>
Review changes from pull request number 12345:
<screen>nix run nixpkgs.nixpkgs-review -c nixpkgs-review pr 12345</screen>
</para>
<para>
Review uncommitted changes:
<screen>nix run nixpkgs.nixpkgs-review -c nixpkgs-review wip</screen>
</para>
<para>
Review changes from the last commit:
<screen>nix run nixpkgs.nixpkgs-review -c nixpkgs-review rev HEAD</screen>
</para>
</section>
<section xml:id="submitting-changes-tested-execution">
<title>Tested execution of all binary files (usually in <filename>./result/bin/</filename>)</title>
<para>
It's important to test any executables generated by a build when you change or create a package in nixpkgs. This can be done by looking in <filename>./result/bin</filename> and running any files in there, or at a minimum, the main executable for the package. For example, if you make a change to <package>texlive</package>, you probably would only check the binaries associated with the change you made rather than testing all of them.
</para>
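<para>
For example, after building a package whose main program is called <command>hello</command> (a placeholder name), a quick smoke test could look like this:
<screen>
<prompt>$ </prompt>ls ./result/bin
hello
<prompt>$ </prompt>./result/bin/hello --version</screen>
</para>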
</section>
<section xml:id="submitting-changes-contribution-standards">
<title>Meets Nixpkgs contribution standards</title>
<para>
The last checkbox asks whether your change meets the standards described in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md">CONTRIBUTING.md</link>. The contributing document has detailed information on the standards the Nix community has for commit messages, reviews, licensing of contributions, and so on. Everyone should read and understand these standards before submitting a pull request.
</para>
</section>
</section>
<section xml:id="submitting-changes-hotfixing-pull-requests">
<title>Hotfixing pull requests</title>
<itemizedlist>
<listitem>
<para>
Make the appropriate changes in your branch.
</para>
</listitem>
<listitem>
<para>
Don't create additional commits; instead, fold the fix into your existing commits and force-push (see the example after this list):
<itemizedlist>
<listitem>
<para>
<command>git rebase -i</command>
</para>
</listitem>
<listitem>
<para>
<command>git push --force</command> to your branch.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
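<para>
A minimal sketch of this workflow, assuming your fork is configured as the <literal>origin</literal> remote and your branch is called <literal>my-fix</literal> (both names are placeholders):
<screen>
<prompt>$ </prompt>git rebase -i master
<prompt>$ </prompt>git push --force origin my-fix</screen>
</para>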
</section>
<section xml:id="submitting-changes-commit-policy">
<title>Commit policy</title>
<itemizedlist>
<listitem>
<para>
Commits must be sufficiently tested before being merged, both for the master and staging branches.
</para>
</listitem>
<listitem>
<para>
Hydra builds for master and staging should not be used as a testing platform; Hydra is a build farm for changes that have already been tested.
</para>
</listitem>
<listitem>
<para>
When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people's installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from @edolstra.
</para>
</listitem>
</itemizedlist>
<section xml:id="submitting-changes-master-branch">
<title>Master branch</title>
<para>
The <literal>master</literal> branch is the main development branch.
It should only see non-breaking commits that do not cause mass rebuilds.
</para>
</section>
<section xml:id="submitting-changes-staging-branch">
<title>Staging branch</title>
<para>
The <literal>staging</literal> branch is a development branch where mass-rebuilds go.
It should only see non-breaking mass-rebuild commits.
That means it is not to be used for testing, and changes must have been well tested already.
If the branch is already in a broken state, please refrain from adding extra new breakages.
</para>
</section>
<section xml:id="submitting-changes-staging-next-branch">
<title>Staging-next branch</title>
<para>
The <literal>staging-next</literal> branch is for stabilizing mass-rebuilds submitted to the <literal>staging</literal> branch prior to merging them into <literal>master</literal>.
Mass-rebuilds should go via the <literal>staging</literal> branch.
It should only see non-breaking commits that are fixing issues blocking it from being merged into the <literal>master</literal> branch.
</para>
<para>
If the branch is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days and then merge into master.
</para>
</section>
<section xml:id="submitting-changes-stable-release-branches">
<title>Stable release branches</title>
<itemizedlist>
<listitem>
<para>
If you're cherry-picking a commit to a stable release branch (“backporting”), always use <command>git cherry-pick -xe</command> and ensure the message contains a clear description about why this needs to be included in the stable branch.
</para>
<para>
An example of a cherry-picked commit would look like this:
</para>
<screen>
nixos: Refactor the world.

The original commit message describing the reason why the world was torn apart.

(cherry picked from commit abcdef)
Reason: I just had a gut feeling that this would also be wanted by people from
the stone age.
</screen>
</listitem>
</itemizedlist>
</section>
</section>
</chapter>

View File

@ -1,678 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-cross">
<title>Cross-compilation</title>
<section xml:id="sec-cross-intro">
<title>Introduction</title>
<para>
"Cross-compilation" means compiling a program on one machine for another
type of machine. For example, a typical use of cross-compilation is to
compile programs for embedded devices. These devices often don't have the
computing power and memory to compile their own programs. One might think
that cross-compilation is a fairly niche concern. However, there are
significant advantages to rigorously distinguishing between build-time and
run-time environments! Significant, because the benefits apply even when one
is developing and deploying on the same machine. Nixpkgs is increasingly
adopting the opinion that packages should be written with cross-compilation
in mind, and nixpkgs should evaluate in a similar way (by minimizing
cross-compilation-specific special cases) whether or not one is
cross-compiling.
</para>
<para>
This chapter will be organized in three parts. First, it will describe the
basics of how to package software in a way that supports cross-compilation.
Second, it will describe how to use Nixpkgs when cross-compiling. Third, it
will describe the internal infrastructure supporting cross-compilation.
</para>
</section>
<!--============================================================-->
<section xml:id="sec-cross-packaging">
<title>Packaging in a cross-friendly manner</title>
<section xml:id="ssec-cross-platform-parameters">
<title>Platform parameters</title>
<para>
Nixpkgs follows the
<link
xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">conventions
of GNU autoconf</link>. We distinguish between 3 types of platforms when
building a derivation: <wordasword>build</wordasword>,
<wordasword>host</wordasword>, and <wordasword>target</wordasword>. In
summary, <wordasword>build</wordasword> is the platform on which a package
is being built, <wordasword>host</wordasword> is the platform on which it
will run. The third attribute, <wordasword>target</wordasword>, is relevant
only for certain specific compilers and build tools.
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the
names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>,
and <literal>targetPlatform</literal>. They are always defined as
attributes in the standard environment. That means one can access them
like:
<programlisting>{ stdenv, fooDep, barDep, .. }: ...stdenv.buildPlatform...</programlisting>
.
</para>
<variablelist>
<varlistentry>
<term>
<varname>buildPlatform</varname>
</term>
<listitem>
<para>
The "build platform" is the platform on which a package is built. Once
someone has a built package, or pre-built binary package, the build
platform should not matter and can be ignored.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>hostPlatform</varname>
</term>
<listitem>
<para>
The "host platform" is the platform on which a package will be run. This
is the simplest platform to understand, but also the one with the worst
name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>targetPlatform</varname>
</term>
<listitem>
<para>
The "target platform" attribute is, unlike the other two attributes, not
actually fundamental to the process of building software. Instead, it is
only relevant for compatibility with building certain specific compilers
and build tools. It can be safely ignored for all other packages.
</para>
<para>
The build process of certain compilers is written in such a way that the
compiler resulting from a single build can itself only produce binaries
for a single platform. The task of specifying this single "target
platform" is thus pushed to build time of the compiler. The root cause
of this is that the compiler (which will be run on the host) and the
standard library/runtime (which will be run on the target) are built by
a single build process.
</para>
<para>
There is no fundamental need to think about a single target ahead of
time like this. If the tool supports modular or pluggable backends, both
the need to specify the target at build time and the constraint of
having only a single target disappear. An example of such a tool is
LLVM.
</para>
<para>
Although the existence of a "target platform" is arguably a historical
mistake, it is a common one: examples of tools that suffer from it are
GCC, Binutils, GHC and Autoconf. Nixpkgs tries to avoid sharing in the
mistake where possible. Still, because the concept of a target platform
is so ingrained, it is best to support it as is.
</para>
</listitem>
</varlistentry>
</variablelist>
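<para>
For instance, a package expression can branch on these attributes to adjust its behaviour only when cross-compiling. The following is a minimal sketch rather than an expression taken from Nixpkgs; the package name and the configure flag are placeholders:
</para>
<programlisting>
{ stdenv, lib }:

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;

  # Pass the (hypothetical) flag only when the build and host platforms
  # differ, i.e. when we are actually cross-compiling.
  configureFlags =
    lib.optional (stdenv.buildPlatform != stdenv.hostPlatform)
      "--enable-cross-tweaks";
}
</programlisting>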
<para>
The exact schema these fields follow is a bit ill-defined due to a long and
convoluted evolution, but this is slowly being cleaned up. You can see
examples of ones used in practice in
<literal>lib.systems.examples</literal>; note how they are not all very
consistent. For now, here are a few fields you can count on them containing:
</para>
<variablelist>
<varlistentry>
<term>
<varname>system</varname>
</term>
<listitem>
<para>
This is a two-component shorthand for the platform. Examples of this
would be "x86_64-darwin" and "i686-linux"; see
<literal>lib.systems.doubles</literal> for more. The first component
corresponds to the CPU architecture of the platform and the second to
the operating system of the platform (<literal>[cpu]-[os]</literal>).
This format has built-in support in Nix, such as the
<varname>builtins.currentSystem</varname> impure string.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>config</varname>
</term>
<listitem>
<para>
This is a 3- or 4- component shorthand for the platform. Examples of
this would be <literal>x86_64-unknown-linux-gnu</literal> and
<literal>aarch64-apple-darwin14</literal>. This is a standard format
called the "LLVM target triple", as they are pioneered by LLVM. In the
4-part form, this corresponds to
<literal>[cpu]-[vendor]-[os]-[abi]</literal>. This format is strictly
more informative than the "Nix host double", as the previous format
could analogously be termed. This needs a better name than
<varname>config</varname>!
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>parsed</varname>
</term>
<listitem>
<para>
This is a Nix representation of a parsed LLVM target triple with
white-listed components. This can be specified directly, or actually
parsed from the <varname>config</varname>. See
<literal>lib.systems.parse</literal> for the exact representation.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>libc</varname>
</term>
<listitem>
<para>
This is a string identifying the standard C library used. Valid
identifiers include "glibc" for GNU libc, "libSystem" for Darwin's
Libsystem, and "uclibc" for µClibc. It should probably be refactored to
use the module system, like <varname>parse</varname>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>is*</varname>
</term>
<listitem>
<para>
These predicates are defined in <literal>lib.systems.inspect</literal>,
and slapped onto every platform. They are superior to the ones in
<varname>stdenv</varname> as they force the user to be explicit about
which platform they are inspecting. Please use these instead of those.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>platform</varname>
</term>
<listitem>
<para>
This is, quite frankly, a dumping ground of ad-hoc settings (it's an
attribute set). See <literal>lib.systems.platforms</literal> for
examples—there's hopefully one in there that will work verbatim for
each platform that is working. Please help us triage these flags and
give them better homes!
</para>
</listitem>
</varlistentry>
</variablelist>
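<para>
These fields are easy to explore interactively. A short <command>nix repl</command> session on an x86_64 Linux machine might look like the following (the printed values will differ on other platforms):
<screen>
<prompt>nix-repl&gt; </prompt>:l &lt;nixpkgs&gt;
<prompt>nix-repl&gt; </prompt>stdenv.hostPlatform.system
"x86_64-linux"
<prompt>nix-repl&gt; </prompt>stdenv.hostPlatform.config
"x86_64-unknown-linux-gnu"
<prompt>nix-repl&gt; </prompt>stdenv.hostPlatform.isLinux
true</screen>
</para>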
</section>
<section xml:id="ssec-cross-dependency-categorization">
<title>Theory of dependency categorization</title>
<note>
<para>
This is a rather philosophical description that isn't very
Nixpkgs-specific. For an overview of all the relevant attributes given to
<varname>mkDerivation</varname>, see
<xref
linkend="ssec-stdenv-dependencies"/>. For a description of how
everything is implemented, see
<xref linkend="ssec-cross-dependency-implementation" />.
</para>
</note>
<para>
In this section we explore the relationship between both runtime and
build-time dependencies and the 3 Autoconf platforms.
</para>
<para>
A run time dependency between two packages requires that their host
platforms match. This is directly implied by the meaning of "host platform"
and "runtime dependency": The package dependency exists while both packages
are running on a single host platform.
</para>
<para>
A build time dependency, however, has a shift in platforms between the
depending package and the depended-on package. "build time dependency"
means that to build the depending package we need to be able to run the
depended-on's package. The depending package's build platform is therefore
equal to the depended-on package's host platform.
</para>
<para>
If both the dependency and depending packages aren't compilers or other
machine-code-producing tools, we're done. And indeed
<varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
have covered these simpler build-time and run-time (respectively) changes
for many years. But if the dependency does produce machine code, we might
need to worry about its target platform too. In principle, that target
platform might be any of the depending package's build, host, or target
platforms, but we prohibit dependencies from a "later" platform to an
earlier platform to limit confusion because we've never seen a legitimate
use for them.
</para>
<para>
Finally, if the depending package is a compiler or other
machine-code-producing tool, it might need dependencies that run at "emit
time". This is for compilers that (regrettably) insist on being built
together with their source languages' standard libraries. Assuming build !=
host != target, a run-time dependency of the standard library cannot be run
at the compiler's build time or run time, but only at the run time of code
emitted by the compiler.
</para>
<para>
Putting this all together, that means we have dependencies in the form
"host → target", in at most the following six combinations:
<table>
<caption>Possible dependency types</caption>
<thead>
<tr>
<th>Dependency's host platform</th>
<th>Dependency's target platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>build</td>
<td>build</td>
</tr>
<tr>
<td>build</td>
<td>host</td>
</tr>
<tr>
<td>build</td>
<td>target</td>
</tr>
<tr>
<td>host</td>
<td>host</td>
</tr>
<tr>
<td>host</td>
<td>target</td>
</tr>
<tr>
<td>target</td>
<td>target</td>
</tr>
</tbody>
</table>
</para>
<para>
Some examples will make this table clearer. Suppose there's some package
that is being built with a <literal>(build, host, target)</literal>
platform triple of <literal>(foo, bar, baz)</literal>. If it has a
build-time library dependency, that would be a "host → build" dependency
with a triple of <literal>(foo, foo, *)</literal> (the target platform is
irrelevant). If it needs a compiler to be built, that would be a "build →
host" dependency with a triple of <literal>(foo, foo, *)</literal> (the
target platform is irrelevant). That compiler would be built with another
compiler, also a "build → host" dependency, with a triple of <literal>(foo,
foo, foo)</literal>.
</para>
</section>
<section xml:id="ssec-cross-cookbook">
<title>Cross packaging cookbook</title>
<para>
Some frequently encountered problems when packaging for cross-compilation
should be answered here. Ideally, the information above is exhaustive, so
this section cannot provide any new information, but it is ludicrous and
cruel to expect everyone to spend effort working through the interaction of
many features just to figure out the same answer to the same common
problem. Feel free to add to this list!
</para>
<qandaset>
<qandaentry xml:id="cross-qa-build-c-program-in-build-environment">
<question>
<para>
What if my package's build system needs to build a C program to be run
under the build environment?
</para>
</question>
<answer>
<para>
Add the following to your <function>mkDerivation</function> invocation:
<programlisting>depsBuildBuild = [ buildPackages.stdenv.cc ];</programlisting>
</para>
</answer>
</qandaentry>
<qandaentry xml:id="cross-qa-fails-to-find-ar">
<question>
<para>
My package fails to find <command>ar</command>.
</para>
</question>
<answer>
<para>
Many packages assume that an unprefixed <command>ar</command> is
available, but Nix doesn't provide one. It only provides a prefixed one,
just as it only does for all the other binutils programs. It may be
necessary to patch the package to fix the build system to use a prefixed
<command>ar</command>.
</para>
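<para>
One common workaround, assuming the package's build system honours an <literal>AR</literal> make variable (whether it does depends on the package), is to point it at the prefixed tool explicitly:
<programlisting>makeFlags = [ "AR=${stdenv.cc.targetPrefix}ar" ];</programlisting>
</para>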
</answer>
</qandaentry>
<qandaentry xml:id="cross-testsuite-runs-host-code">
<question>
<para>
My package's testsuite needs to run host platform code.
</para>
</question>
<answer>
<para>
Add the following to your <function>mkDerivation</function> invocation:
<programlisting>doCheck = stdenv.hostPlatform == stdenv.buildPlatform;</programlisting>
</para>
</answer>
</qandaentry>
</qandaset>
</section>
</section>
<!--============================================================-->
<section xml:id="sec-cross-usage">
<title>Cross-building packages</title>
<para>
Nixpkgs can be instantiated with <varname>localSystem</varname> alone, in
which case there is no cross-compiling and everything is built by and for
that system, or also with <varname>crossSystem</varname>, in which case
packages run on the latter, but all building happens on the former. Both
parameters take the same schema as the 3 (build, host, and target) platforms
defined in the previous section. As mentioned above,
<literal>lib.systems.examples</literal> has some platforms which are used as
arguments for these parameters in practice. You can use them
programmatically, or on the command line:
<programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem '(import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz' -A whatever</programlisting>
</para>
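<para>
The same platform examples can be used programmatically, for instance from a small file that is then passed to <command>nix-build</command>. This is a minimal sketch; <literal>aarch64-multiplatform</literal> is just one of the entries in <literal>lib.systems.examples</literal>, and the file name is arbitrary:
</para>
<programlisting>
# cross.nix
import &lt;nixpkgs&gt; {
  crossSystem = (import &lt;nixpkgs/lib&gt;).systems.examples.aarch64-multiplatform;
}
</programlisting>
<para>
Building an attribute from this file, e.g. <command>nix-build cross.nix -A whatever</command>, then cross-compiles that package for the chosen platform.
</para>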
<note>
<para>
Eventually we would like to make these platform examples an unnecessary
convenience so that
<programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem '{ config = "&lt;arch&gt;-&lt;os&gt;-&lt;vendor&gt;-&lt;abi&gt;"; }' -A whatever</programlisting>
works in the vast majority of cases. The problem today is dependencies on
other sorts of configuration which aren't given proper defaults. We rely on
the examples to crudely set those configuration parameters in some
vaguely sane manner on the user's behalf. Issue
<link xlink:href="https://github.com/NixOS/nixpkgs/issues/34274">#34274</link>
tracks this inconvenience along with its root cause in crufty configuration
options.
</para>
</note>
<para>
While one is free to pass both parameters in full, there's a lot of logic to
fill in missing fields. As discussed in the previous section, only one of
<varname>system</varname>, <varname>config</varname>, and
<varname>parsed</varname> is needed to infer the other two. Additionally,
<varname>libc</varname> will be inferred from <varname>parsed</varname>.
Finally, <literal>localSystem.system</literal> is also
<emphasis>impurely</emphasis> inferred based on the platform on which evaluation
occurs. This means it is often not necessary to pass
<varname>localSystem</varname> at all, as in the command-line example in the
previous paragraph.
</para>
<note>
<para>
Many sources (manual, wiki, etc) probably mention passing
<varname>system</varname>, <varname>platform</varname>, along with the
optional <varname>crossSystem</varname> to nixpkgs: <literal>import
&lt;nixpkgs&gt; { system = ..; platform = ..; crossSystem = ..;
}</literal>. Passing those two instead of <varname>localSystem</varname> is
still supported for compatibility, but is discouraged. Indeed, much of the
inference we do for these parameters is motivated by compatibility as much
as convenience.
</para>
</note>
<para>
One would think that <varname>localSystem</varname> and
<varname>crossSystem</varname> overlap horribly with the three
<varname>*Platforms</varname> (<varname>buildPlatform</varname>,
<varname>hostPlatform,</varname> and <varname>targetPlatform</varname>; see
<varname>stage.nix</varname> or the manual). Actually, those identifiers are
purposefully not used here to draw a subtle but important distinction: While
the granularity of having 3 platforms is necessary to properly <emphasis>build</emphasis>
packages, it is overkill for specifying the user's <emphasis>intent</emphasis> when making a
build plan or package set. A simple "build vs deploy" dichotomy is adequate:
the sliding window principle described in the previous section shows how to
interpolate between these two "end points" to get the 3 platform triple
for each bootstrapping stage. That means for any package in a given package
set, even those not bound on the top level but only reachable via
dependencies or <varname>buildPackages</varname>, the three platforms will
be defined as one of <varname>localSystem</varname> or
<varname>crossSystem</varname>, with the former replacing the latter as one
traverses build-time dependencies. A last simple difference is that
<varname>crossSystem</varname> should be null when one doesn't want to
cross-compile, while the <varname>*Platform</varname>s are always non-null.
<varname>localSystem</varname> is always non-null.
</para>
</section>
<!--============================================================-->
<section xml:id="sec-cross-infra">
<title>Cross-compilation infrastructure</title>
<section xml:id="ssec-cross-dependency-implementation">
<title>Implementation of dependencies</title>
<para>
The categories of dependencies developed in
<xref
linkend="ssec-cross-dependency-categorization"/> are specified as
lists of derivations given to <varname>mkDerivation</varname>, as
documented in <xref linkend="ssec-stdenv-dependencies"/>. In short,
each list of dependencies for "host → target" of "foo → bar" is called
<varname>depsFooBar</varname>, with exceptions for backwards
compatibility that <varname>depsBuildHost</varname> is instead called
<varname>nativeBuildInputs</varname> and <varname>depsHostTarget</varname>
is instead called <varname>buildInputs</varname>. Nixpkgs is now structured
so that each <varname>depsFooBar</varname> is automatically taken from
<varname>pkgsFooBar</varname>. (These <varname>pkgsFooBar</varname>s are
quite new, so there is no special case for
<varname>nativeBuildInputs</varname> and <varname>buildInputs</varname>.)
For example, <varname>pkgsBuildHost.gcc</varname> should be used at
build-time, while <varname>pkgsHostTarget.gcc</varname> should be used at
run-time.
</para>
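<para>
As a concrete illustration of these lists, a package expression might categorize its dependencies as follows. This is a hedged sketch with placeholder dependencies, not an expression taken from Nixpkgs:
</para>
<programlisting>
{ stdenv, buildPackages, cmake, openssl }:

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;

  # "build → build" (depsBuildBuild): dependencies whose host and target
  # are both the build platform, e.g. a compiler for helper programs that
  # run during the build.
  depsBuildBuild = [ buildPackages.stdenv.cc ];

  # "build → host" (historically nativeBuildInputs): tools that run at
  # build time, such as build systems.
  nativeBuildInputs = [ cmake ];

  # "host → target" (historically buildInputs): libraries linked into
  # the result and needed at run time on the host platform.
  buildInputs = [ openssl ];
}
</programlisting>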
<para>
Now, for most of Nixpkgs's history, there were no
<varname>pkgsFooBar</varname> attributes, and most packages have not been
refactored to use them explicitly. Prior to those, there were just
<varname>buildPackages</varname>, <varname>pkgs</varname>, and
<varname>targetPackages</varname>. Those are now redefined as aliases to
<varname>pkgsBuildHost</varname>, <varname>pkgsHostTarget</varname>, and
<varname>pkgsTargetTarget</varname>. It is acceptable, even
recommended, to use them for libraries to show that the host platform is
irrelevant.
</para>
<para>
But before that, there was just <varname>pkgs</varname>, even though both
<varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
existed. [Cross barely worked, and those were implemented with some hacks
on <varname>mkDerivation</varname> to override dependencies.] What this
means is the vast majority of packages do not use any explicit package set
to populate their dependencies, just using whatever
<varname>callPackage</varname> gives them even if they do correctly sort
their dependencies into the multiple lists described above. And indeed,
asking that users both sort their dependencies, <emphasis>and</emphasis>
take them from the right attribute set, is both too onerous and redundant,
so the recommended approach (for now) is to continue just categorizing by
list and not using an explicit package set.
</para>
<para>
To make this work, we "splice" together the six
<varname>pkgsFooBar</varname> package sets and have
<varname>callPackage</varname> actually take its arguments from that. This
is currently implemented in <filename>pkgs/top-level/splice.nix</filename>.
<varname>mkDerivation</varname> then, for each dependency attribute, pulls
the right derivation out from the splice. This splicing can be skipped when
not cross-compiling as the package sets are the same, but still is a bit
slow for cross-compiling. We'd like to do something better, but haven't
come up with anything yet.
</para>
</section>
<section xml:id="ssec-bootstrapping">
<title>Bootstrapping</title>
<para>
Each of the package sets described above comes from a single bootstrapping
stage. While <filename>pkgs/top-level/default.nix</filename> coordinates
the composition of stages at a high level,
<filename>pkgs/top-level/stage.nix</filename> "ties the knot" (creates the
fixed point) of each stage. The package sets are defined per-stage however,
so they can be thought of as edges between stages (the nodes) in a graph.
Compositions like <literal>pkgsBuildTarget.targetPackages</literal> can be
thought of as paths in this graph.
</para>
<para>
While there are many package sets, and thus many edges, the stages can also
be arranged in a linear chain. In other words, many of the edges are
redundant as far as connectivity is concerned. This hinges on the type of
bootstrapping we do. Currently for cross it is:
<orderedlist>
<listitem>
<para>
<literal>(native, native, native)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, native, foreign)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, foreign, foreign)</literal>
</para>
</listitem>
</orderedlist>
In each stage, <varname>pkgsBuildHost</varname> refers to the previous
stage, <varname>pkgsBuildBuild</varname> refers to the one before that,
<varname>pkgsHostTarget</varname> refers to the current one, and
<varname>pkgsTargetTarget</varname> refers to the next one. When there is
no previous or next stage, they instead refer to the current stage. Note
how all the invariants regarding the mapping between dependency and depending
packages' build host and target platforms are preserved.
<varname>pkgsBuildTarget</varname> and <varname>pkgsHostHost</varname> are
more complex in that the stage fitting the requirements isn't always a
fixed chain of "prevs" and "nexts" away (modulo the "saturating"
self-references at the ends); we just special-case each of them instead. All the primary
edges are implemented in <filename>pkgs/stdenv/booter.nix</filename>,
and the secondary aliases in <filename>pkgs/top-level/stage.nix</filename>.
</para>
<note>
<para>
Note the native stages are bootstrapped in legacy ways that predate the
current cross implementation. This is why the bootstrapping stages
leading up to the final stages are ignored in the previous paragraph.
</para>
</note>
<para>
If one looks at the 3 platform triples, one can see that they overlap such
that one could put them together into a chain like:
<programlisting>
(native, native, native, foreign, foreign)
</programlisting>
If one imagines the saturating self references at the end being replaced
with infinite stages, and then overlays those platform triples, one ends up
with the infinite tuple:
<programlisting>
(native..., native, native, native, foreign, foreign, foreign...)
</programlisting>
One can then imagine any sequence of platforms such that there are bootstrap
stages with their 3 platforms determined by "sliding a window" that is the
3 tuple through the sequence. This was the original model for
bootstrapping. Without a target platform (assume a better world where all
compilers are multi-target and all standard libraries are built in their
own derivation), this is sufficient. Conversely if one wishes to cross
compile "faster", with a "Canadian Cross" bootstraping stage where
<literal>build != host != target</literal>, more bootstrapping stages are
needed since no sliding window providess the pesky
<varname>pkgsBuildTarget</varname> package set since it skips the Canadian
cross stage's "host".
</para>
<note>
<para>
It is much better to refer to <varname>buildPackages</varname> than
<varname>targetPackages</varname>, or more broadly package sets that do
not mention "target". There are three reasons for this.
</para>
<para>
First, it is because bootstrapping stages do not have a unique
<varname>targetPackages</varname>. For example a <literal>(x86-linux,
x86-linux, arm-linux)</literal> and <literal>(x86-linux, x86-linux,
x86-windows)</literal> package set both have a <literal>(x86-linux,
x86-linux, x86-linux)</literal> package set. Because there is no canonical
<varname>targetPackages</varname> for such a native (<literal>build ==
host == target</literal>) package set, we set their
<varname>targetPackages</varname> to the package set itself, just as <varname>pkgsTargetTarget</varname> falls back to the current stage when there is no next stage.
</para>
<para>
Second, it is because this is a frequent source of hard-to-follow
"infinite recursions" / cycles. When only package sets that don't mention
target are used, the package set forms a directed acyclic graph. This
means that all cycles that exist are confined to one stage. This means
they are a lot smaller, and easier to follow in the code or a backtrace. It
also means they are present in native and cross builds alike, and so more
likely to be caught by CI and other users.
</para>
<para>
Thirdly, it is because everything target-mentioning only exists to
accommodate compilers with lousy build systems that insist on the compiler
itself and standard library being built together. Of course that is bad
because bigger derivations mean longer rebuilds. It is also problematic because
it tends to make the standard libraries less like other libraries than
they could be, complicating code and build systems alike. Because of the
other problems, and because of these innate disadvantages, compilers ought
to be packaged another way where possible.
</para>
</note>
<note>
<para>
If one explores Nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations are
a holdover from before we properly distinguished between the host and
target platforms—the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered
the <literal>host = target</literal>, with build platform the same or not
based on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
</para>
</note>
</section>
</section>
</chapter>

View File

@ -8,7 +8,7 @@
<xsl:param name="html.script" select="'./highlightjs/highlight.pack.js ./highlightjs/loader.js'" />
<xsl:param name="xref.with.number.and.title" select="1" />
<xsl:param name="use.id.as.filename" select="1" />
<xsl:param name="toc.section.depth" select="3" />
<xsl:param name="toc.section.depth" select="0" />
<xsl:param name="admon.style" select="''" />
<xsl:param name="callout.graphics.extension" select="'.svg'" />
</xsl:stylesheet>

View File

@ -4,21 +4,11 @@
xml:id="chap-functions">
<title>Functions reference</title>
<para>
The nixpkgs repository has several utility functions to manipulate Nix
expressions.
The nixpkgs repository has several utility functions to manipulate Nix expressions.
</para>
<xi:include href="functions/library.xml" />
<xi:include href="functions/overrides.xml" />
<xi:include href="functions/generators.xml" />
<xi:include href="functions/debug.xml" />
<xi:include href="functions/fetchers.xml" />
<xi:include href="functions/trivial-builders.xml" />
<xi:include href="functions/fhs-environments.xml" />
<xi:include href="functions/shell.xml" />
<xi:include href="functions/dockertools.xml" />
<xi:include href="functions/snaptools.xml" />
<xi:include href="functions/appimagetools.xml" />
<xi:include href="functions/prefer-remote-fetch.xml" />
<xi:include href="functions/nix-gitignore.xml" />
<xi:include href="functions/ocitools.xml" />
</chapter>

View File

@ -1,118 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-appimageTools">
<title>pkgs.appimageTools</title>
<para>
<varname>pkgs.appimageTools</varname> is a set of functions for extracting
and wrapping <link xlink:href="https://appimage.org/">AppImage</link> files.
They are meant to be used if traditional packaging from source is infeasible,
or would take too long. To quickly run an AppImage file,
<literal>pkgs.appimage-run</literal> can be used as well.
</para>
<warning>
<para>
The <varname>appimageTools</varname> API is unstable and may be subject to
backwards-incompatible changes in the future.
</para>
</warning>
<section xml:id="ssec-pkgs-appimageTools-formats">
<title>AppImage formats</title>
<para>
There are different formats for AppImages, see
<link xlink:href="https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format">the
specification</link> for details.
</para>
<itemizedlist>
<listitem>
<para>
Type 1 images are ISO 9660 files that are also ELF executables.
</para>
</listitem>
<listitem>
<para>
Type 2 images are ELF executables with an appended filesystem.
</para>
</listitem>
</itemizedlist>
<para>
They can be told apart with <command>file -k</command>:
</para>
<screen>
<prompt>$ </prompt>file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
<prompt>$ </prompt>file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
</screen>
<para>
Note how the type 1 AppImage is described as an <literal>ISO 9660 CD-ROM
filesystem</literal>, and the type 2 AppImage is not.
</para>
</section>
<section xml:id="ssec-pkgs-appimageTools-wrapping">
<title>Wrapping</title>
<para>
Depending on the type of AppImage you're wrapping, you'll have to use
<varname>wrapType1</varname> or <varname>wrapType2</varname>.
</para>
<programlisting>
appimageTools.wrapType2 { # or wrapType1
name = "patchwork"; <co xml:id='ex-appimageTools-wrapping-1' />
src = fetchurl { <co xml:id='ex-appimageTools-wrapping-2' />
url = https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage;
sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
};
extraPkgs = pkgs: with pkgs; [ ]; <co xml:id='ex-appimageTools-wrapping-3' />
}</programlisting>
<calloutlist>
<callout arearefs='ex-appimageTools-wrapping-1'>
<para>
<varname>name</varname> specifies the name of the resulting image.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-2'>
<para>
<varname>src</varname> specifies the AppImage file to extract.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-3'>
<para>
<varname>extraPkgs</varname> allows you to pass a function to include
additional packages inside the FHS environment your AppImage is going to
run in. There are a few ways to learn which dependencies an application
needs:
<itemizedlist>
<listitem>
<para>
Looking through the extracted AppImage files, reading its scripts and
running <command>patchelf</command> and <command>ldd</command> on its
executables. This can also be done in <command>appimage-run</command>,
by setting <command>APPIMAGE_DEBUG_EXEC=bash</command>.
</para>
</listitem>
<listitem>
<para>
Running <command>strace -vfefile</command> on the wrapped executable,
looking for libraries that can't be found.
</para>
</listitem>
</itemizedlist>
</para>
</callout>
</calloutlist>
</section>
</section>

View File

@ -5,17 +5,10 @@
<title>Debugging Nix Expressions</title>
<para>
Nix is a unityped, dynamic language, this means every value can potentially
appear anywhere. Since it is also non-strict, evaluation order and what
ultimately is evaluated might surprise you. Therefore it is important to be
able to debug nix expressions.
Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug Nix expressions.
</para>
<para>
In the <literal>lib/debug.nix</literal> file you will find a number of
functions that help (pretty-)printing values while evaluation is runnnig. You
can even specify how deep these values should be printed recursively, and
transform them on the fly. Please consult the docstrings in
<literal>lib/debug.nix</literal> for usage information.
In the <literal>lib/debug.nix</literal> file you will find a number of functions that help (pretty-)printing values while evaluation is running. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in <literal>lib/debug.nix</literal> for usage information.
</para>
</section>

View File

@ -1,606 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and
manipulating Docker images according to the
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120">
Docker Image Specification v1.2.0 </link>. Docker itself is not used to
perform any of the operations done by these functions.
</para>
<warning>
<para>
The <varname>dockerTools</varname> API is unstable and may be subject to
backwards-incompatible changes in the future.
</para>
</warning>
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<title>buildImage</title>
<para>
This function is analogous to the <command>docker build</command> command,
in that it can be used to build a Docker-compatible repository tarball
containing a single image with one or multiple layers. As such, the result
is suitable for being loaded in Docker with <command>docker load</command>.
</para>
<para>
The parameters of <varname>buildImage</varname>, with example values, are described below:
</para>
<example xml:id='ex-dockerTools-buildImage'>
<title>Docker build</title>
<programlisting>
buildImage {
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' />
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' />
fromImage = someBaseImage; <co xml:id='ex-dockerTools-buildImage-3' />
fromImageName = null; <co xml:id='ex-dockerTools-buildImage-4' />
fromImageTag = "latest"; <co xml:id='ex-dockerTools-buildImage-5' />
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' />
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' />
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = { <co xml:id='ex-dockerTools-buildImage-8' />
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = {
"/data" = {};
};
};
}
</programlisting>
</example>
<para>
The above example will build a Docker image <literal>redis/latest</literal>
from the given base image. Loading and running this image in Docker results
in <literal>redis-server</literal> being started automatically.
</para>
<calloutlist>
<callout arearefs='ex-dockerTools-buildImage-1'>
<para>
<varname>name</varname> specifies the name of the resulting image. This is
the only required argument for <varname>buildImage</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-2'>
<para>
<varname>tag</varname> specifies the tag of the resulting image. By
default it's <literal>null</literal>, which indicates that the nix output
hash will be used as the tag.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-3'>
<para>
<varname>fromImage</varname> is the repository tarball containing the base
image. It must be a valid Docker image, such as exported by
<command>docker save</command>. By default it's <literal>null</literal>,
which can be seen as equivalent to <literal>FROM scratch</literal> of a
<filename>Dockerfile</filename>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-4'>
<para>
<varname>fromImageName</varname> can be used to further specify the base
image within the repository, in case it contains multiple images. By
default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will use the first image available in the
repository.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-5'>
<para>
<varname>fromImageTag</varname> can be used to further specify the tag of
the base image within the repository, in case an image contains multiple
tags. By default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will use the first tag available for the
base image.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-6'>
<para>
<varname>contents</varname> is a derivation that will be copied in the new
layer of the resulting image. This can be similarly seen as <command>ADD
contents/ /</command> in a <filename>Dockerfile</filename>. By default
it's <literal>null</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'>
<para>
<varname>runAsRoot</varname> is a bash script that will run as root in an
environment that overlays the existing layers of the base image with the
new resulting layer, including the previously copied
<varname>contents</varname> derivation. This can be similarly seen as
<command>RUN ...</command> in a <filename>Dockerfile</filename>.
<note>
<para>
Using this parameter requires the <literal>kvm</literal> device to be
available.
</para>
</note>
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-8'>
<para>
<varname>config</varname> is used to specify the configuration of the
containers that will be started off the built image in Docker. The
available options are listed in the
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions">
Docker Image Specification v1.2.0 </link>.
</para>
</callout>
</calloutlist>
<para>
After the new layer has been created, its closure (to which
<varname>contents</varname>, <varname>config</varname> and
<varname>runAsRoot</varname> contribute) will be copied into the layer itself.
Only new dependencies that are not already in the existing layers will be
copied.
</para>
<para>
At the end of the process, only one new single layer will be produced and
added to the resulting image.
</para>
<para>
The resulting repository will only list the single image
<varname>image/tag</varname>. In the case of
<xref linkend='ex-dockerTools-buildImage'/> it would be
<varname>redis/latest</varname>.
</para>
<para>
It is possible to inspect the arguments with which an image was built using
its <varname>buildArgs</varname> attribute.
</para>
<note>
<para>
If you see errors similar to <literal>getProtocolByName: does not exist (no
such protocol name: tcp)</literal> you may need to add
<literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
</para>
</note>
<note>
<para>
If you see errors similar to <literal>Error_Protocol ("certificate has
unknown CA",True,UnknownCa)</literal> you may need to add
<literal>pkgs.cacert</literal> to <varname>contents</varname>.
</para>
</note>
<example xml:id="example-pkgs-dockerTools-buildImage-creation-date">
<title>Impurely Defining a Docker Layer's Creation Date</title>
<para>
By default <function>buildImage</function> will use a static date of one
second past the UNIX Epoch. This allows <function>buildImage</function> to
produce binary reproducible images. When listing images with
<command>docker images</command>, the newly created images will be listed
like this:
</para>
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
]]></screen>
<para>
You can break binary reproducibility but have a sorted, meaningful
<literal>CREATED</literal> column by setting <literal>created</literal> to
<literal>now</literal>.
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
]]></programlisting>
<para>
and now the Docker CLI will display a reasonable date and sort the images
as expected:
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
]]></screen>
however, the produced images will not be binary reproducible.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-dockerTools-buildLayeredImage">
<title>buildLayeredImage</title>
<para>
Create a Docker image with many of the store paths being on their own layer
to improve sharing between images.
</para>
<variablelist>
<varlistentry>
<term>
<varname>name</varname>
</term>
<listitem>
<para>
The name of the resulting image.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>tag</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Tag of the generated image.
</para>
<para>
<emphasis>Default:</emphasis> the output path's hash
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>contents</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Top level paths in the container. Either a single derivation, or a list
of derivations.
</para>
<para>
<emphasis>Default:</emphasis> <literal>[]</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>config</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Run-time configuration of the container. A full list of the options is
available in the
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions">
Docker Image Specification v1.2.0 </link>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>{}</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>created</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Date and time the layers were created. Follows the same
<literal>now</literal> exception supported by
<literal>buildImage</literal>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>1970-01-01T00:00:01Z</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>maxLayers</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Maximum number of layers to create.
</para>
<para>
<emphasis>Default:</emphasis> <literal>100</literal>
</para>
<para>
<emphasis>Maximum:</emphasis> <literal>125</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>extraCommands</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Shell commands to run while building the final layer, without access
to most of the layer contents. Changes to this layer are "on top"
of all the other layers, so can create additional directories
and files.
</para>
</listitem>
</varlistentry>
</variablelist>
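<para>
Putting several of these parameters together, an invocation might look like the following. This is a sketch with arbitrary name, tag and contents rather than an officially maintained example:
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  created = "now";            # readable creation date, at the cost of reproducibility
  contents = [ pkgs.hello ];  # top-level paths symlinked into the image root
  config.Cmd = [ "/bin/hello" ];
  maxLayers = 100;
}
]]></programlisting>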
<section xml:id="dockerTools-buildLayeredImage-arg-contents">
<title>Behavior of <varname>contents</varname> in the final image</title>
<para>
Each path directly listed in <varname>contents</varname> will have a
symlink in the root of the image.
</para>
<para>
For example:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
]]></programlisting>
will create symlinks for all the paths in the <literal>hello</literal>
package:
<screen><![CDATA[
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
]]></screen>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-config">
<title>Automatic inclusion of <varname>config</varname> references</title>
<para>
The closure of <varname>config</varname> is automatically included in the
closure of the final image.
</para>
<para>
This allows you to make very simple Docker images with very little code.
This container will start up and run <command>hello</command>:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
]]></programlisting>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-maxLayers">
<title>Adjusting <varname>maxLayers</varname></title>
<para>
Increasing the <varname>maxLayers</varname> increases the number of layers
which have a chance to be shared between different images.
</para>
<para>
Modern Docker installations support up to 128 layers; however, older
versions support as few as 42.
</para>
<para>
If the produced image will not be extended by other Docker builds, it is
safe to set <varname>maxLayers</varname> to <literal>128</literal>. However
it will be impossible to extend the image further.
</para>
<para>
The first (<literal>maxLayers-2</literal>) most "popular" paths will have
their own individual layers, then layer #<literal>maxLayers-1</literal>
will contain all the remaining "unpopular" paths, and finally layer
#<literal>maxLayers</literal> will contain the Image configuration.
</para>
<para>
Docker's layers are not inherently ordered; they are content-addressable
and are not explicitly layered until they are composed into an image.
</para>
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>
<para>
This function is analogous to the <command>docker pull</command> command, in
that it can be used to pull a Docker image from a Docker registry. By
default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used
to pull images.
</para>
<para>
Its parameters are described in the example below:
</para>
<example xml:id='ex-dockerTools-pullImage'>
<title>Docker pull</title>
<programlisting>
pullImage {
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageName = "nix"; <co xml:id='ex-dockerTools-pullImage-3' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-4' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-5' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-6' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-7' />
}
</programlisting>
</example>
<calloutlist>
<callout arearefs='ex-dockerTools-pullImage-1'>
<para>
<varname>imageName</varname> specifies the name of the image to be
downloaded, which can also include the registry namespace (e.g.
<literal>nixos</literal>). This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageDigest</varname> specifies the digest of the image to be
downloaded. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>finalImageName</varname>, if specified, is the name of the
image to be created. Note that it is never used to fetch the image, since we
prefer to rely on the immutable digest ID. By default it's equal to
<varname>imageName</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para>
<varname>finalImageTag</varname>, if specified, is the tag of the
image to be created. Note that it is never used to fetch the image, since we
prefer to rely on the immutable digest ID. By default it's
<literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-5'>
<para>
<varname>sha256</varname> is the checksum of the whole fetched image. This
argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-6'>
<para>
<varname>os</varname>, if specified, is the operating system of the
fetched image. By default it's <literal>linux</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-7'>
<para>
<varname>arch</varname>, if specified, is the CPU architecture of the
fetched image. By default it's <literal>x86_64</literal>.
</para>
</callout>
</calloutlist>
<para>
The <literal>nix-prefetch-docker</literal> command can be used to obtain the
required image parameters:
<screen>
<prompt>$ </prompt>nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
</screen>
Since a given <varname>imageName</varname> may transparently refer to a
manifest list of images which support multiple architectures and/or
operating systems, you can supply the <option>--os</option> and
<option>--arch</option> arguments to specify exactly which image you want.
By default it will match the OS and architecture of the host the command is
run on.
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
</screen>
The desired image name and tag can be set using the
<option>--final-image-name</option> and <option>--final-image-tag</option>
arguments:
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
</screen>
</para>
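<para>
  As a sketch (not part of the example above), the result of
  <varname>pullImage</varname> can be used as the base of
  <varname>buildImage</varname> via its <varname>fromImage</varname>
  argument:
<programlisting><![CDATA[
let
  nixImage = pkgs.dockerTools.pullImage {
    imageName = "nixos/nix";
    imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
    sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  };
in pkgs.dockerTools.buildImage {
  name = "nix-with-git";      # arbitrary example name
  fromImage = nixImage;       # the pulled tarball becomes the base layer
  contents = [ pkgs.git ];    # extra store paths copied on top
}
]]></programlisting>
</para>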
</section>
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<title>exportImage</title>
<para>
This function is analogous to the <command>docker export</command> command,
in that it can be used to flatten a Docker image that contains multiple
layers. The result is the merge of all the layers of the input image. As
such, it is suitable for being imported into Docker with
<command>docker import</command>.
</para>
<note>
<para>
Using this function requires the <literal>kvm</literal> device to be
available.
</para>
</note>
<para>
The parameters of <varname>exportImage</varname> are the following:
</para>
<example xml:id='ex-dockerTools-exportImage'>
<title>Docker export</title>
<programlisting>
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
</programlisting>
</example>
<para>
The parameters relative to the base image have the same synopsis as
described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except that
<varname>fromImage</varname> is the only required argument in this case.
</para>
<para>
The <varname>name</varname> argument is the name of the derivation output,
which defaults to <varname>fromImage.name</varname>.
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<title>shadowSetup</title>
<para>
This constant string is a helper for setting up the base files for managing
users and groups, only if such files don't exist already. It is suitable for
being used in a <varname>runAsRoot</varname>
<xref linkend='ex-dockerTools-buildImage-runAsRoot'/> script for cases like
in the example below:
</para>
<example xml:id='ex-dockerTools-shadowSetup'>
<title>Shadow base files</title>
<programlisting>
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
</programlisting>
</example>
<para>
Creating base files like <literal>/etc/passwd</literal> or
<literal>/etc/login.defs</literal> is necessary for shadow-utils to
manipulate users and groups.
</para>
</section>
</section>


@ -1,194 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-fetchers">
<title>Fetcher functions</title>
<para>
When using Nix, you will frequently need to download source code and other
files from the internet. Nixpkgs comes with a few helper functions that allow
you to fetch fixed-output derivations in a structured way.
</para>
<para>
The two fetcher primitives are <function>fetchurl</function> and
<function>fetchzip</function>. Both of these have two required arguments, a
URL and a hash. The hash is typically <literal>sha256</literal>, although
many more hash algorithms are supported. Nixpkgs contributors are currently
recommended to use <literal>sha256</literal>. This hash will be used by Nix
to identify your source. A typical usage of fetchurl is provided below.
</para>
<programlisting><![CDATA[
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "hello";
src = fetchurl {
url = "http://www.example.org/hello.tar.gz";
sha256 = "1111111111111111111111111111111111111111111111111111";
};
}
]]></programlisting>
<para>
The main difference between <function>fetchurl</function> and
<function>fetchzip</function> is in how they store the contents.
<function>fetchurl</function> will store the unaltered contents of the URL
within the Nix store. <function>fetchzip</function> on the other hand will
decompress the archive for you, making files and directories directly
accessible in the future. <function>fetchzip</function> can only be used with
archives. Despite the name, <function>fetchzip</function> is not limited to
.zip files and can also be used with any tarball.
</para>
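<para>
  As a sketch (the URL and hash below are placeholders, mirroring the
  <function>fetchurl</function> example above), a
  <function>fetchzip</function> call looks almost identical; note that its
  hash covers the unpacked contents rather than the archive itself:
</para>
<programlisting><![CDATA[
{ stdenv, fetchzip }:

stdenv.mkDerivation {
  name = "hello";
  src = fetchzip {
    url = "http://www.example.org/hello.tar.gz";
    sha256 = "1111111111111111111111111111111111111111111111111111";
  };
}
]]></programlisting>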
<para>
<function>fetchpatch</function> works very similarly to
<function>fetchurl</function> with the same arguments expected. It expects
patch files as a source and performs normalization on them before
computing the checksum. For example, it will remove comments or other unstable
parts that are sometimes added by version control systems and can change over
time.
</para>
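<para>
  A sketch of a <function>fetchpatch</function> call (URL and hash are
  placeholders) as it might appear in a package's
  <literal>patches</literal> list:
</para>
<programlisting><![CDATA[
patches = [
  (fetchpatch {
    url = "https://example.org/fix-build.patch";
    sha256 = "1111111111111111111111111111111111111111111111111111";
  })
];
]]></programlisting>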
<para>
Other fetcher functions allow you to add source code directly from a VCS such
as subversion or git. These are mostly straightforward names based on the
name of the command used with the VCS system. Because they give you a working
repository, they act most like <function>fetchzip</function>.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchsvn</literal>
</term>
<listitem>
<para>
Used with Subversion. Expects <literal>url</literal> pointing to a
Subversion directory, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchgit</literal>
</term>
<listitem>
<para>
Used with Git. Expects <literal>url</literal> pointing to a Git repo,
<literal>rev</literal>, and <literal>sha256</literal>.
<literal>rev</literal> in this case can be the full git commit id (SHA1
hash) or a tag name like <literal>refs/tags/v1.0</literal>. A short
sketch of a <literal>fetchgit</literal> call follows this list.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchfossil</literal>
</term>
<listitem>
<para>
Used with Fossil. Expects <literal>url</literal> pointing to a Fossil
archive, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchcvs</literal>
</term>
<listitem>
<para>
Used with CVS. Expects <literal>cvsRoot</literal>, <literal>tag</literal>,
and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchhg</literal>
</term>
<listitem>
<para>
Used with Mercurial. Expects <literal>url</literal>,
<literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
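<para>
  The sketch referenced above (URL and hash are placeholders): a
  <literal>fetchgit</literal> call inside a derivation might look like this.
</para>
<programlisting><![CDATA[
src = fetchgit {
  url = "https://example.org/some/repo.git";
  rev = "refs/tags/v1.0";
  sha256 = "1111111111111111111111111111111111111111111111111111";
};
]]></programlisting>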
<para>
A number of fetcher functions wrap part of <function>fetchurl</function> and
<function>fetchzip</function>. They are mainly convenience functions intended
for commonly used destinations of source code in Nixpkgs. These wrapper
fetchers are listed below.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchFromGitHub</literal>
</term>
<listitem>
<para>
<function>fetchFromGitHub</function> expects four arguments.
<literal>owner</literal> is a string corresponding to the GitHub user or
organization that controls this repository. <literal>repo</literal>
corresponds to the name of the software repository. These are located at
the top of every GitHub HTML page as
<literal>owner</literal>/<literal>repo</literal>. <literal>rev</literal>
corresponds to the Git commit hash or tag (e.g. <literal>v1.0</literal>)
that will be downloaded from Git. Finally, <literal>sha256</literal>
corresponds to the hash of the extracted directory. Again, other hash
algorithms are also available, but <literal>sha256</literal> is currently
preferred. A short sketch of a <function>fetchFromGitHub</function> call
follows this list.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitLab</literal>
</term>
<listitem>
<para>
This is used with GitLab repositories. The arguments expected are very
similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromBitbucket</literal>
</term>
<listitem>
<para>
This is used with BitBucket repositories. The arguments expected are very
similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromSavannah</literal>
</term>
<listitem>
<para>
This is used with Savannah repositories. The arguments expected are very
similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromRepoOrCz</literal>
</term>
<listitem>
<para>
This is used with repo.or.cz repositories. The arguments expected are very
similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
</variablelist>
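<para>
  The sketch referenced in the <literal>fetchFromGitHub</literal> entry
  above (owner, repo, rev and hash are placeholders):
</para>
<programlisting><![CDATA[
src = fetchFromGitHub {
  owner = "some-owner";
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "1111111111111111111111111111111111111111111111111111";
};
]]></programlisting>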
</section>


@ -1,142 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-fhs-environments">
<title>buildFHSUserEnv</title>
<para>
<function>buildFHSUserEnv</function> provides a way to build and run
FHS-compatible lightweight sandboxes. It creates an isolated root with bound
<filename>/nix/store</filename>, so its footprint in terms of disk space
needed is quite small. This allows one to run software which is hard or
infeasible to patch for NixOS: third-party source trees with FHS assumptions,
games distributed as tarballs, software with integrity checking and/or
external self-updating binaries. It uses the Linux namespaces feature to
create temporary lightweight environments which are destroyed after all child
processes exit, without requiring root privileges. Accepted arguments are:
</para>
<variablelist>
<varlistentry>
<term>
<literal>name</literal>
</term>
<listitem>
<para>
Environment name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>targetPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for the main host's architecture (i.e. x86_64 on
x86_64 installations). Along with libraries, binaries are also installed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>multiPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for all architectures supported by a host (i.e.
i686 and x86_64 on x86_64 installations). Only libraries are installed by
default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the directory structure.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommandsMulti</literal>
</term>
<listitem>
<para>
Like <literal>extraBuildCommands</literal>, but executed only on multilib
architectures.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraOutputsToInstall</literal>
</term>
<listitem>
<para>
Additional derivation outputs to be linked for both target and
multi-architecture packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraInstallCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the derivation with
runner script.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>runScript</literal>
</term>
<listitem>
<para>
A command that would be executed inside the sandbox and passed all the
command line arguments. It defaults to <literal>bash</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
One can create a simple environment using a <literal>shell.nix</literal> like
the following:
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]);
runScript = "bash";
}).env
]]></programlisting>
<para>
Running <literal>nix-shell</literal> would then drop you into a shell with
these libraries and binaries available. You can use this to run closed-source
applications which expect an FHS structure without hassle: simply change
<literal>runScript</literal> to the application path, e.g.
<filename>./bin/start.sh</filename> (relative paths are supported), as in the
sketch below.
</para>
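<para>
  A sketch for a hypothetical third-party application (the package list and
  the start script path are assumptions, not taken from a real program):
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
  name = "some-app-env";
  targetPkgs = pkgs: with pkgs; [ zlib ];
  # Run the bundled launcher instead of a plain shell; the relative path
  # is resolved inside the sandbox.
  runScript = "./bin/start.sh";
}).env
]]></programlisting>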
</section>


@ -5,28 +5,15 @@
<title>Generators</title>
<para>
Generators are functions that create file formats from nix data structures,
e.g. for configuration files. There are generators available for:
<literal>INI</literal>, <literal>JSON</literal> and <literal>YAML</literal>
Generators are functions that create file formats from nix data structures, e.g. for configuration files. There are generators available for: <literal>INI</literal>, <literal>JSON</literal> and <literal>YAML</literal>
</para>
<para>
All generators follow a similar call interface: <code>generatorName
configFunctions data</code>, where <literal>configFunctions</literal> is an
attrset of user-defined functions that format nested parts of the content.
They each have common defaults, so often they do not need to be set manually.
An example is <code>mkSectionName ? (name: libStr.escape [ "[" "]" ]
name)</code> from the <literal>INI</literal> generator. It receives the name
of a section and sanitizes it. The default <literal>mkSectionName</literal>
escapes <literal>[</literal> and <literal>]</literal> with a backslash.
All generators follow a similar call interface: <code>generatorName configFunctions data</code>, where <literal>configFunctions</literal> is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is <code>mkSectionName ? (name: libStr.escape [ "[" "]" ] name)</code> from the <literal>INI</literal> generator. It receives the name of a section and sanitizes it. The default <literal>mkSectionName</literal> escapes <literal>[</literal> and <literal>]</literal> with a backslash.
</para>
<para>
Generators can be fine-tuned to produce exactly the file format required by
your application/service. One example is an INI-file format which uses
<literal>: </literal> as separator, the strings
<literal>"yes"</literal>/<literal>"no"</literal> as boolean values and
requires all string values to be quoted:
Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses <literal>: </literal> as separator, the strings <literal>"yes"</literal>/<literal>"no"</literal> as boolean values and requires all string values to be quoted:
</para>
<programlisting>
@ -77,13 +64,11 @@ merge:"diff3"
<note>
<para>
Nix store paths can be converted to strings by enclosing a derivation
attribute like so: <code>"${drv}"</code>.
Nix store paths can be converted to strings by enclosing a derivation attribute like so: <code>"${drv}"</code>.
</para>
</note>
<para>
Detailed documentation for each generator can be found in
<literal>lib/generators.nix</literal>.
Detailed documentation for each generator can be found in <literal>lib/generators.nix</literal>.
</para>
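<para>
  For instance (a small sketch using the default settings of the
  <literal>INI</literal> generator):
</para>
<programlisting>
lib.generators.toINI {} {
  foo = {
    enable = true;
    timeout = 10;
  };
}
</programlisting>
<para>
  This evaluates to a string containing a <literal>[foo]</literal> section
  with <literal>enable</literal> and <literal>timeout</literal> keys.
</para>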
</section>


@ -5,8 +5,7 @@
<title>Nixpkgs Library Functions</title>
<para>
Nixpkgs provides a standard library at <varname>pkgs.lib</varname>, or
through <code>import &lt;nixpkgs/lib&gt;</code>.
Nixpkgs provides a standard library at <varname>pkgs.lib</varname>, or through <code>import &lt;nixpkgs/lib&gt;</code>.
</para>
<xi:include href="./library/asserts.xml" />


@ -27,8 +27,7 @@
</term>
<listitem>
<para>
Condition under which the <varname>msg</varname> should
<emphasis>not</emphasis> be printed.
Condition under which the <varname>msg</varname> should <emphasis>not</emphasis> be printed.
</para>
</listitem>
</varlistentry>
@ -64,9 +63,7 @@ stderr> assert failed
<xi:include href="./locations.xml" xpointer="lib.asserts.assertOneOf" />
<para>
Specialized <function>asserts.assertMsg</function> for checking if
<varname>val</varname> is one of the elements of <varname>xs</varname>.
Useful for checking enums.
Specialized <function>asserts.assertMsg</function> for checking if <varname>val</varname> is one of the elements of <varname>xs</varname>. Useful for checking enums.
</para>
<variablelist>
@ -76,8 +73,7 @@ stderr> assert failed
</term>
<listitem>
<para>
The name of the variable the user entered <varname>val</varname> into,
for inclusion in the error message.
The name of the variable the user entered <varname>val</varname> into, for inclusion in the error message.
</para>
</listitem>
</varlistentry>
@ -87,8 +83,7 @@ stderr> assert failed
</term>
<listitem>
<para>
The value of what the user provided, to be compared against the values in
<varname>xs</varname>.
The value of what the user provided, to be compared against the values in <varname>xs</varname>.
</para>
</listitem>
</varlistentry>


@ -23,8 +23,7 @@
</term>
<listitem>
<para>
A list of strings representing the path through the nested attribute set
<varname>set</varname>.
A list of strings representing the path through the nested attribute set <varname>set</varname>.
</para>
</listitem>
</varlistentry>
@ -34,8 +33,7 @@
</term>
<listitem>
<para>
Default value if <varname>attrPath</varname> does not resolve to an
existing value.
Default value if <varname>attrPath</varname> does not resolve to an existing value.
</para>
</listitem>
</varlistentry>
@ -88,8 +86,7 @@ lib.attrsets.attrByPath [ "a" "b" ] 0 {}
</term>
<listitem>
<para>
A list of strings representing the path through the nested attribute set
<varname>set</varname>.
A list of strings representing the path through the nested attribute set <varname>set</varname>.
</para>
</listitem>
</varlistentry>
@ -125,8 +122,7 @@ lib.attrsets.hasAttrByPath
<xi:include href="./locations.xml" xpointer="lib.attrsets.setAttrByPath" />
<para>
Create a new attribute set with <varname>value</varname> set at the nested
attribute location specified in <varname>attrPath</varname>.
Create a new attribute set with <varname>value</varname> set at the nested attribute location specified in <varname>attrPath</varname>.
</para>
<variablelist>
@ -146,8 +142,7 @@ lib.attrsets.hasAttrByPath
</term>
<listitem>
<para>
The value to set at the location described by
<varname>attrPath</varname>.
The value to set at the location described by <varname>attrPath</varname>.
</para>
</listitem>
</varlistentry>
@ -171,8 +166,7 @@ lib.attrsets.setAttrByPath [ "a" "b" ] 3
<xi:include href="./locations.xml" xpointer="lib.attrsets.getAttrFromPath" />
<para>
Like <xref linkend="function-library-lib.attrsets.attrByPath" /> except
without a default, and it will throw if the value doesn't exist.
Like <xref linkend="function-library-lib.attrsets.attrByPath" /> except without a default, and it will throw if the value doesn't exist.
</para>
<variablelist>
@ -182,8 +176,7 @@ lib.attrsets.setAttrByPath [ "a" "b" ] 3
</term>
<listitem>
<para>
A list of strings representing the path through the nested attribute set
<varname>set</varname>.
A list of strings representing the path through the nested attribute set <varname>set</varname>.
</para>
</listitem>
</varlistentry>
@ -235,8 +228,7 @@ lib.attrsets.getAttrFromPath [ "x" "y" ] { }
</term>
<listitem>
<para>
The list of attributes to fetch from <varname>set</varname>. Each
attribute name must exist on the attrbitue set.
The list of attributes to fetch from <varname>set</varname>. Each attribute name must exist in the attribute set.
</para>
</listitem>
</varlistentry>
@ -282,8 +274,7 @@ error: attribute 'd' missing
</para>
<para>
Provides a backwards-compatible interface of
<function>builtins.attrValues</function> for Nix version older than 1.8.
Provides a backwards-compatible interface of <function>builtins.attrValues</function> for Nix version older than 1.8.
</para>
<variablelist>
@ -311,20 +302,17 @@ lib.attrsets.attrValues { a = 1; b = 2; c = 3; }
<section xml:id="function-library-lib.attrsets.catAttrs">
<title><function>lib.attrsets.catAttrs</function></title>
<subtitle><literal>catAttrs :: String -> AttrSet -> [Any]</literal>
<subtitle><literal>catAttrs :: String -> [AttrSet] -> [Any]</literal>
</subtitle>
<xi:include href="./locations.xml" xpointer="lib.attrsets.catAttrs" />
<para>
Collect each attribute named `attr' from the list of attribute sets,
<varname>sets</varname>. Sets that don't contain the named attribute are
ignored.
Collect each attribute named `attr' from the list of attribute sets, <varname>sets</varname>. Sets that don't contain the named attribute are ignored.
</para>
<para>
Provides a backwards-compatible interface of
<function>builtins.catAttrs</function> for Nix version older than 1.9.
Provides a backwards-compatible interface of <function>builtins.catAttrs</function> for Nix version older than 1.9.
</para>
<variablelist>
@ -334,8 +322,7 @@ lib.attrsets.attrValues { a = 1; b = 2; c = 3; }
</term>
<listitem>
<para>
Attribute name to select from each attribute set in
<varname>sets</varname>.
Attribute name to select from each attribute set in <varname>sets</varname>.
</para>
</listitem>
</varlistentry>
@ -372,8 +359,7 @@ catAttrs "a" [{a = 1;} {b = 0;} {a = 2;}]
<xi:include href="./locations.xml" xpointer="lib.attrsets.filterAttrs" />
<para>
Filter an attribute set by removing all attributes for which the given
predicate return false.
Filter an attribute set by removing all attributes for which the given predicate returns false.
</para>
<variablelist>
@ -386,8 +372,7 @@ catAttrs "a" [{a = 1;} {b = 0;} {a = 2;}]
<literal>String -> Any -> Bool</literal>
</para>
<para>
Predicate which returns true to include an attribute, or returns false to
exclude it.
Predicate which returns true to include an attribute, or returns false to exclude it.
</para>
<variablelist>
<varlistentry>
@ -412,8 +397,7 @@ catAttrs "a" [{a = 1;} {b = 0;} {a = 2;}]
</varlistentry>
</variablelist>
<para>
Returns <literal>true</literal> to include the attribute,
<literal>false</literal> to exclude the attribute.
Returns <literal>true</literal> to include the attribute, <literal>false</literal> to exclude the attribute.
</para>
</listitem>
</varlistentry>
@ -447,8 +431,7 @@ filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
<xi:include href="./locations.xml" xpointer="lib.attrsets.filterAttrsRecursive" />
<para>
Filter an attribute set recursively by removing all attributes for which the
given predicate return false.
Filter an attribute set recursively by removing all attributes for which the given predicate returns false.
</para>
<variablelist>
@ -461,8 +444,7 @@ filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
<literal>String -> Any -> Bool</literal>
</para>
<para>
Predicate which returns true to include an attribute, or returns false to
exclude it.
Predicate which returns true to include an attribute, or returns false to exclude it.
</para>
<variablelist>
<varlistentry>
@ -487,8 +469,7 @@ filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
</varlistentry>
</variablelist>
<para>
Returns <literal>true</literal> to include the attribute,
<literal>false</literal> to exclude the attribute.
Returns <literal>true</literal> to include the attribute, <literal>false</literal> to exclude the attribute.
</para>
</listitem>
</varlistentry>
@ -557,8 +538,7 @@ lib.attrsets.filterAttrsRecursive
<literal>Any -> Any -> Any</literal>
</para>
<para>
Given a value <varname>val</varname> and a collector
<varname>col</varname>, combine the two.
Given a value <varname>val</varname> and a collector <varname>col</varname>, combine the two.
</para>
<variablelist>
<varlistentry>
@ -578,8 +558,7 @@ lib.attrsets.filterAttrsRecursive
<listitem>
<!-- TODO: make this not bad, use more fold-ey terms -->
<para>
The result of previous <function>op</function> calls with other values
and <function>nul</function>.
The result of previous <function>op</function> calls with other values and <function>nul</function>.
</para>
</listitem>
</varlistentry>
@ -632,9 +611,7 @@ lib.attrsets.foldAttrs
<xi:include href="./locations.xml" xpointer="lib.attrsets.collect" />
<para>
Recursively collect sets that verify a given predicate named
<varname>pred</varname> from the set <varname>attrs</varname>. The recursion
stops when <varname>pred</varname> returns <literal>true</literal>.
Recursively collect sets that verify a given predicate named <varname>pred</varname> from the set <varname>attrs</varname>. The recursion stops when <varname>pred</varname> returns <literal>true</literal>.
</para>
<variablelist>
@ -702,8 +679,7 @@ collect (x: x ? outPath)
<xi:include href="./locations.xml" xpointer="lib.attrsets.nameValuePair" />
<para>
Utility function that creates a <literal>{name, value}</literal> pair as
expected by <function>builtins.listToAttrs</function>.
Utility function that creates a <literal>{name, value}</literal> pair as expected by <function>builtins.listToAttrs</function>.
</para>
<variablelist>
@ -747,13 +723,11 @@ nameValuePair "some" 6
<xi:include href="./locations.xml" xpointer="lib.attrsets.mapAttrs" />
<para>
Apply a function to each element in an attribute set, creating a new
attribute set.
Apply a function to each element in an attribute set, creating a new attribute set.
</para>
<para>
Provides a backwards-compatible interface of
<function>builtins.mapAttrs</function> for Nix version older than 2.1.
Provides a backwards-compatible interface of <function>builtins.mapAttrs</function> for Nix version older than 2.1.
</para>
<variablelist>
@ -814,9 +788,7 @@ lib.attrsets.mapAttrs
<xi:include href="./locations.xml" xpointer="lib.attrsets.mapAttrs-prime" />
<para>
Like <function>mapAttrs</function>, but allows the name of each attribute to
be changed in addition to the value. The applied function should return both
the new name and value as a <function>nameValuePair</function>.
Like <function>mapAttrs</function>, but allows the name of each attribute to be changed in addition to the value. The applied function should return both the new name and value as a <function>nameValuePair</function>.
</para>
<variablelist>
@ -829,10 +801,8 @@ lib.attrsets.mapAttrs
<literal>String -> Any -> { name = String; value = Any }</literal>
</para>
<para>
Given an attribute's name and value, return a new
<link
linkend="function-library-lib.attrsets.nameValuePair">name
value pair</link>.
Given an attribute's name and value, return a new <link
linkend="function-library-lib.attrsets.nameValuePair">name value pair</link>.
</para>
<variablelist>
<varlistentry>
@ -891,8 +861,7 @@ lib.attrsets.mapAttrs' (name: value: lib.attrsets.nameValuePair ("foo_" + name)
<xi:include href="./locations.xml" xpointer="lib.attrsets.mapAttrsToList" />
<para>
Call <varname>fn</varname> for each attribute in the given
<varname>set</varname> and return the result in a list.
Call <varname>fn</varname> for each attribute in the given <varname>set</varname> and return the result in a list.
</para>
<variablelist>
@ -962,9 +931,7 @@ lib.attrsets.mapAttrsToList (name: value: "${name}=${value}")
<xi:include href="./locations.xml" xpointer="lib.attrsets.mapAttrsRecursive" />
<para>
Like <function>mapAttrs</function>, except that it recursively applies
itself to attribute sets. Also, the first argument of the argument function
is a <emphasis>list</emphasis> of the names of the containing attributes.
Like <function>mapAttrs</function>, except that it recursively applies itself to attribute sets. Also, the first argument of the argument function is a <emphasis>list</emphasis> of the names of the containing attributes.
</para>
<variablelist>
@ -989,10 +956,7 @@ lib.attrsets.mapAttrsToList (name: value: "${name}=${value}")
The list of attribute names to this value.
</para>
<para>
For example, the <varname>name_path</varname> for the
<literal>example</literal> string in the attribute set <literal>{ foo
= { bar = "example"; }; }</literal> is <literal>[ "foo" "bar"
]</literal>.
For example, the <varname>name_path</varname> for the <literal>example</literal> string in the attribute set <literal>{ foo = { bar = "example"; }; }</literal> is <literal>[ "foo" "bar" ]</literal>.
</para>
</listitem>
</varlistentry>
@ -1059,11 +1023,7 @@ mapAttrsRecursive
<xi:include href="./locations.xml" xpointer="lib.attrsets.mapAttrsRecursiveCond" />
<para>
Like <function>mapAttrsRecursive</function>, but it takes an additional
predicate function that tells it whether to recursive into an attribute set.
If it returns false, <function>mapAttrsRecursiveCond</function> does not
recurse, but does apply the map function. It is returns true, it does
recurse, and does not apply the map function.
Like <function>mapAttrsRecursive</function>, but it takes an additional predicate function that tells it whether to recurse into an attribute set. If it returns false, <function>mapAttrsRecursiveCond</function> does not recurse, but does apply the map function. If it returns true, it does recurse, and does not apply the map function.
</para>
<variablelist>
@ -1076,8 +1036,7 @@ mapAttrsRecursive
<literal>(AttrSet -> Bool)</literal>
</para>
<para>
Determine if <function>mapAttrsRecursive</function> should recurse deeper
in to the attribute set.
Determine if <function>mapAttrsRecursive</function> should recurse deeper in to the attribute set.
</para>
<variablelist>
<varlistentry>
@ -1114,10 +1073,7 @@ mapAttrsRecursive
The list of attribute names to this value.
</para>
<para>
For example, the <varname>name_path</varname> for the
<literal>example</literal> string in the attribute set <literal>{ foo
= { bar = "example"; }; }</literal> is <literal>[ "foo" "bar"
]</literal>.
For example, the <varname>name_path</varname> for the <literal>example</literal> string in the attribute set <literal>{ foo = { bar = "example"; }; }</literal> is <literal>[ "foo" "bar" ]</literal>.
</para>
</listitem>
</varlistentry>
@ -1181,8 +1137,7 @@ lib.attrsets.mapAttrsRecursiveCond
<xi:include href="./locations.xml" xpointer="lib.attrsets.genAttrs" />
<para>
Generate an attribute set by mapping a function over a list of attribute
names.
Generate an attribute set by mapping a function over a list of attribute names.
</para>
<variablelist>
@ -1241,8 +1196,7 @@ lib.attrsets.genAttrs [ "foo" "bar" ] (name: "x_${name}")
<xi:include href="./locations.xml" xpointer="lib.attrsets.isDerivation" />
<para>
Check whether the argument is a derivation. Any set with <code>{ type =
"derivation"; }</code> counts as a derivation.
Check whether the argument is a derivation. Any set with <code>{ type = "derivation"; }</code> counts as a derivation.
</para>
<variablelist>
@ -1320,8 +1274,7 @@ lib.attrsets.isDerivation "foobar"
</term>
<listitem>
<para>
Condition under which the <varname>as</varname> attribute set is
returned.
Condition under which the <varname>as</varname> attribute set is returned.
</para>
</listitem>
</varlistentry>
@ -1363,8 +1316,7 @@ lib.attrsets.optionalAttrs false { my = "set"; }
<xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrsWithNames" />
<para>
Merge sets of attributes and use the function <varname>f</varname> to merge
attribute values where the attribute name is in <varname>names</varname>.
Merge sets of attributes and use the function <varname>f</varname> to merge attribute values where the attribute name is in <varname>names</varname>.
</para>
<variablelist>
@ -1451,11 +1403,8 @@ lib.attrsets.zipAttrsWithNames
<xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrsWith" />
<para>
Merge sets of attributes and use the function <varname>f</varname> to merge
attribute values. Similar to
<xref
linkend="function-library-lib.attrsets.zipAttrsWithNames" /> where
all key names are passed for <varname>names</varname>.
Merge sets of attributes and use the function <varname>f</varname> to merge attribute values. Similar to <xref
linkend="function-library-lib.attrsets.zipAttrsWithNames" /> where all key names are passed for <varname>names</varname>.
</para>
<variablelist>
@ -1531,9 +1480,7 @@ lib.attrsets.zipAttrsWith
<xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrs" />
<para>
Merge sets of attributes and combine each attribute value in to a list.
Similar to <xref linkend="function-library-lib.attrsets.zipAttrsWith" />
where the merge function returns a list of all values.
Merge sets of attributes and combine each attribute value in to a list. Similar to <xref linkend="function-library-lib.attrsets.zipAttrsWith" /> where the merge function returns a list of all values.
</para>
<variablelist>
@ -1573,12 +1520,7 @@ lib.attrsets.zipAttrs
<xi:include href="./locations.xml" xpointer="lib.attrsets.recursiveUpdateUntil" />
<para>
Does the same as the update operator <literal>//</literal> except that
attributes are merged until the given predicate is verified. The predicate
should accept 3 arguments which are the path to reach the attribute, a part
of the first attribute set and a part of the second attribute set. When the
predicate is verified, the value of the first attribute set is replaced by
the value of the second attribute set.
Does the same as the update operator <literal>//</literal> except that attributes are merged until the given predicate is verified. The predicate should accept 3 arguments which are the path to reach the attribute, a part of the first attribute set and a part of the second attribute set. When the predicate is verified, the value of the first attribute set is replaced by the value of the second attribute set.
</para>
<variablelist>
@ -1681,10 +1623,7 @@ lib.attrsets.recursiveUpdateUntil (path: l: r: path == ["foo"])
<xi:include href="./locations.xml" xpointer="lib.attrsets.recursiveUpdate" />
<para>
A recursive variant of the update operator <literal>//</literal>. The
recursion stops when one of the attribute values is not an attribute set, in
which case the right hand side value takes precedence over the left hand
side value.
A recursive variant of the update operator <literal>//</literal>. The recursion stops when one of the attribute values is not an attribute set, in which case the right hand side value takes precedence over the left hand side value.
</para>
<variablelist>


@ -5,21 +5,14 @@
<title>pkgs.nix-gitignore</title>
<para>
<function>pkgs.nix-gitignore</function> is a function that acts similarly to
<literal>builtins.filterSource</literal> but also allows filtering with the
help of the gitignore format.
<function>pkgs.nix-gitignore</function> is a function that acts similarly to <literal>builtins.filterSource</literal> but also allows filtering with the help of the gitignore format.
</para>
<section xml:id="sec-pkgs-nix-gitignore-usage">
<title>Usage</title>
<para>
<literal>pkgs.nix-gitignore</literal> exports a number of functions, but
you'll most likely need either <literal>gitignoreSource</literal> or
<literal>gitignoreSourcePure</literal>. As their first argument, they both
accept either 1. a file with gitignore lines or 2. a string with gitignore
lines, or 3. a list of either of the two. They will be concatenated into a
single big string.
<literal>pkgs.nix-gitignore</literal> exports a number of functions, but you'll most likely need either <literal>gitignoreSource</literal> or <literal>gitignoreSourcePure</literal>. As their first argument, they both accept either 1. a file with gitignore lines or 2. a string with gitignore lines, or 3. a list of either of the two. They will be concatenated into a single big string.
</para>
<programlisting><![CDATA[
@ -40,8 +33,7 @@
]]></programlisting>
<para>
These functions are derived from the <literal>Filter</literal> functions by
setting the first filter argument to <literal>(_: _: true)</literal>:
These functions are derived from the <literal>Filter</literal> functions by setting the first filter argument to <literal>(_: _: true)</literal>:
</para>
<programlisting><![CDATA[
@ -50,12 +42,7 @@ gitignoreSource = gitignoreFilterSource (_: _: true);
]]></programlisting>
<para>
Those filter functions accept the same arguments the
<literal>builtins.filterSource</literal> function would pass to its filters,
thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be
extensionally equivalent to <literal>filterSource</literal>. The file is
blacklisted iff it's blacklisted by either your filter or the
gitignoreFilter.
Those filter functions accept the same arguments the <literal>builtins.filterSource</literal> function would pass to its filters, thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be extensionally equivalent to <literal>filterSource</literal>. The file is blacklisted iff it's blacklisted by either your filter or the gitignoreFilter.
</para>
<para>
@ -71,8 +58,7 @@ gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
<title>gitignore files in subdirectories</title>
<para>
If you wish to use a filter that would search for .gitignore files in
subdirectories, just like git does by default, use this function:
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:
</para>
<programlisting><![CDATA[


@ -1,76 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-ociTools">
<title>pkgs.ociTools</title>
<para>
<varname>pkgs.ociTools</varname> is a set of functions for creating
containers according to the
<link xlink:href="https://github.com/opencontainers/runtime-spec">OCI
container specification v1.0.0</link>. Beyond that it makes no assumptions
about the container runner you choose to use to run the created container.
</para>
<section xml:id="ssec-pkgs-ociTools-buildContainer">
<title>buildContainer</title>
<para>
This function creates a simple OCI container that runs a single command
inside of it. An OCI container consists of a <varname>config.json</varname>
and a rootfs directory. The Nix store of the container will contain all
referenced dependencies of the given command.
</para>
<para>
The parameters of <varname>buildContainer</varname> with an example value
are described below:
</para>
<example xml:id='ex-ociTools-buildContainer'>
<title>Build Container</title>
<programlisting>
buildContainer {
args = [ (with pkgs; writeScript "run.sh" ''
#!${bash}/bin/bash
${coreutils}/bin/exec ${bash}/bin/bash
'').outPath ]; <co xml:id='ex-ociTools-buildContainer-1' />
mounts = {
"/data" = {
type = "none";
source = "/var/lib/mydata";
options = [ "bind" ];
};
};<co xml:id='ex-ociTools-buildContainer-2' />
readonly = false; <co xml:id='ex-ociTools-buildContainer-3' />
}
</programlisting>
<calloutlist>
<callout arearefs='ex-ociTools-buildContainer-1'>
<para>
<varname>args</varname> specifies a set of arguments to run inside the container.
This is the only required argument for <varname>buildContainer</varname>.
All referenced packages inside the derivation will be made available
inside the container.
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-2'>
<para>
<varname>mounts</varname> specifies additional mount points chosen by the
user. By default, only a minimal set of necessary filesystems are mounted
into the container (e.g. procfs, cgroupfs).
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-3'>
<para>
<varname>readonly</varname> makes the container's rootfs read-only if it is set to true.
The default value is <literal>false</literal>.
</para>
</callout>
</calloutlist>
</example>
</section>
</section>


@ -1,212 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-overrides">
<title>Overriding</title>
<para>
Sometimes one wants to override parts of <literal>nixpkgs</literal>, e.g.
derivation attributes or the results of derivations.
</para>
<para>
These functions are used to make changes to packages, returning only single
packages. <link xlink:href="#chap-overlays">Overlays</link>, on the other
hand, can be used to combine the overridden packages across the entire
package set of Nixpkgs.
</para>
<section xml:id="sec-pkg-override">
<title>&lt;pkg&gt;.override</title>
<para>
The function <varname>override</varname> is usually available for all the
derivations in the nixpkgs expression (<varname>pkgs</varname>).
</para>
<para>
It is used to override the arguments passed to a function.
</para>
<para>
Example usages:
<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<!-- TODO: move below programlisting to a new section about extending and overlays
and reference it
-->
<programlisting>
import pkgs.path { overlays = [ (self: super: {
foo = super.foo.override { barSupport = true ; };
})]};
</programlisting>
<programlisting>
mypkg = pkgs.callPackage ./mypkg.nix {
mydep = pkgs.mydep.override { ... };
}
</programlisting>
</para>
<para>
In the first example, <varname>pkgs.foo</varname> is the result of a
function call with some default arguments, usually a derivation. Using
<varname>pkgs.foo.override</varname> will call the same function with the
given new arguments.
</para>
</section>
<section xml:id="sec-pkg-overrideAttrs">
<title>&lt;pkg&gt;.overrideAttrs</title>
<para>
The function <varname>overrideAttrs</varname> allows overriding the
attribute set passed to a <varname>stdenv.mkDerivation</varname> call,
producing a new derivation based on the original one. This function is
available on all derivations produced by the
<varname>stdenv.mkDerivation</varname> function, which is most packages in
the nixpkgs expression <varname>pkgs</varname>.
</para>
<para>
Example usage:
<programlisting>
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
separateDebugInfo = true;
});
</programlisting>
</para>
<para>
In the above example, the <varname>separateDebugInfo</varname> attribute is
overridden to be true, thus building debug info for
<varname>helloWithDebug</varname>, while all other attributes will be
retained from the original <varname>hello</varname> package.
</para>
<para>
The argument <varname>oldAttrs</varname> is conventionally used to refer to
the attr set originally passed to <varname>stdenv.mkDerivation</varname>.
</para>
<note>
<para>
Note that <varname>separateDebugInfo</varname> is processed only by the
<varname>stdenv.mkDerivation</varname> function, not by the generated, raw
Nix derivation. Thus, using <varname>overrideDerivation</varname> will not
work in this case, as it overrides only the attributes of the final
derivation. This is one reason why <varname>overrideAttrs</varname> should
be preferred to <varname>overrideDerivation</varname> in (almost) all
cases: it allows <varname>stdenv.mkDerivation</varname> to process the
input arguments. It is also easier to use, since you can use the same
attribute names you see in your Nix code instead of the generated ones
(e.g. <varname>buildInputs</varname> vs
<varname>nativeBuildInputs</varname>), and it involves less typing.
</para>
</note>
</section>
<section xml:id="sec-pkg-overrideDerivation">
<title>&lt;pkg&gt;.overrideDerivation</title>
<warning>
<para>
You should prefer <varname>overrideAttrs</varname> in almost all cases, see
its documentation for the reasons why.
<varname>overrideDerivation</varname> is not deprecated and will continue
to work, but is less nice to use and does not have as many abilities as
<varname>overrideAttrs</varname>.
</para>
</warning>
<warning>
<para>
Do not use this function in Nixpkgs as it evaluates a Derivation before
modifying it, which breaks package abstraction and removes error-checking
of function arguments. In addition, this evaluation-per-function
application incurs a performance penalty, which can become a problem if
many overrides are used. It is only intended for ad-hoc customisation, such
as in <filename>~/.config/nixpkgs/config.nix</filename>.
</para>
</warning>
<para>
The function <varname>overrideDerivation</varname> creates a new derivation
based on an existing one by overriding the original's attributes with the
attribute set produced by the specified function. This function is available
on all derivations defined using the <varname>makeOverridable</varname>
function. Most standard derivation-producing functions, such as
<varname>stdenv.mkDerivation</varname>, are defined using this function,
which means most packages in the nixpkgs expression,
<varname>pkgs</varname>, have this function.
</para>
<para>
Example usage:
<programlisting>
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
name = "sed-4.2.2-pre";
src = fetchurl {
url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2;
sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
};
patches = [];
});
</programlisting>
</para>
<para>
In the above example, the <varname>name</varname>, <varname>src</varname>,
and <varname>patches</varname> of the derivation will be overridden, while
all other attributes will be retained from the original derivation.
</para>
<para>
The argument <varname>oldAttrs</varname> is used to refer to the attribute
set of the original derivation.
</para>
<note>
<para>
A package's attributes are evaluated <emphasis>before</emphasis> being
modified by the <varname>overrideDerivation</varname> function. For
example, the <varname>name</varname> attribute reference in <varname>url =
"mirror://gnu/hello/${name}.tar.gz";</varname> is filled in
<emphasis>before</emphasis> the <varname>overrideDerivation</varname>
function modifies the attribute set. This means that overriding the
<varname>name</varname> attribute, in this example, <emphasis>will
not</emphasis> change the value of the <varname>url</varname> attribute.
Instead, we need to override both the <varname>name</varname>
<emphasis>and</emphasis> <varname>url</varname> attributes.
</para>
</note>
</section>
<section xml:id="sec-lib-makeOverridable">
<title>lib.makeOverridable</title>
<para>
The function <varname>lib.makeOverridable</varname> is used to make the
result of a function easily customizable. This utility only makes sense for
functions that accept an argument set and return an attribute set.
</para>
<para>
Example usage:
<programlisting>
f = { a, b }: { result = a+b; };
c = lib.makeOverridable f { a = 1; b = 2; };
</programlisting>
</para>
<para>
The variable <varname>c</varname> is the value of the <varname>f</varname>
function applied with some default arguments. Hence the value of
<varname>c.result</varname> is <literal>3</literal>, in this example.
</para>
<para>
The variable <varname>c</varname>, however, also has some additional
functions, like <link linkend="sec-pkg-override">c.override</link>, which can
be used to override the default arguments. In this example, the value of
<varname>(c.override { a = 4; }).result</varname> is <literal>6</literal>.
</para>
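<para>
  For instance (continuing the sketch above):
<programlisting>
(c.override { a = 4; }).result   # evaluates to 6
</programlisting>
</para>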
</section>
</section>


@ -5,16 +5,12 @@
<title>prefer-remote-fetch overlay</title>
<para>
<function>prefer-remote-fetch</function> is an overlay that download sources
on remote builder. This is useful when the evaluating machine has a slow
upload while the builder can fetch faster directly from the source. To use
it, put the following snippet as a new overlay:
<function>prefer-remote-fetch</function> is an overlay that downloads sources on remote builders. This is useful when the evaluating machine has a slow upload speed, while the builder can fetch faster directly from the source. To use it, put the following snippet as a new overlay:
<programlisting>
self: super:
(super.prefer-remote-fetch self super)
</programlisting>
A full configuration example for that sets the overlay up for your own
account, could look like this
A full configuration example that sets up the overlay for your own account could look like this:
<screen>
<prompt>$ </prompt>mkdir ~/.config/nixpkgs/overlays/
<prompt>$ </prompt>cat &gt; ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix &lt;&lt;EOF


@ -1,26 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-mkShell">
<title>pkgs.mkShell</title>
<para>
<function>pkgs.mkShell</function> is a special kind of derivation that is
only useful when used in combination with <command>nix-shell</command>. It will
in fact fail to instantiate when invoked with <command>nix-build</command>.
</para>
<section xml:id="sec-pkgs-mkShell-usage">
<title>Usage</title>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
# this will make all the build inputs from hello and gnutar
# available to the shell environment
inputsFrom = with pkgs; [ hello gnutar ];
buildInputs = [ pkgs.gnumake ];
}
]]></programlisting>
</section>
</section>


@ -1,74 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-snapTools">
<title>pkgs.snapTools</title>
<para>
<varname>pkgs.snapTools</varname> is a set of functions for creating
Snapcraft images. Snap and Snapcraft are not used to perform these operations.
</para>
<section xml:id="ssec-pkgs-snapTools-makeSnap-signature">
<title>The makeSnap Function</title>
<para>
<function>makeSnap</function> takes a single named argument,
<parameter>meta</parameter>. This argument mirrors
<link xlink:href="https://docs.snapcraft.io/snap-format">the upstream
<filename>snap.yaml</filename> format</link> exactly.
</para>
<para>
The <parameter>base</parameter> parameter should not be specified, as
<function>makeSnap</function> will forcibly set it.
</para>
<para>
Currently, <function>makeSnap</function> does not support creating GUI
stubs.
</para>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-hello">
<title>Build a Hello World Snap</title>
<example xml:id="ex-snapTools-buildSnap-hello">
<title>Making a Hello World Snap</title>
<para>
The following expression packages GNU Hello as a Snapcraft snap.
</para>
<programlisting><xi:include href="./snap/example-hello.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with
<command>snap install ./result --dangerous</command>.
<command>hello</command> will now be the Snapcraft version of the package.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-firefox">
<title>Build a Firefox Snap</title>
<example xml:id="ex-snapTools-buildSnap-firefox">
<title>Making a Graphical Snap</title>
<para>
Graphical programs require many more integrations with the host. This
example uses Firefox, because it is one of the most
complicated programs we could package.
</para>
<programlisting><xi:include href="./snap/example-firefox.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with
<command>snap install ./result --dangerous</command>.
<command>nix-example-firefox</command> will now be the Snapcraft version of
the Firefox package.
</para>
<para>
The specific meaning behind plugs can be looked up in the
<link xlink:href="https://docs.snapcraft.io/supported-interfaces">Snapcraft
interface documentation</link>.
</para>
</example>
</section>
</section>


@ -1,113 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-trivial-builders">
<title>Trivial builders</title>
<para>
Nixpkgs provides a couple of functions that help with building derivations.
The most important one, <function>stdenv.mkDerivation</function>, has already
been documented above. The following functions wrap
<function>stdenv.mkDerivation</function>, making it easier to use in certain
cases.
</para>
<variablelist>
<varlistentry>
<term>
<literal>runCommand</literal>
</term>
<listitem>
<para>
This takes three arguments, <literal>name</literal>,
<literal>env</literal>, and <literal>buildCommand</literal>.
<literal>name</literal> is just the name that Nix will append to the store
path in the same way that <literal>stdenv.mkDerivation</literal> uses its
<literal>name</literal> attribute. <literal>env</literal> is an attribute
set specifying environment variables that will be set for this derivation.
These attributes are then passed to the wrapped
<literal>stdenv.mkDerivation</literal>. <literal>buildCommand</literal>
specifies the commands that will be run to create this derivation. Note
that you will need to create <literal>$out</literal> for Nix to register
the command as successful.
</para>
<para>
An example of using <literal>runCommand</literal> is provided below.
</para>
<programlisting>
(import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out
echo I can write data to the Nix store > $out/message
echo I can also run basic commands like:
echo ls
ls
echo whoami
whoami
echo date
date
''
</programlisting>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>runCommandCC</literal>
</term>
<listitem>
<para>
This works just like <literal>runCommand</literal>. The only difference is
that it also provides a C compiler in <literal>buildCommand</literal>'s
environment. To minimize your dependencies, you should only use this if
you are sure you will need a C compiler as part of running your command.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>writeTextFile</literal>, <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, <literal>writeScriptBin</literal>
</term>
<listitem>
<para>
These functions write <literal>text</literal> to the Nix store. This is
useful for creating scripts from Nix expressions.
<literal>writeTextFile</literal> takes an attribute set and expects two
arguments, <literal>name</literal> and <literal>text</literal>.
<literal>name</literal> corresponds to the name used in the Nix store
path. <literal>text</literal> will be the contents of the file. You can
also set <literal>executable</literal> to true to make this file have the
executable bit set.
</para>
<para>
Many more commands wrap <literal>writeTextFile</literal> including
<literal>writeText</literal>, <literal>writeTextDir</literal>,
<literal>writeScript</literal>, and <literal>writeScriptBin</literal>.
These are convenience functions over <literal>writeTextFile</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>symlinkJoin</literal>
</term>
<listitem>
<para>
This can be used to put many derivations into the same directory
structure. It works by creating a new derivation and adding symlinks to
each of the paths listed. It expects two arguments,
<literal>name</literal>, and <literal>paths</literal>.
<literal>name</literal> is the name used in the Nix store path for the
created derivation. <literal>paths</literal> is a list of paths that will
be symlinked. These paths can be Nix store derivations or any other
subdirectory contained within. A short sketch combining
<literal>writeScriptBin</literal> and <literal>symlinkJoin</literal>
follows this list.
</para>
</listitem>
</varlistentry>
</variablelist>
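<para>
  A sketch combining two of the helpers above (all names are placeholders):
  it builds a small script with <literal>writeScriptBin</literal> and links
  it next to GNU Hello with <literal>symlinkJoin</literal>.
</para>
<programlisting>
with import &lt;nixpkgs&gt; {};

let
  greet = writeScriptBin "greet" ''
    #!${runtimeShell}
    echo Hello from a trivial builder
  '';
in symlinkJoin {
  name = "my-tools";
  paths = [ greet hello ];
}
</programlisting>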
</section>


@ -1,51 +0,0 @@
---
title: Introduction
author: Frederik Rietdijk
date: 2015-11-25
---
# Introduction
The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the
[Nix package manager](http://nixos.org/nix/), released under a
[permissive MIT/X11 license](https://github.com/NixOS/nixpkgs/blob/master/COPYING).
Packages are available for several platforms, and can be used with the Nix
package manager on most GNU/Linux distributions as well as NixOS.
This manual primarily describes how to write packages for the Nix Packages collection
(Nixpkgs). Thus it is mainly for packagers and developers who want to add packages to
Nixpkgs. If you would like to learn more about the Nix package manager and the Nix
expression language, then you are kindly referred to the [Nix manual](http://nixos.org/nix/manual/).
## Overview of Nixpkgs
Nix expressions describe how to build packages from source and are collected in
the [nixpkgs repository](https://github.com/NixOS/nixpkgs). Also included in the
collection are Nix expressions for
[NixOS modules](http://nixos.org/nixos/manual/index.html#sec-writing-modules).
With these expressions the Nix package manager can build binary packages.
Packages, including the Nix packages collection, are distributed through
[channels](http://nixos.org/nix/manual/#sec-channels). The collection is
distributed for users of Nix on non-NixOS distributions through the channel
`nixpkgs`. Users of NixOS generally use one of the `nixos-*` channels, e.g.
`nixos-16.03`, which includes all packages and modules for the stable NixOS
16.03. Stable NixOS releases are generally only given
security updates. More up to date packages and modules are available via the
`nixos-unstable` channel.
Both `nixos-unstable` and `nixpkgs` follow the `master` branch of the Nixpkgs
repository, although both do lag the `master` branch by generally
[a couple of days](http://howoldis.herokuapp.com/). Updates to a channel are
distributed as soon as all tests for that channel pass, e.g.
[this table](http://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
shows the status of tests for the `nixpkgs` channel.
The tests are conducted by a cluster called [Hydra](http://nixos.org/hydra/),
which also builds binary packages from the Nix expressions in Nixpkgs for
`x86_64-linux`, `i686-linux` and `x86_64-darwin`.
The binaries are made available via a [binary cache](https://cache.nixos.org).
The current Nix expressions of the channels are available in the
[`nixpkgs-channels`](https://github.com/NixOS/nixpkgs-channels) repository,
which has branches corresponding to the available channels.

View File

@ -95,7 +95,7 @@ $ nix-build
The Android SDK gets deployed with all desired plugin versions.
We can also deploy subsets of the Android SDK. For example, to only the the
We can also deploy subsets of the Android SDK. For example, to deploy only the
`platform-tools` package, you can evaluate the following expression:
```nix

View File

@ -7,12 +7,7 @@
<title>Introduction</title>
<para>
In this document and related Nix expressions, we use the term,
<emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name of
the Erlang Virtual Machine and, as far as we're concerned, from a packaging
perspective, all languages that run on the BEAM are interchangeable. That
which varies, like the build system, is transparent to users of any given
BEAM package, so we make no distinction.
In this document and related Nix expressions, we use the term, <emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name of the Erlang Virtual Machine and, as far as we're concerned, from a packaging perspective, all languages that run on the BEAM are interchangeable. That which varies, like the build system, is transparent to users of any given BEAM package, so we make no distinction.
</para>
</section>
@ -20,57 +15,32 @@
<title>Structure</title>
<para>
All BEAM-related expressions are available via the top-level
<literal>beam</literal> attribute, which includes:
All BEAM-related expressions are available via the top-level <literal>beam</literal> attribute, which includes:
</para>
<itemizedlist>
<listitem>
<para>
<literal>interpreters</literal>: a set of compilers running on the BEAM,
including multiple Erlang/OTP versions
(<literal>beam.interpreters.erlangR19</literal>, etc), Elixir
(<literal>beam.interpreters.elixir</literal>) and LFE
(<literal>beam.interpreters.lfe</literal>).
<literal>interpreters</literal>: a set of compilers running on the BEAM, including multiple Erlang/OTP versions (<literal>beam.interpreters.erlangR19</literal>, etc), Elixir (<literal>beam.interpreters.elixir</literal>) and LFE (<literal>beam.interpreters.lfe</literal>).
</para>
</listitem>
<listitem>
<para>
<literal>packages</literal>: a set of package sets, each compiled with a
specific Erlang/OTP version, e.g.
<literal>beam.packages.erlangR19</literal>.
<literal>packages</literal>: a set of package builders (Mix and rebar3), each compiled with a specific Erlang/OTP version, e.g. <literal>beam.packages.erlangR19</literal>.
</para>
</listitem>
</itemizedlist>
<para>
The default Erlang compiler, defined by
<literal>beam.interpreters.erlang</literal>, is aliased as
<literal>erlang</literal>. The default BEAM package set is defined by
<literal>beam.packages.erlang</literal> and aliased at the top level as
<literal>beamPackages</literal>.
The default Erlang compiler, defined by <literal>beam.interpreters.erlang</literal>, is aliased as <literal>erlang</literal>. The default BEAM package set is defined by <literal>beam.packages.erlang</literal> and aliased at the top level as <literal>beamPackages</literal>.
</para>
<para>
To create a package set built with a custom Erlang version, use the lambda,
<literal>beam.packagesWith</literal>, which accepts an Erlang/OTP derivation
and produces a package set similar to
<literal>beam.packages.erlang</literal>.
To create a package builder built with a custom Erlang version, use the lambda, <literal>beam.packagesWith</literal>, which accepts an Erlang/OTP derivation and produces a package builder similar to <literal>beam.packages.erlang</literal>.
</para>
<para>
Many Erlang/OTP distributions available in
<literal>beam.interpreters</literal> have versions with ODBC and/or Java
enabled. For example, there's
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which corresponds
to <literal>beam.interpreters.erlangR19</literal>.
</para>
<para xml:id="erlang-call-package">
We also provide the lambda,
<literal>beam.packages.erlang.callPackage</literal>, which simplifies
writing BEAM package definitions by injecting all packages from
<literal>beam.packages.erlang</literal> into the top-level context.
Many Erlang/OTP distributions available in <literal>beam.interpreters</literal> have versions with ODBC and/or Java enabled, or without wx (no observer support). For example, there's <literal>beam.interpreters.erlangR22_odbc_javac</literal> and <literal>beam.interpreters.erlangR22_nox</literal>, both of which correspond to <literal>beam.interpreters.erlangR22</literal>.
</para>
</section>
@ -81,28 +51,7 @@
<title>Rebar3</title>
<para>
By default, Rebar3 wants to manage its own dependencies. This is perfectly
acceptable in the normal, non-Nix setup, but in the Nix world, it is not.
To rectify this, we provide two versions of Rebar3:
<itemizedlist>
<listitem>
<para>
<literal>rebar3</literal>: patched to remove the ability to download
anything. When not running it via <literal>nix-shell</literal> or
<literal>nix-build</literal>, it's probably not going to work as
desired.
</para>
</listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It should
work exactly as would any other version of Rebar3. Any Erlang package
should rely on <literal>rebar3</literal> instead. See
<xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
We provide a version of Rebar3 under <literal>rebar3</literal>. We also provide a helper to fetch Rebar3 dependencies from a lockfile under <literal>fetchRebar3Deps</literal>.
</para>
</section>
@ -110,10 +59,7 @@
<title>Mix &amp; Erlang.mk</title>
<para>
Both Mix and Erlang.mk work exactly as expected. There is a bootstrap
process that needs to be run for both, however, which is supported by the
<literal>buildMix</literal> and <literal>buildErlangMk</literal>
derivations, respectively.
Both Mix and Erlang.mk work exactly as expected. There is a bootstrap process that needs to be run for both, however, which is supported by the <literal>buildMix</literal> and <literal>buildErlangMk</literal> derivations, respectively.
</para>
</section>
</section>
@ -122,41 +68,14 @@
<title>How to Install BEAM Packages</title>
<para>
BEAM packages are not registered at the top level, simply because they are
not relevant to the vast majority of Nix users. They are installable using
the <literal>beam.packages.erlang</literal> attribute set (aliased as
<literal>beamPackages</literal>), which points to packages built by the
default Erlang/OTP version in Nixpkgs, as defined by
<literal>beam.interpreters.erlang</literal>. To list the available packages
in <literal>beamPackages</literal>, use the following command:
BEAM builders are not registered at the top level, simply because they are not relevant to the vast majority of Nix users.
To install any of those builders into your profile, refer to them by their attribute path, for example <literal>beamPackages.rebar3</literal>:
</para>
<screen>
<prompt>$ </prompt>nix-env -f &quot;&lt;nixpkgs&gt;&quot; -qaP -A beamPackages
beamPackages.esqlite esqlite-0.2.1
beamPackages.goldrush goldrush-0.1.7
beamPackages.ibrowse ibrowse-4.2.2
beamPackages.jiffy jiffy-0.14.5
beamPackages.lager lager-3.0.2
beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0
</screen>
<para>
To install any of those packages into your profile, refer to them by their
attribute path (first column):
</para>
<screen>
<prompt>$ </prompt>nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</screen>
<para>
The attribute path of any BEAM package corresponds to the name of that
particular package in <link xlink:href="https://hex.pm">Hex</link> or its
OTP Application/Release name.
</para>
</section>
<screen>
<prompt>$ </prompt>nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.rebar3
</screen>
</section>
<section xml:id="packaging-beam-applications">
<title>Packaging BEAM Applications</title>
@ -168,53 +87,11 @@ beamPackages.rebar3-pc pc-1.1.0
<title>Rebar3 Packages</title>
<para>
The Nix function, <literal>buildRebar3</literal>, defined in
<literal>beam.packages.erlang.buildRebar3</literal> and aliased at the top
level, can be used to build a derivation that understands how to build a
Rebar3 project. For example, we can build
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>
as follows:
</para>
<programlisting>
{ stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec {
name = "hex2nix";
version = "0.0.1";
src = fetchFromGitHub {
owner = "ericbmerritt";
repo = "hex2nix";
rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
};
beamDeps = [ ibrowse jsx erlware_commons ];
}
</programlisting>
<para>
Such derivations are callable with
<literal>beam.packages.erlang.callPackage</literal> (see
<xref
linkend="erlang-call-package"/>). To call this package using
the normal <literal>callPackage</literal>, refer to dependency packages
via <literal>beamPackages</literal>, e.g.
<literal>beamPackages.ibrowse</literal>.
The Nix function, <literal>buildRebar3</literal>, defined in <literal>beam.packages.erlang.buildRebar3</literal> and aliased at the top level, can be used to build a derivation that understands how to build a Rebar3 project.
</para>
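<para>
A minimal sketch of such a derivation, with a hypothetical project name and an illustrative dependency list, might look as follows:
</para>
<programlisting>
{ buildRebar3, ibrowse, jsx, erlware_commons }:

buildRebar3 {
  name = "my-rebar3-app";
  version = "0.1.0";

  # build the project in the current directory
  src = ./.;

  # BEAM dependencies taken from the package set
  beamDeps = [ ibrowse jsx erlware_commons ];
}
</programlisting>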
<para>
Notably, <literal>buildRebar3</literal> includes
<literal>beamDeps</literal>, while <literal>stdenv.mkDerivation</literal>
does not. BEAM dependencies added there will be correctly handled by the
system.
</para>
<para>
If a package needs to compile native code via Rebar3's port compilation
mechanism, add <literal>compilePort = true;</literal> to the derivation.
If a package needs to compile native code via Rebar3's port compilation mechanism, add <literal>compilePort = true;</literal> to the derivation.
</para>
</section>
@ -222,96 +99,21 @@ buildRebar3 rec {
<title>Erlang.mk Packages</title>
<para>
Erlang.mk functions similarly to Rebar3, except we use
<literal>buildErlangMk</literal> instead of
<literal>buildRebar3</literal>.
Erlang.mk functions similarly to Rebar3, except we use <literal>buildErlangMk</literal> instead of <literal>buildRebar3</literal>.
</para>
<programlisting>
{ stdenv, buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
name = "cowboy";
version = "1.0.4";
src = fetchHex {
pkg = "cowboy";
version = "1.0.4";
sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
meta = {
description = ''
Small, fast, modular HTTP server written in Erlang
'';
license = stdenv.lib.licenses.isc;
homepage = https://github.com/ninenines/cowboy;
};
}
</programlisting>
</section>
<section xml:id="mix-packages">
<title>Mix Packages</title>
<para>
Mix functions similarly to Rebar3, except we use
<literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
Mix functions similarly to Rebar3, except we use <literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
</para>
<programlisting>
{ stdenv, buildMix, fetchHex, plug, absinthe }:
buildMix {
name = "absinthe_plug";
version = "1.0.0";
src = fetchHex {
pkg = "absinthe_plug";
version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
};
beamDeps = [ plug absinthe ];
meta = {
description = ''
A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug;
};
}
</programlisting>
<para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para>
<programlisting>
{ stdenv, buildHex, buildMix, plug, absinthe }:
buildHex {
name = "absinthe_plug";
version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
builder = buildMix;
beamDeps = [ plug absinthe ];
meta = {
description = ''
A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug;
};
}
</programlisting>
</section>
</section>
</section>
@ -319,75 +121,13 @@ buildHex {
<section xml:id="how-to-develop">
<title>How to Develop</title>
<section xml:id="accessing-an-environment">
<title>Accessing an Environment</title>
<para>
Often, we simply want to access a valid environment that contains a
specific package and its dependencies. We can accomplish that with the
<literal>env</literal> attribute of a derivation. For example, let's say we
want to access an Erlang REPL with <literal>ibrowse</literal> loaded up. We
could do the following:
</para>
<screen>
<prompt>$ </prompt><userinput>nix-shell -A beamPackages.ibrowse.env --run "erl"</userinput>
<computeroutput>Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G)</computeroutput>
<prompt>1> </prompt><userinput>m(ibrowse).</userinput>
<computeroutput>Module: ibrowse
MD5: 3b3e0137d0cbb28070146978a3392945
Compiled: January 10 2016, 23:34
Object file: /nix/store/g1rlf65rdgjs4abbyj4grp37ry7ywivj-ibrowse-4.2.2/lib/erlang/lib/ibrowse-4.2.2/ebin/ibrowse.beam
Compiler options: [{outdir,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/ebin"},
debug_info,debug_info,nowarn_shadow_vars,
warn_unused_import,warn_unused_vars,warnings_as_errors,
{i,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/include"}]
Exports:
add_config/1 send_req_direct/7
all_trace_off/0 set_dest/3
code_change/3 set_max_attempts/3
get_config_value/1 set_max_pipeline_size/3
get_config_value/2 set_max_sessions/3
get_metrics/0 show_dest_status/0
get_metrics/2 show_dest_status/1
handle_call/3 show_dest_status/2
handle_cast/2 spawn_link_worker_process/1
handle_info/2 spawn_link_worker_process/2
init/1 spawn_worker_process/1
module_info/0 spawn_worker_process/2
module_info/1 start/0
rescan_config/0 start_link/0
rescan_config/1 stop/0
send_req/3 stop_worker_process/1
send_req/4 stream_close/1
send_req/5 stream_next/1
send_req/6 terminate/2
send_req_direct/4 trace_off/0
send_req_direct/5 trace_off/2
send_req_direct/6 trace_on/0
trace_on/2
ok</computeroutput>
<prompt>2></prompt>
</screen>
<para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
to this functionality.
</para>
</section>
<section xml:id="creating-a-shell">
<title>Creating a Shell</title>
<para>
Getting access to an environment often isn't enough to do real development.
Usually, we need to create a <literal>shell.nix</literal> file and do our
development inside of the environment specified therein. This file looks a
lot like the packaging described above, except that <literal>src</literal>
points to the project root and we call the package directly.
</para>
<para>
Usually, we need to create a <literal>shell.nix</literal> file and do our development inside of the environment specified therein. Just install your version of Erlang and any other interpreters you need, and then use your normal build tools.
As an example with Elixir:
</para>
<programlisting>
{ pkgs ? import &lt;nixpkgs&gt; {} }:
@ -396,133 +136,24 @@ with pkgs;
let
f = { buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 {
name = "hex2nix";
version = "0.1.0";
src = ./.;
beamDeps = [ ibrowse jsx erlware_commons ];
};
drv = beamPackages.callPackage f {};
elixir = beam.packages.erlangR22.elixir_1_9;
in
mkShell {
buildInputs = [ elixir ];
drv
ERL_INCLUDE_PATH="${erlang}/lib/erlang/usr/include";
}
</programlisting>
<section xml:id="building-in-a-shell">
<title>Building in a Shell (for Mix Projects)</title>
<para>
We can leverage the support of the derivation, irrespective of the build
derivation, by calling the commands themselves.
</para>
<programlisting>
# =============================================================================
# Variables
# =============================================================================
NIX_TEMPLATES := "$(CURDIR)/nix-templates"
TARGET := "$(PREFIX)"
PROJECT_NAME := thorndyke
NIXPKGS=../nixpkgs
NIX_PATH=nixpkgs=$(NIXPKGS)
NIX_SHELL=nix-shell -I "$(NIX_PATH)" --pure
# =============================================================================
# Rules
# =============================================================================
.PHONY= all test clean repl shell build test analyze configure install \
test-nix-install publish plt analyze
all: build
guard-%:
@ if [ "${${*}}" == "" ]; then \
echo "Environment variable $* not set"; \
exit 1; \
fi
clean:
rm -rf _build
rm -rf .cache
repl:
$(NIX_SHELL) --run "iex -pa './_build/prod/lib/*/ebin'"
shell:
$(NIX_SHELL)
configure:
$(NIX_SHELL) --command 'eval "$$configurePhase"'
build: configure
$(NIX_SHELL) --command 'eval "$$buildPhase"'
install:
$(NIX_SHELL) --command 'eval "$$installPhase"'
test:
$(NIX_SHELL) --command 'mix test --no-start --no-deps-check'
plt:
$(NIX_SHELL) --run "mix dialyzer.plt --no-deps-check"
analyze: build plt
$(NIX_SHELL) --run "mix dialyzer --no-compile"
</programlisting>
<para>
Using a <literal>shell.nix</literal> as described (see
<xref
linkend="creating-a-shell"/>) should just work. Aside from
<literal>test</literal>, <literal>plt</literal>, and
<literal>analyze</literal>, the Make targets work just fine for all of the
build derivations.
Using a <literal>shell.nix</literal> as described (see <xref
linkend="creating-a-shell"/>) should just work.
</para>
</section>
</section>
</section>
<section xml:id="generating-packages-from-hex-with-hex2nix">
<title>Generating Packages from Hex with <literal>hex2nix</literal></title>
<para>
Updating the <link xlink:href="https://hex.pm">Hex</link> package set
requires
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>.
Given the path to the Erlang modules (usually
<literal>pkgs/development/erlang-modules</literal>), it will dump a file
called <literal>hex-packages.nix</literal>, containing all the packages that
use a recognized build system in
<link
xlink:href="https://hex.pm">Hex</link>. It can't be determined,
however, whether every package is buildable.
</para>
<para>
To make life easier for our users, try to build every
<link
xlink:href="https://hex.pm">Hex</link> package and remove those
that fail. To do that, simply run the following command in the root of your
<literal>nixpkgs</literal> repository:
</para>
<screen>
<prompt>$ </prompt>nix-build -A beamPackages
</screen>
<para>
That will attempt to build every package in <literal>beamPackages</literal>.
Then manually remove those that fail. Hopefully, someone will improve
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>
in the future to automate the process.
</para>
</section>
</section>

View File

@ -4,32 +4,22 @@
<title>Bower</title>
<para>
<link xlink:href="http://bower.io">Bower</link> is a package manager for web
site front-end components. Bower packages (comprising of build artefacts and
sometimes sources) are stored in <command>git</command> repositories,
typically on Github. The package registry is run by the Bower team with
package metadata coming from the <filename>bower.json</filename> file within
each package.
<link xlink:href="http://bower.io">Bower</link> is a package manager for web site front-end components. Bower packages (comprising of build artefacts and sometimes sources) are stored in <command>git</command> repositories, typically on Github. The package registry is run by the Bower team with package metadata coming from the <filename>bower.json</filename> file within each package.
</para>
<para>
The end result of running Bower is a <filename>bower_components</filename>
directory which can be included in the web app's build process.
The end result of running Bower is a <filename>bower_components</filename> directory which can be included in the web app's build process.
</para>
<para>
Bower can be run interactively, by installing
<varname>nodePackages.bower</varname>. More interestingly, the Bower
components can be declared in a Nix derivation, with the help of
<varname>nodePackages.bower2nix</varname>.
Bower can be run interactively, by installing <varname>nodePackages.bower</varname>. More interestingly, the Bower components can be declared in a Nix derivation, with the help of <varname>nodePackages.bower2nix</varname>.
</para>
<section xml:id="ssec-bower2nix-usage">
<title><command>bower2nix</command> usage</title>
<para>
Suppose you have a <filename>bower.json</filename> with the following
contents:
Suppose you have a <filename>bower.json</filename> with the following contents:
<example xml:id="ex-bowerJson">
<title><filename>bower.json</filename></title>
<programlisting language="json">
@ -45,8 +35,7 @@
</para>
<para>
Running <command>bower2nix</command> will produce something like the
following output:
Running <command>bower2nix</command> will produce something like the following output:
<programlisting language="nix">
<![CDATA[{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
@ -58,15 +47,11 @@ buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
</para>
<para>
Using the <command>bower2nix</command> command line arguments, the output
can be redirected to a file. A name like
<filename>bower-packages.nix</filename> would be fine.
Using the <command>bower2nix</command> command line arguments, the output can be redirected to a file. A name like <filename>bower-packages.nix</filename> would be fine.
</para>
<para>
The resulting derivation is a union of all the downloaded Bower packages
(and their dependencies). To use it, they still need to be linked together
by Bower, which is where <varname>buildBowerComponents</varname> is useful.
The resulting derivation is a union of all the downloaded Bower packages (and their dependencies). To use it, they still need to be linked together by Bower, which is where <varname>buildBowerComponents</varname> is useful.
</para>
</section>
@ -74,10 +59,7 @@ buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
<title><varname>buildBowerComponents</varname> function</title>
<para>
The function is implemented in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/bower-modules/generic/default.nix">
<filename>pkgs/development/bower-modules/generic/default.nix</filename></link>.
Example usage:
The function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/bower-modules/generic/default.nix"> <filename>pkgs/development/bower-modules/generic/default.nix</filename></link>. Example usage:
<example xml:id="ex-buildBowerComponents">
<title>buildBowerComponents</title>
<programlisting language="nix">
@ -91,34 +73,27 @@ bowerComponents = buildBowerComponents {
</para>
<para>
In <xref linkend="ex-buildBowerComponents" />, the following arguments are
of special significance to the function:
In <xref linkend="ex-buildBowerComponents" />, the following arguments are of special significance to the function:
<calloutlist>
<callout arearefs="ex-buildBowerComponents-1">
<para>
<varname>generated</varname> specifies the file which was created by
<command>bower2nix</command>.
<varname>generated</varname> specifies the file which was created by <command>bower2nix</command>.
</para>
</callout>
<callout arearefs="ex-buildBowerComponents-2">
<para>
<varname>src</varname> is your project's sources. It needs to contain a
<filename>bower.json</filename> file.
<varname>src</varname> is your project's sources. It needs to contain a <filename>bower.json</filename> file.
</para>
</callout>
</calloutlist>
</para>
<para>
<varname>buildBowerComponents</varname> will run Bower to link together the
output of <command>bower2nix</command>, resulting in a
<filename>bower_components</filename> directory which can be used.
<varname>buildBowerComponents</varname> will run Bower to link together the output of <command>bower2nix</command>, resulting in a <filename>bower_components</filename> directory which can be used.
</para>
<para>
Here is an example of a web frontend build process using
<command>gulp</command>. You might use <command>grunt</command>, or anything
else.
Here is an example of a web frontend build process using <command>gulp</command>. You might use <command>grunt</command>, or anything else.
</para>
<example xml:id="ex-bowerGulpFile">
@ -174,21 +149,17 @@ pkgs.stdenv.mkDerivation {
<calloutlist>
<callout arearefs="ex-buildBowerComponentsDefault-1">
<para>
The result of <varname>buildBowerComponents</varname> is an input to the
frontend build.
The result of <varname>buildBowerComponents</varname> is an input to the frontend build.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-2">
<para>
Whether to symlink or copy the <filename>bower_components</filename>
directory depends on the build tool in use. In this case a copy is used
to avoid <command>gulp</command> silliness with permissions.
Whether to symlink or copy the <filename>bower_components</filename> directory depends on the build tool in use. In this case a copy is used to avoid <command>gulp</command> silliness with permissions.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-3">
<para>
<command>gulp</command> requires <varname>HOME</varname> to refer to a
writeable directory.
<command>gulp</command> requires <varname>HOME</varname> to refer to a writeable directory.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-4">
@ -210,17 +181,13 @@ pkgs.stdenv.mkDerivation {
</term>
<listitem>
<para>
This means that Bower was looking for a package version which doesn't
exist in the generated <filename>bower-packages.nix</filename>.
This means that Bower was looking for a package version which doesn't exist in the generated <filename>bower-packages.nix</filename>.
</para>
<para>
If <filename>bower.json</filename> has been updated, then run
<command>bower2nix</command> again.
If <filename>bower.json</filename> has been updated, then run <command>bower2nix</command> again.
</para>
<para>
It could also be a bug in <command>bower2nix</command> or
<command>fetchbower</command>. If possible, try reformulating the version
specification in <filename>bower.json</filename>.
It could also be a bug in <command>bower2nix</command> or <command>fetchbower</command>. If possible, try reformulating the version specification in <filename>bower.json</filename>.
</para>
</listitem>
</varlistentry>

View File

@ -4,31 +4,19 @@
<title>Coq</title>
<para>
Coq libraries should be installed in
<literal>$(out)/lib/coq/${coq.coq-version}/user-contrib/</literal>. Such
directories are automatically added to the <literal>$COQPATH</literal>
environment variable by the hook defined in the Coq derivation.
Coq libraries should be installed in <literal>$(out)/lib/coq/${coq.coq-version}/user-contrib/</literal>. Such directories are automatically added to the <literal>$COQPATH</literal> environment variable by the hook defined in the Coq derivation.
</para>
<para>
Some extensions (plugins) might require OCaml and sometimes other OCaml
packages. The <literal>coq.ocamlPackages</literal> attribute can be used to
depend on the same package set Coq was built against.
Some extensions (plugins) might require OCaml and sometimes other OCaml packages. The <literal>coq.ocamlPackages</literal> attribute can be used to depend on the same package set Coq was built against.
</para>
<para>
Coq libraries may be compatible with some specific versions of Coq only. The
<literal>compatibleCoqVersions</literal> attribute is used to precisely
select those versions of Coq that are compatible with this derivation.
Coq libraries may be compatible with some specific versions of Coq only. The <literal>compatibleCoqVersions</literal> attribute is used to precisely select those versions of Coq that are compatible with this derivation.
</para>
<para>
Here is a simple package example. It is a pure Coq library, thus it depends
on Coq. It builds on the Mathematical Components library, thus it also takes
<literal>mathcomp</literal> as <literal>buildInputs</literal>. Its
<literal>Makefile</literal> has been generated using
<literal>coq_makefile</literal> so we only have to set the
<literal>$COQLIB</literal> variable at install time.
Here is a simple package example. It is a pure Coq library, thus it depends on Coq. It builds on the Mathematical Components library, thus it also takes <literal>mathcomp</literal> as <literal>buildInputs</literal>. Its <literal>Makefile</literal> has been generated using <literal>coq_makefile</literal> so we only have to set the <literal>$COQLIB</literal> variable at install time.
</para>
<programlisting>

View File

@ -0,0 +1,75 @@
# Dotnet
## Local Development Workflow
For local development, it's recommended to use nix-shell to create a dotnet environment:
```nix
# shell.nix
with import <nixpkgs> {};
mkShell {
name = "dotnet-env";
buildInputs = [
dotnet-sdk_3
];
}
```
### Using many sdks in a workflow
It's very likely that more than one sdk will be needed on a given project. Dotnet provides several different frameworks (e.g. dotnetcore, aspnetcore, etc.) as well as many versions for a given framework. Normally, dotnet is able to fetch a framework and install it relative to the executable. However, this would mean writing to the nix store in nixpkgs, which is read-only. To support the many-sdk use case, one can compose an environment using `dotnetCorePackages.combinePackages`:
```nix
with import <nixpkgs> {};
mkShell {
name = "dotnet-env";
buildInputs = [
(with dotnetCorePackages; combinePackages [
sdk_3_1
sdk_3_0
sdk_2_1
])
];
}
```
This will produce a dotnet installation that has the dotnet 3.1, 3.0, and 2.1 sdk. The first sdk listed will have its CLI utility present in the resulting environment. Example info output:
```
$ dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 3.1.101
Commit: b377529961
...
.NET Core SDKs installed:
2.1.803 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/sdk]
3.0.102 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/sdk]
3.1.101 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.15 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.15 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.2 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.1 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.15 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.2 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.1 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.NETCore.App]
```
## dotnet-sdk vs dotnetCorePackages.sdk
The `dotnetCorePackages.sdk_X_Y` is preferred over the old `dotnet-sdk` as both the major and minor version are very important for a dotnet environment. If a given minor version isn't present (or was changed), then this will likely break your ability to build a project.
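For example, a development shell pinned to the 3.1 SDK line might look like the following sketch (it assumes `dotnetCorePackages.sdk_3_1`, as used in the `combinePackages` example above):
```nix
# shell.nix
with import <nixpkgs> {};

mkShell {
  name = "dotnet-3.1-env";
  # pin the exact SDK line instead of the generic dotnet-sdk
  buildInputs = [ dotnetCorePackages.sdk_3_1 ];
}
```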
## dotnetCorePackages.sdk vs dotnetCorePackages.netcore vs dotnetCorePackages.aspnetcore
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `netcore` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications.
## Packaging a Dotnet Application
Ideally, we would like to build against the sdk, then only have the dotnet runtime available in the runtime closure.
TODO: Create closure-friendly way to package dotnet applications

View File

@ -1,4 +1,4 @@
# User's Guide to Emscripten in Nixpkgs
# Emscripten
[Emscripten](https://github.com/kripken/emscripten): An LLVM-to-JavaScript Compiler

View File

@ -0,0 +1,282 @@
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-language-gnome">
<title>GNOME</title>
<section xml:id="ssec-gnome-packaging">
<title>Packaging GNOME applications</title>
<para>
Programs in the GNOME universe are written in various languages but they all use GObject-based libraries like GLib, GTK or GStreamer. These libraries are often modular, relying on looking into certain directories to find their modules. However, due to Nixs specific file system organization, this will fail without our intervention. Fortunately, the libraries usually allow overriding the directories through environment variables, either natively or thanks to a patch in nixpkgs. <link xlink:href="#fun-wrapProgram">Wrapping</link> the executables to ensure correct paths are available to the application constitutes a significant part of packaging a modern desktop application. In this section, we will describe various modules needed by such applications, environment variables needed to make the modules load, and finally a script that will do the work for us.
</para>
<section xml:id="ssec-gnome-settings">
<title>Settings</title>
<para>
<link xlink:href="https://developer.gnome.org/gio/stable/GSettings.html">GSettings</link> API is often used for storing settings. GSettings schemas are required, to know the type and other metadata of the stored values. GLib looks for <filename>glib-2.0/schemas/gschemas.compiled</filename> files inside the directories of <envar>XDG_DATA_DIRS</envar>.
</para>
<para>
On Linux, the GSettings API is implemented using the <link xlink:href="https://wiki.gnome.org/Projects/dconf">dconf</link> backend. You will need to add the <literal>dconf</literal> GIO module to the <envar>GIO_EXTRA_MODULES</envar> variable, otherwise the <literal>memory</literal> backend will be used and the saved settings will not be persistent.
</para>
<para>
Lastly, you will need the dconf database D-Bus service itself. You can enable it using <option>programs.dconf.enable</option>.
</para>
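<para>
On NixOS, that is a one-line addition to the system configuration, as in this minimal sketch:
</para>
<programlisting>
{
  programs.dconf.enable = true;
}
</programlisting>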
<para>
Some applications will also require <package>gsettings-desktop-schemas</package> for things like reading proxy configuration or user interface customization. This dependency is often not mentioned by upstream, so you should grep for <literal>org.gnome.desktop</literal> and <literal>org.gnome.system</literal> to see if the schemas are needed.
</para>
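<para>
For example, a quick check in an unpacked source tree could look like this (the directory name is a hypothetical placeholder):
</para>
<screen>
<prompt>$ </prompt>grep -R "org.gnome.desktop\|org.gnome.system" ./source
</screen>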
</section>
<section xml:id="ssec-gnome-icons">
<title>Icons</title>
<para>
When an application uses icons, an icon theme should be available in <envar>XDG_DATA_DIRS</envar> during runtime. The package for the default, icon-less <link xlink:href="https://www.freedesktop.org/wiki/Software/icon-theme/">hicolor-icon-theme</link> (which should be propagated by every icon theme) contains <link linkend="ssec-gnome-hooks-hicolor-icon-theme">a setup hook</link> that will pick up icon themes from <literal>buildInputs</literal> and pass them to our wrapper. Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of the icon theme to the user. If you use one of the desktop environments, you probably already have an icon theme installed.
</para>
<para>
To avoid costly file system access when locating icons, GTK, <link xlink:href="https://woboq.com/blog/qicon-reads-gtk-icon-cache-in-qt57.html">as well as Qt</link>, can rely on <filename>icon-theme.cache</filename> files from the themes top-level directories. These files are generated using <command>gtk-update-icon-cache</command>, which is expected to be run whenever an icon is added to or removed from an icon theme (typically an application icon into the <literal>hicolor</literal> theme), and some programs do indeed run this after icon installation. However, since packages are installed into their own prefix by Nix, this would lead to conflicts. For that reason, <package>gtk3</package> provides a <link xlink:href="#ssec-gnome-hooks-gtk-drop-icon-theme-cache">setup hook</link> that will remove the file from the package during installation. Since most applications only ship their own icon that will be loaded on start-up, it should not affect them too much. On the other hand, icon themes are much larger and more widely used, so we need to cache them. Because we recommend installing icon themes globally, we will generate the cache files from all packages in a profile using a NixOS module. You can enable the cache generation using the <option>gtk.iconCache.enable</option> option if your desktop environment does not already do that.
</para>
</section>
<section xml:id="ssec-gnome-themes">
<title>GTK Themes</title>
<para>
Previously, a GTK theme needed to be in <envar>XDG_DATA_DIRS</envar>. This is no longer necessary for most programs since GTK incorporated Adwaita theme. Some programs (for example, those designed for <link xlink:href="https://elementary.io/docs/human-interface-guidelines#human-interface-guidelines">elementary HIG</link>) might require a special theme like <package>pantheon.elementary-gtk-theme</package>.
</para>
</section>
<section xml:id="ssec-gnome-typelibs">
<title>GObject introspection typelibs</title>
<para>
<link xlink:href="https://wiki.gnome.org/Projects/GObjectIntrospection">GObject introspection</link> allows applications to use C libraries in other languages easily. It does this through <literal>typelib</literal> files searched in <envar>GI_TYPELIB_PATH</envar>.
</para>
</section>
<section xml:id="ssec-gnome-plugins">
<title>Various plug-ins</title>
<para>
If your application uses <link xlink:href="https://gstreamer.freedesktop.org/">GStreamer</link> or <link xlink:href="https://wiki.gnome.org/Projects/Grilo">Grilo</link>, you should set <envar>GST_PLUGIN_SYSTEM_PATH_1_0</envar> and <envar>GRL_PLUGIN_PATH</envar>, respectively.
</para>
</section>
</section>
<section xml:id="ssec-gnome-hooks">
<title>Onto <package>wrapGAppsHook</package></title>
<para>
Given the requirements above, the package expression would become messy quickly:
<programlisting>
preFixup = ''
for f in $(find $out/bin/ $out/libexec/ -type f -executable); do
wrapProgram "$f" \
--prefix GIO_EXTRA_MODULES : "${getLib dconf}/lib/gio/modules" \
--prefix XDG_DATA_DIRS : "$out/share" \
--prefix XDG_DATA_DIRS : "$out/share/gsettings-schemas/${name}" \
--prefix XDG_DATA_DIRS : "${gsettings-desktop-schemas}/share/gsettings-schemas/${gsettings-desktop-schemas.name}" \
--prefix XDG_DATA_DIRS : "${hicolor-icon-theme}/share" \
--prefix GI_TYPELIB_PATH : "${lib.makeSearchPath "lib/girepository-1.0" [ pango json-glib ]}"
done
'';
</programlisting>
Fortunately, there is <package>wrapGAppsHook</package>, that does the wrapping for us. In particular, it works in conjunction with other setup hooks that will populate the variable:
<itemizedlist>
<listitem xml:id="ssec-gnome-hooks-wrapgappshook">
<para>
<package>wrapGAppsHook</package> itself will add the packages <filename>share</filename> directory to <envar>XDG_DATA_DIRS</envar>.
</para>
</listitem>
<listitem xml:id="ssec-gnome-hooks-glib">
<para>
<package>glib</package> setup hook will populate <envar>GSETTINGS_SCHEMAS_PATH</envar> and then <package>wrapGAppsHook</package> will prepend it to <envar>XDG_DATA_DIRS</envar>.
</para>
</listitem>
<listitem xml:id="ssec-gnome-hooks-gtk-drop-icon-theme-cache">
<para>
One of <package>gtk3</package>s setup hooks will remove <filename>icon-theme.cache</filename> files from packages icon theme directories to avoid conflicts. Icon theme packages should prevent this with <code>dontDropIconThemeCache = true;</code>.
</para>
</listitem>
<listitem xml:id="ssec-gnome-hooks-dconf">
<para>
<package>dconf.lib</package> is a dependency of <package>wrapGAppsHook</package>, which then also adds it to the <envar>GIO_EXTRA_MODULES</envar> variable.
</para>
</listitem>
<listitem xml:id="ssec-gnome-hooks-hicolor-icon-theme">
<para>
<package>hicolor-icon-theme</package>s setup hook will add icon themes to <envar>XDG_ICON_DIRS</envar> which is prepended to <envar>XDG_DATA_DIRS</envar> by <package>wrapGAppsHook</package>.
</para>
</listitem>
<listitem xml:id="ssec-gnome-hooks-gobject-introspection">
<para>
<package>gobject-introspection</package> setup hook populates <envar>GI_TYPELIB_PATH</envar> variable with <filename>lib/girepository-1.0</filename> directories of dependencies, which is then added to wrapper by <package>wrapGAppsHook</package>. It also adds <filename>share</filename> directories of dependencies to <envar>XDG_DATA_DIRS</envar>, which is intended to promote GIR files but it also <link xlink:href="https://github.com/NixOS/nixpkgs/issues/32790">pollutes the closures</link> of packages using <package>wrapGAppsHook</package>.
</para>
<warning>
<para>
The setup hook <link xlink:href="https://github.com/NixOS/nixpkgs/issues/56943">currently</link> does not work in expressions with <literal>strictDeps</literal> enabled, like Python packages. In those cases, you will need to disable it with <code>strictDeps = false;</code>.
</para>
</warning>
</listitem>
<listitem xml:id="ssec-gnome-hooks-gst-grl-plugins">
<para>
Setup hooks of <package>gst_all_1.gstreamer</package> and <package>gnome3.grilo</package> will populate the <envar>GST_PLUGIN_SYSTEM_PATH_1_0</envar> and <envar>GRL_PLUGIN_PATH</envar> variables, respectively, which will then be added to the wrapper by <literal>wrapGAppsHook</literal>.
</para>
</listitem>
</itemizedlist>
</para>
<para>
You can also pass additional arguments to <literal>makeWrapper</literal> using <literal>gappsWrapperArgs</literal> in <literal>preFixup</literal> hook:
<programlisting>
preFixup = ''
gappsWrapperArgs+=(
# Thumbnailers
--prefix XDG_DATA_DIRS : "${gdk-pixbuf}/share"
--prefix XDG_DATA_DIRS : "${librsvg}/share"
--prefix XDG_DATA_DIRS : "${shared-mime-info}/share"
)
'';
</programlisting>
</para>
</section>
<section xml:id="ssec-gnome-updating">
<title>Updating GNOME packages</title>
<para>
Most GNOME packages offer <link linkend="var-passthru-updateScript"><literal>updateScript</literal></link>; it is therefore possible to update to the latest source tarball by running <command>nix-shell maintainers/scripts/update.nix --argstr package gnome3.nautilus</command> or even en masse with <command>nix-shell maintainers/scripts/update.nix --argstr path gnome3</command>. Read the packages <filename>NEWS</filename> file to see what changed.
</para>
</section>
<section xml:id="ssec-gnome-common-issues">
<title>Frequently encountered issues</title>
<variablelist>
<varlistentry xml:id="ssec-gnome-common-issues-no-schemas">
<term>
<computeroutput>GLib-GIO-ERROR **: <replaceable>06:04:50.903</replaceable>: No GSettings schemas are installed on the system</computeroutput>
</term>
<listitem>
<para>
There are no schemas available in <envar>XDG_DATA_DIRS</envar>. Temporarily add a random package containing schemas like <package>gsettings-desktop-schemas</package> to <literal>buildInputs</literal>. The <link linkend="ssec-gnome-hooks-glib"><package>glib</package></link> and <link linkend="ssec-gnome-hooks-wrapgappshook"><package>wrapGAppsHook</package></link> setup hooks will take care of making the schemas available to the application, and you will see the actual missing schemas with the <link linkend="ssec-gnome-common-issues-missing-schema">next error</link>. Or you can try looking through the source code for the actual schemas used.
</para>
</listitem>
</varlistentry>
<varlistentry xml:id="ssec-gnome-common-issues-missing-schema">
<term>
<computeroutput>GLib-GIO-ERROR **: <replaceable>06:04:50.903</replaceable>: Settings schema <replaceable>org.gnome.foo</replaceable> is not installed</computeroutput>
</term>
<listitem>
<para>
Package is missing some GSettings schemas. You can find out the package containing the schema with <command>nix-locate <replaceable>org.gnome.foo</replaceable>.gschema.xml</command> and let the hooks handle the wrapping as <link linkend="ssec-gnome-common-issues-no-schemas">above</link>.
</para>
</listitem>
</varlistentry>
<varlistentry xml:id="ssec-gnome-common-issues-double-wrapped">
<term>
When using <package>wrapGAppsHook</package> with special derivers you can end up with double wrapped binaries.
</term>
<listitem>
<para>
This is because derivers like <function>python.pkgs.buildPythonApplication</function> or <function>qt5.mkDerivation</function> have setup-hooks automatically added that produce wrappers with <package>makeWrapper</package>. The simplest way to work around that is to disable the <package>wrapGAppsHook</package> automatic wrapping with <code>dontWrapGApps = true;</code> and pass the arguments it intended to pass to <package>makeWrapper</package> to the other wrapper.
</para>
<para>
In the case of a Python application it could look like:
<programlisting>
python3.pkgs.buildPythonApplication {
pname = "gnome-music";
version = "3.32.2";
nativeBuildInputs = [
wrapGAppsHook
gobject-introspection
...
];
dontWrapGApps = true;
# Arguments to be passed to `makeWrapper`, only used by buildPython*
preFixup = ''
makeWrapperArgs+=("''${gappsWrapperArgs[@]}")
'';
}
</programlisting>
And for a Qt app like:
<programlisting>
mkDerivation {
pname = "calibre";
version = "3.47.0";
nativeBuildInputs = [
wrapGAppsHook
qmake
...
];
dontWrapGApps = true;
# Arguments to be passed to `makeWrapper`, only used by qt5s mkDerivation
preFixup = ''
qtWrapperArgs+=("''${gappsWrapperArgs[@]}")
'';
}
</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry xml:id="ssec-gnome-common-issues-unwrappable-package">
<term>
I am packaging a project that cannot be wrapped, like a library or GNOME Shell extension.
</term>
<listitem>
<para>
You can rely on applications depending on the library to set the necessary environment variables, but that is often easy to miss. Instead we recommend patching the paths in the source code whenever possible. Here are some examples:
<itemizedlist>
<listitem xml:id="ssec-gnome-common-issues-unwrappable-package-gnome-shell-ext">
<para>
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/7bb8f05f12ca3cff9da72b56caa2f7472d5732bc/pkgs/desktops/gnome-3/core/gnome-shell-extensions/default.nix#L21-L24">Replacing a <envar>GI_TYPELIB_PATH</envar> in GNOME Shell extension</link> we are using <function>substituteAll</function> to include the path to a typelib into a patch.
</para>
</listitem>
<listitem xml:id="ssec-gnome-common-issues-unwrappable-package-gsettings">
<para>
The following examples are hardcoding GSettings schema paths. To get the schema paths we use the functions
<itemizedlist>
<listitem>
<para>
<function>glib.getSchemaPath</function> takes a Nix package attribute as an argument.
</para>
</listitem>
<listitem>
<para>
<function>glib.makeSchemaPath</function> takes a package output like <literal>$out</literal> and a derivation name. You should use this if the schemas you need to hardcode are in the same derivation.
</para>
</listitem>
</itemizedlist>
</para>
<para xml:id="ssec-gnome-common-issues-unwrappable-package-gsettings-vala">
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/7bb8f05f12ca3cff9da72b56caa2f7472d5732bc/pkgs/desktops/pantheon/apps/elementary-files/default.nix#L78-L86">Hard-coding GSettings schema path in Vala plug-in (dynamically loaded library)</link> here, <function>substituteAll</function> cannot be used since the schema comes from the same package preventing us from pass its path to the function, probably due to a <link xlink:href="https://github.com/NixOS/nix/issues/1846">Nix bug</link>.
</para>
<para xml:id="ssec-gnome-common-issues-unwrappable-package-gsettings-c">
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/29c120c065d03b000224872251bed93932d42412/pkgs/development/libraries/glib-networking/default.nix#L31-L34">Hard-coding GSettings schema path in C library</link> nothing special other than using <link xlink:href="https://github.com/NixOS/nixpkgs/pull/67957#issuecomment-527717467">Coccinelle patch</link> to generate the patch itself.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</varlistentry>
<varlistentry xml:id="ssec-gnome-common-issues-weird-location">
<term>
I need to wrap a binary outside <filename>bin</filename> and <filename>libexec</filename> directories.
</term>
<listitem>
<para>
You can manually trigger the wrapping with <function>wrapGApp</function> in the <literal>preFixup</literal> phase. It takes a path to a program as its first argument; the remaining arguments are passed directly to the <function xlink:href="#fun-wrapProgram">wrapProgram</function> function.
</para>
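<para>
For example (a sketch; the wrapped path is a hypothetical placeholder):
</para>
<programlisting>
preFixup = ''
  wrapGApp "$out/share/foo/foo-helper"
'';
</programlisting>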
</listitem>
</varlistentry>
</variablelist>
</section>
</section>

View File

@ -7,21 +7,16 @@
<title>Go modules</title>
<para>
The function <varname> buildGoModule </varname> builds Go programs managed
with Go modules. It builds a
<link xlink:href="https://github.com/golang/go/wiki/Modules">Go
modules</link> through a two phase build:
The function <varname> buildGoModule </varname> builds Go programs managed with Go modules. It builds <link xlink:href="https://github.com/golang/go/wiki/Modules">Go modules</link> through a two-phase build:
<itemizedlist>
<listitem>
<para>
An intermediate fetcher derivation. This derivation will be used to fetch
all of the dependencies of the Go module.
An intermediate fetcher derivation. This derivation will be used to fetch all of the dependencies of the Go module.
</para>
</listitem>
<listitem>
<para>
A final derivation will use the output of the intermediate derivation to
build the binaries and produce the final output.
A final derivation will use the output of the intermediate derivation to build the binaries and produce the final output.
</para>
</listitem>
</itemizedlist>
@ -31,7 +26,7 @@
<title>buildGoModule</title>
<programlisting>
pet = buildGoModule rec {
name = "pet-${version}";
pname = "pet";
version = "0.3.4";
src = fetchFromGitHub {
@ -47,7 +42,7 @@ pet = buildGoModule rec {
meta = with lib; {
description = "Simple command-line snippet manager, written in Go";
homepage = https://github.com/knqyf263/pet;
homepage = "https://github.com/knqyf263/pet";
license = licenses.mit;
maintainers = with maintainers; [ kalbasit ];
platforms = platforms.linux ++ platforms.darwin;
@ -57,40 +52,43 @@ pet = buildGoModule rec {
</example>
<para>
<xref linkend='ex-buildGoModule'/> is an example expression using
buildGoModule, the following arguments are of special significance to the
function:
<xref linkend='ex-buildGoModule'/> is an example expression using buildGoModule, the following arguments are of special significance to the function:
<calloutlist>
<callout arearefs='ex-buildGoModule-1'>
<para>
<varname>modSha256</varname> is the hash of the output of the
intermediate fetcher derivation.
<varname>modSha256</varname> is the hash of the output of the intermediate fetcher derivation.
</para>
</callout>
<callout arearefs='ex-buildGoModule-2'>
<para>
<varname>subPackages</varname> limits the builder from building child
packages that have not been listed. If <varname>subPackages</varname> is
not specified, all child packages will be built.
<varname>subPackages</varname> limits the builder from building child packages that have not been listed. If <varname>subPackages</varname> is not specified, all child packages will be built.
</para>
</callout>
</calloutlist>
</para>
<para>
<varname>modSha256</varname> can also take <varname>null</varname> as an input.
When <literal>null</literal> is used as a value, the derivation won't be a
fixed-output derivation but will disable the build sandbox instead. This can be useful
outside of nixpkgs, where re-generating <varname>modSha256</varname> on each <filename>mod.sum</filename> change is cumbersome,
but such builds will fail on Hydra, as builds with a disabled sandbox are discouraged.
</para>
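<para>
For local development this could look like the following sketch, based on the expression above (remember to switch back to a real hash before submitting):
</para>
<programlisting>
pet = buildGoModule rec {
  pname = "pet";
  version = "0.3.4";

  src = ./.;

  # no fixed-output fetcher derivation; the sandbox is disabled for
  # this build instead, so Hydra will not accept it
  modSha256 = null;
}
</programlisting>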
</section>
<section xml:id="ssec-go-legacy">
<title>Go legacy</title>
<para>
The function <varname> buildGoPackage </varname> builds legacy Go programs,
not supporting Go modules.
The function <varname> buildGoPackage </varname> builds legacy Go programs, not supporting Go modules.
</para>
<example xml:id='ex-buildGoPackage'>
<title>buildGoPackage</title>
<programlisting>
deis = buildGoPackage rec {
name = "deis-${version}";
pname = "deis";
version = "1.13.0";
goPackagePath = "github.com/deis/deis"; <co xml:id='ex-buildGoPackage-1' />
@ -105,55 +103,42 @@ deis = buildGoPackage rec {
goDeps = ./deps.nix; <co xml:id='ex-buildGoPackage-3' />
buildFlags = "--tags release"; <co xml:id='ex-buildGoPackage-4' />
buildFlags = [ "--tags" "release" ]; <co xml:id='ex-buildGoPackage-4' />
}
</programlisting>
</example>
<para>
<xref linkend='ex-buildGoPackage'/> is an example expression using
buildGoPackage, the following arguments are of special significance to the
function:
<xref linkend='ex-buildGoPackage'/> is an example expression using buildGoPackage, the following arguments are of special significance to the function:
<calloutlist>
<callout arearefs='ex-buildGoPackage-1'>
<para>
<varname>goPackagePath</varname> specifies the package's canonical Go
import path.
<varname>goPackagePath</varname> specifies the package's canonical Go import path.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-2'>
<para>
<varname>subPackages</varname> limits the builder from building child
packages that have not been listed. If <varname>subPackages</varname> is
not specified, all child packages will be built.
<varname>subPackages</varname> limits the builder from building child packages that have not been listed. If <varname>subPackages</varname> is not specified, all child packages will be built.
</para>
<para>
In this example only <literal>github.com/deis/deis/client</literal> will
be built.
In this example only <literal>github.com/deis/deis/client</literal> will be built.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-3'>
<para>
<varname>goDeps</varname> is where the Go dependencies of a Go program
are listed as a list of package source identified by Go import path. It
could be imported as a separate <varname>deps.nix</varname> file for
readability. The dependency data structure is described below.
<varname>goDeps</varname> is where the Go dependencies of a Go program are listed as a list of package source identified by Go import path. It could be imported as a separate <varname>deps.nix</varname> file for readability. The dependency data structure is described below.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-4'>
<para>
<varname>buildFlags</varname> is a list of flags passed to the go build
command.
<varname>buildFlags</varname> is a list of flags passed to the go build command.
</para>
</callout>
</calloutlist>
</para>
<para>
The <varname>goDeps</varname> attribute can be imported from a separate
<varname>nix</varname> file that defines which Go libraries are needed and
should be included in <varname>GOPATH</varname> for
<varname>buildPhase</varname>.
The <varname>goDeps</varname> attribute can be imported from a separate <varname>nix</varname> file that defines which Go libraries are needed and should be included in <varname>GOPATH</varname> for <varname>buildPhase</varname>.
</para>
<example xml:id='ex-goDeps'>
@ -196,27 +181,18 @@ deis = buildGoPackage rec {
</callout>
<callout arearefs='ex-goDeps-3'>
<para>
<varname>fetch type</varname> that needs to be used to get package source. If <varname>git</varname> is used, there should be <varname>url</varname>, <varname>rev</varname> and <varname>sha256</varname> defined next to it.
</para>
</callout>
</calloutlist>
</para>
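<para>
 A minimal sketch of a single <varname>deps.nix</varname> entry is shown below; the package path, URL, revision and hash are illustrative placeholders rather than real values.
<programlisting>
[
  {
    goPackagePath = "github.com/example/somelib";
    fetch = {
      type = "git";
      url = "https://github.com/example/somelib";
      rev = "0000000000000000000000000000000000000000"; # replace with the real commit
      sha256 = "0000000000000000000000000000000000000000000000000000"; # replace with the real hash
    };
  }
]
</programlisting>
</para>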
<para>
To extract dependency information from a Go package in an automated way, use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>. It can produce a complete derivation and <varname>goDeps</varname> file for Go programs.
</para>
<para>
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" /> where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
<screen>
<prompt>$ </prompt>nix-build -A deis.bin
</screen>
@ -224,13 +200,11 @@ deis = buildGoPackage rec {
<screen>
<prompt>$ </prompt>nix-build -A deis.all
</screen>
The <varname>bin</varname> output will be installed by default with <varname>nix-env -i</varname> or <varname>systemPackages</varname>.
</para>
<para>
You may use Go packages installed into the active Nix profiles by adding the following to your ~/.bashrc:
<screen>
for p in $NIX_PROFILES; do
GOPATH="$p/share/go:$GOPATH"
@ -3,7 +3,7 @@ title: User's Guide for Haskell in Nixpkgs
author: Peter Simons
date: 2015-06-01
---
# User's Guide to the Haskell Infrastructure
# Haskell
## How to install Haskell packages
@ -25,14 +25,14 @@ avoided that by keeping all Haskell-related packages in a separate attribute
set called `haskellPackages`, which the following command will list:
```
$ nix-env -f "<nixpkgs>" -qaP -A haskellPackages
haskellPackages.a50 a50-0.5
haskellPackages.abacate haskell-abacate-0.0.0.0
haskellPackages.abcBridge haskell-abcBridge-0.12
haskellPackages.afv afv-0.1.1
haskellPackages.alex alex-3.1.4
haskellPackages.Allure Allure-0.4.101.1
haskellPackages.alms alms-0.6.7
[... some 8000 entries omitted ...]
haskellPackages.a50 a50-0.5
haskellPackages.AAI AAI-0.2.0.1
haskellPackages.abacate abacate-0.0.0.0
haskellPackages.abc-puzzle abc-puzzle-0.2.1
haskellPackages.abcBridge abcBridge-0.15
haskellPackages.abcnotation abcnotation-1.9.0
haskellPackages.abeson abeson-0.1.0.1
[... some 14000 entries omitted ...]
```
To install any of those packages into your profile, refer to them by their
@ -84,36 +84,37 @@ nix-env -qaP -A nixos.haskellPackages
nix-env -iA nixos.haskellPackages.cabal-install
```
Our current default compiler is GHC 7.10.x and the `haskellPackages` set
contains packages built with that particular version. Nixpkgs contains the
latest major release of every GHC since 6.10.4, however, and there is a whole
family of package sets available that defines Hackage packages built with each
of those compilers, too:
Our current default compiler is GHC 8.6.x and the `haskellPackages` set
contains packages built with that particular version. Nixpkgs contains the last
three major releases of GHC and there is a whole family of package sets
available that defines Hackage packages built with each of those compilers,
too:
```shell
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc6123
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc763
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc844
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc882
```
The name `haskellPackages` is really just a synonym for
`haskell.packages.ghc7102`, because we prefer that package set internally and
`haskell.packages.ghc865`, because we prefer that package set internally and
recommend it to our users as their default choice, but ultimately you are free
to compile your Haskell packages with any GHC version you please. The following
command displays the complete list of available compilers:
```
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
haskell.compiler.ghc6104 ghc-6.10.4
haskell.compiler.ghc6123 ghc-6.12.3
haskell.compiler.ghc704 ghc-7.0.4
haskell.compiler.ghc722 ghc-7.2.2
haskell.compiler.ghc742 ghc-7.4.2
haskell.compiler.ghc763 ghc-7.6.3
haskell.compiler.ghc784 ghc-7.8.4
haskell.compiler.ghc7102 ghc-7.10.2
haskell.compiler.ghcHEAD ghc-7.11.20150402
haskell.compiler.ghcNokinds ghc-nokinds-7.11.20150704
haskell.compiler.ghcjs ghcjs-0.1.0
haskell.compiler.jhc jhc-0.8.2
haskell.compiler.uhc uhc-1.1.9.0
haskell.compiler.ghc8101 ghc-8.10.0.20191210
haskell.compiler.integer-simple.ghc8101 ghc-8.10.0.20191210
haskell.compiler.ghcHEAD ghc-8.10.20191119
haskell.compiler.integer-simple.ghcHEAD ghc-8.10.20191119
haskell.compiler.ghc822Binary ghc-8.2.2-binary
haskell.compiler.ghc844 ghc-8.4.4
haskell.compiler.ghc863Binary ghc-8.6.3-binary
haskell.compiler.ghc865 ghc-8.6.5
haskell.compiler.integer-simple.ghc865 ghc-8.6.5
haskell.compiler.ghc881 ghc-8.8.1
haskell.compiler.integer-simple.ghc881 ghc-8.8.1
haskell.compiler.ghc882 ghc-8.8.1.20191211
haskell.compiler.integer-simple.ghc882 ghc-8.8.1.20191211
haskell.compiler.ghcjs ghcjs-8.6.0.1
```
We have no package sets for `jhc` or `uhc` yet, unfortunately, but for every
@ -398,7 +399,9 @@ nix:
For more on how to write a `shell.nix` file see the below section. You'll need
to express a derivation. Note that Nixpkgs ships with a convenience wrapper
function around `mkDerivation` called `haskell.lib.buildStackProject` to help you
create this derivation in exactly the way Stack expects. However, for this to work
you need to disable the sandbox, which you can do by passing `--option sandbox relaxed`
or `--option sandbox false` to the Nix command. All of the same inputs
as `mkDerivation` can be provided. For example, to build a Stack project that
includes packages that link against a version of the R library compiled with
special options turned on:
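
A minimal `shell.nix` for such a project might look like the following sketch; the
environment name and the exact `buildInputs` (here `R` and `zlib`) are illustrative
assumptions and should be adapted to the project:

```nix
# shell.nix -- sketch of a Stack/Nix integration shell
{ ghc }:
with (import <nixpkgs> {});

haskell.lib.buildStackProject {
  inherit ghc;               # Stack passes the selected GHC to this function
  name = "myEnv";            # arbitrary name for the build environment
  buildInputs = [ R zlib ];  # native libraries the Haskell packages link against
}
```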
@ -1,4 +1,4 @@
# Idris packages
# Idris
## Installing Idris
@ -96,7 +96,7 @@ build-idris-package {
meta = {
description = "Idris YAML lib";
homepage = https://github.com/Heather/Idris.Yaml;
homepage = "https://github.com/Heather/Idris.Yaml";
license = lib.licenses.mit;
maintainers = [ lib.maintainers.brainrape ];
};
@ -1,19 +1,17 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-language-support">
<title>Support for specific programming languages and frameworks</title>
<title>Languages and frameworks</title>
<para>
The <link linkend="chap-stdenv">standard build environment</link> makes it easy to build typical Autotools-based packages with very little code. Any other kind of package can be accommodated by overriding the appropriate phases of <literal>stdenv</literal>. However, there are specialised functions in Nixpkgs to easily build packages for other programming languages, such as Perl or Haskell. These are described in this chapter.
</para>
<xi:include href="android.section.xml" />
<xi:include href="beam.xml" />
<xi:include href="bower.xml" />
<xi:include href="coq.xml" />
<xi:include href="crystal.section.xml" />
<xi:include href="emscripten.section.xml" />
<xi:include href="gnome.xml" />
<xi:include href="go.xml" />
<xi:include href="haskell.section.xml" />
<xi:include href="idris.section.xml" />
@ -31,6 +29,4 @@
<xi:include href="texlive.xml" />
<xi:include href="titanium.section.xml" />
<xi:include href="vim.section.xml" />
<xi:include href="emscripten.section.xml" />
<xi:include href="crystal.section.xml" />
</chapter>
@ -1,7 +1,7 @@
---
title: iOS
author: Sander van der Burg
date: 2018-11-18
date: 2019-11-10
---
# iOS
@ -217,3 +217,13 @@ xcode.simulateApp {
By providing the result of an `xcode.buildApp {}` function and configuring the
app bundle id, the app gets deployed automatically and started.
Troubleshooting
---------------
In some rare cases, it may happen that after a failure, changes are not picked
up. Most likely, this is caused by a derived data cache that Xcode maintains.
To wipe it you can run:
```bash
$ rm -rf ~/Library/Developer/Xcode/DerivedData
```
@ -15,37 +15,24 @@ stdenv.mkDerivation {
buildPhase = "ant";
}
</programlisting>
Note that <varname>jdk</varname> is an alias for the OpenJDK (self-built where available, or pre-built via Zulu). Platforms with OpenJDK not (yet) in Nixpkgs (<literal>Aarch32</literal>, <literal>Aarch64</literal>) point to the (unfree) <literal>oraclejdk</literal>.
</para>
<para>
JAR files that are intended to be used by other packages should be installed in <filename>$out/share/java</filename>. JDKs have a stdenv setup hook that adds any JARs in the <filename>share/java</filename> directories of the build inputs to the <envar>CLASSPATH</envar> environment variable. For instance, if the package <literal>libfoo</literal> installs a JAR named <filename>foo.jar</filename> in its <filename>share/java</filename> directory, and another package declares the attribute
<programlisting>
buildInputs = [ libfoo ];
nativeBuildInputs = [ jdk ];
</programlisting>
then <envar>CLASSPATH</envar> will be set to <filename>/nix/store/...-libfoo/share/java/foo.jar</filename>.
</para>
<para>
Private JARs should be installed in a location like <filename>$out/share/<replaceable>package-name</replaceable></filename>.
</para>
<para>
If your Java package provides a program, you need to generate a wrapper script to run it using the OpenJRE. You can use <literal>makeWrapper</literal> for this:
<programlisting>
nativeBuildInputs = [ makeWrapper ];
@ -56,30 +43,21 @@ installPhase =
--add-flags "-cp $out/share/java/foo.jar org.foo.Main"
'';
</programlisting>
Note the use of <literal>jre</literal>, which is the part of the OpenJDK package that contains the Java Runtime Environment. By using <literal>${jre}/bin/java</literal> instead of <literal>${jdk}/bin/java</literal>, you prevent your package from depending on the JDK at runtime.
</para>
<para>
Note that all JDKs expose <literal>home</literal> via <varname>passthru</varname>, so if your application requires environment variables such as <envar>JAVA_HOME</envar> to be set, that can be done in a generic fashion with the <literal>--set</literal> argument of <literal>makeWrapper</literal>:
<programlisting>
--set JAVA_HOME ${jdk.home}
</programlisting>
</para>
<para>
It is possible to use a different Java compiler than <command>javac</command> from the OpenJDK. For instance, to use the GNU Java Compiler:
<programlisting>
nativeBuildInputs = [ gcj ant ];
</programlisting>
Here, Ant will automatically use <command>gij</command> (the GNU Java Runtime) instead of the OpenJRE.
</para>
</section>
@ -4,18 +4,11 @@
<title>Lua</title>
<para>
Lua packages are built by the <varname>buildLuaPackage</varname> function. This function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules/generic/default.nix"> <filename>pkgs/development/lua-modules/generic/default.nix</filename></link> and works similarly to <varname>buildPerlPackage</varname>. (See <xref linkend="sec-language-perl"/> for details.)
</para>
<para>
Lua packages are defined in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/lua-packages.nix"><filename>pkgs/top-level/lua-packages.nix</filename></link>. Most of them are simple. For example:
<programlisting>
fileSystem = buildLuaPackage {
name = "filesystem-1.6.2";
@ -33,16 +26,11 @@ fileSystem = buildLuaPackage {
</para>
<para>
More complicated packages, though, should be placed in a separate file in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules"><filename>pkgs/development/lua-modules</filename></link>.
</para>
<para>
Lua packages accept an additional parameter, <varname>disabled</varname>, which defines the condition for excluding a package from <varname>luaPackages</varname>. For example, if a package has <varname>disabled</varname> assigned to <literal>lua.luaversion != "5.1"</literal>, it will not be included in any luaPackages except lua51Packages, so it is only built for Lua 5.1, as sketched below.
</para>
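<para>
 A brief sketch (the package name, version and source are illustrative placeholders):
<programlisting>
fooLib = buildLuaPackage {
  name = "foo-lib-1.0";
  src = fetchurl {
    url = "https://example.org/foo-lib-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  # Exclude the package from every luaPackages set except lua51Packages.
  disabled = lua.luaversion != "5.1";
};
</programlisting>
</para>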
</section>
@ -1,5 +1,5 @@
Node.js packages
================
Node.js
=======
The `pkgs/development/node-packages` folder contains a generated collection of
[NPM packages](https://npmjs.com/) that can be installed with the Nix package
manager.
@ -4,35 +4,15 @@
<title>OCaml</title>
<para>
OCaml libraries should be installed in <literal>$(out)/lib/ocaml/${ocaml.version}/site-lib/</literal>. Such directories are automatically added to the <literal>$OCAMLPATH</literal> environment variable when building another package that depends on them or when opening a <literal>nix-shell</literal>.
</para>
<para>
Given that most of the OCaml ecosystem is now built with dune, nixpkgs includes a convenience build support function called <literal>buildDunePackage</literal> that will build an OCaml package using dune, OCaml and findlib and any additional dependencies provided as <literal>buildInputs</literal> or <literal>propagatedBuildInputs</literal>.
</para>
<para>
Here is a simple package example. It defines an (optional) attribute <literal>minimumOCamlVersion</literal> that will be used to throw a descriptive evaluation error if building with an older OCaml is attempted. It uses the <literal>fetchFromGitHub</literal> fetcher to get its source. It sets the <literal>doCheck</literal> (optional) attribute to <literal>true</literal> which means that tests will be run with <literal>dune runtest -p angstrom</literal> after the build (<literal>dune build -p angstrom</literal>) is complete. It uses <literal>alcotest</literal> as a build input (because it is needed to run the tests) and <literal>bigstringaf</literal> and <literal>result</literal> as propagated build inputs (thus they will also be available to libraries depending on this library). The library will be installed using the <literal>angstrom.install</literal> file that dune generates.
</para>
<programlisting>
@ -56,7 +36,7 @@ buildDunePackage rec {
doCheck = true;
meta = {
homepage = https://github.com/inhabitedtype/angstrom;
homepage = "https://github.com/inhabitedtype/angstrom";
description = "OCaml parser combinators built for speed and memory efficiency";
license = stdenv.lib.licenses.bsd3;
maintainers = with stdenv.lib.maintainers; [ sternenseemann ];
@ -65,11 +45,7 @@ buildDunePackage rec {
</programlisting>
<para>
Here is a second example, this time using a source archive generated with <literal>dune-release</literal>. It is a good idea to use this archive when it is available as it will usually contain substituted variables such as a <literal>%%VERSION%%</literal> field. This library does not depend on any other OCaml library and no tests are run after building it.
</para>
<programlisting>
@ -87,7 +63,7 @@ buildDunePackage rec {
};
meta = with stdenv.lib; {
homepage = https://github.com/flowtype/ocaml-wtf8;
homepage = "https://github.com/flowtype/ocaml-wtf8";
description = "WTF-8 is a superset of UTF-8 that allows unpaired surrogates.";
license = licenses.mit;
maintainers = [ maintainers.eqyiel ];
@ -4,24 +4,13 @@
<title>Perl</title>
<para>
Nixpkgs provides a function <varname>buildPerlPackage</varname>, a generic package builder function for any Perl package that has a standard <varname>Makefile.PL</varname>. It is implemented in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/perl-modules/generic"><filename>pkgs/development/perl-modules/generic</filename></link>.
</para>
<para>
Perl packages from CPAN are defined in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link>, rather than <filename>pkgs/all-packages.nix</filename>. Most Perl packages are so straightforward to build that they are defined here directly, rather than having a separate function for each package called from <filename>perl-packages.nix</filename>. However, more complicated packages should be put in a separate file, typically in <filename>pkgs/development/perl-modules</filename>. Here is an example of the former:
<programlisting>
ClassC3 = buildPerlPackage rec {
name = "Class-C3-0.21";
@ -31,32 +20,22 @@ ClassC3 = buildPerlPackage rec {
};
};
</programlisting>
Note the use of <literal>mirror://cpan/</literal>, and the <literal>${name}</literal> in the URL definition to ensure that the name attribute is consistent with the source that we're actually downloading. Perl packages are made available in <filename>all-packages.nix</filename> through the variable <varname>perlPackages</varname>. For instance, if you have a package that needs <varname>ClassC3</varname>, you would typically write
<programlisting>
foo = import ../path/to/foo.nix {
inherit stdenv fetchurl ...;
inherit (perlPackages) ClassC3;
};
</programlisting>
in <filename>all-packages.nix</filename>. You can test building a Perl package as follows:
<screen>
<prompt>$ </prompt>nix-build -A perlPackages.ClassC3
</screen>
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the start of the name attribute, so the package above is actually called <literal>perl-Class-C3-0.21</literal>. So to install it, you can say:
<screen>
<prompt>$ </prompt>nix-env -i perl-Class-C3
</screen>
(Of course you can also install using the attribute name: <literal>nix-env -i -A perlPackages.ClassC3</literal>.)
</para>
<para>
@ -64,40 +43,24 @@ foo = import ../path/to/foo.nix {
<orderedlist>
<listitem>
<para>
In the configure phase, it calls <literal>perl Makefile.PL</literal> to generate a Makefile. You can set the variable <varname>makeMakerFlags</varname> to pass flags to <filename>Makefile.PL</filename>.
</para>
</listitem>
<listitem>
<para>
It adds the contents of the <envar>PERL5LIB</envar> environment variable to the <literal>#! .../bin/perl</literal> line of Perl scripts as <literal>-I<replaceable>dir</replaceable></literal> flags. This ensures that a script can find its dependencies. (This can cause this shebang line to become too long for Darwin to handle; see the note below.)
</para>
</listitem>
<listitem>
<para>
In the fixup phase, it writes the propagated build inputs (<varname>propagatedBuildInputs</varname>) to the file <filename>$out/nix-support/propagated-user-env-packages</filename>. <command>nix-env</command> recursively installs all packages listed in this file when you install a package that has it. This ensures that a Perl package can find its dependencies.
</para>
</listitem>
</orderedlist>
</para>
<para>
<varname>buildPerlPackage</varname> is built on top of <varname>stdenv</varname>, so everything can be customised in the usual way. For instance, the <literal>BerkeleyDB</literal> module has a <varname>preConfigure</varname> hook to generate a configuration file used by <filename>Makefile.PL</filename>:
<programlisting>
{ buildPerlPackage, fetchurl, db }:
@ -118,12 +81,7 @@ buildPerlPackage rec {
</para>
<para>
Dependencies on other Perl packages can be specified in the <varname>buildInputs</varname> and <varname>propagatedBuildInputs</varname> attributes. If something is exclusively a build-time dependency, use <varname>buildInputs</varname>; if it's (also) a runtime dependency, use <varname>propagatedBuildInputs</varname>. For instance, this builds a Perl module that has runtime dependencies on a bunch of other modules:
<programlisting>
ClassC3Componentised = buildPerlPackage rec {
name = "Class-C3-Componentised-1.0004";
@ -139,11 +97,7 @@ ClassC3Componentised = buildPerlPackage rec {
</para>
<para>
On Darwin, if a script has too many <literal>-I<replaceable>dir</replaceable></literal> flags in its first line (its “shebang line”), it will not run. This can be worked around by calling the <literal>shortenPerlShebang</literal> function from the <literal>postInstall</literal> phase:
<programlisting>
{ stdenv, buildPerlPackage, fetchurl, shortenPerlShebang }:
@ -162,20 +116,14 @@ ImageExifTool = buildPerlPackage {
'';
};
</programlisting>
This will remove the <literal>-I</literal> flags from the shebang line, rewrite them in the <literal>use lib</literal> form, and put them on the next line instead. This function can be given any number of Perl scripts as arguments; it will modify them in-place.
</para>
<section xml:id="ssec-generation-from-CPAN">
<title>Generation from CPAN</title>
<para>
Nix expressions for Perl packages can be generated (almost) automatically from CPAN. This is done by the program <command>nix-generate-from-cpan</command>, which can be installed as follows:
</para>
<screen>
@ -183,9 +131,7 @@ ImageExifTool = buildPerlPackage {
</screen>
<para>
This program takes a Perl module name, looks it up on CPAN, fetches and unpacks the corresponding package, and prints a Nix expression on standard output. For example:
<screen>
<prompt>$ </prompt>nix-generate-from-cpan XML::Simple
XMLSimple = buildPerlPackage rec {
@ -201,9 +147,7 @@ ImageExifTool = buildPerlPackage {
};
};
</screen>
The output can be pasted into <filename>pkgs/top-level/perl-packages.nix</filename> or wherever else you need it.
</para>
</section>
@ -211,13 +155,7 @@ ImageExifTool = buildPerlPackage {
<title>Cross-compiling modules</title>
<para>
Nixpkgs has experimental support for cross-compiling Perl modules. In many cases, it will just work out of the box, even for modules with native extensions. Sometimes, however, the Makefile.PL for a module may (indirectly) import a native module. In that case, you will need to make a stub for that module that will satisfy the Makefile.PL and install it into <filename>lib/perl5/site_perl/cross_perl/${perl.version}</filename>. See the <varname>postInstall</varname> for <varname>DBI</varname> for an example.
</para>
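<para>
 A rough sketch of such a stub follows; the module name <literal>Foo::Bar</literal> is purely illustrative, and <varname>perl</varname> is assumed to be in scope (see the real <varname>postInstall</varname> of <varname>DBI</varname> for the authoritative pattern):
<programlisting>
postInstall = ''
  # Install a stub so that the Makefile.PL of dependent packages can load the
  # module at build time while cross-compiling.
  stubDir=$out/lib/perl5/site_perl/cross_perl/${perl.version}/Foo
  mkdir -p $stubDir
  echo 'package Foo::Bar; 1;' > $stubDir/Bar.pm
'';
</programlisting>
</para>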
</section>
</section>
@ -72,8 +72,9 @@ Now you can use the Python interpreter, as well as the extra packages (`numpy`,
##### Environment defined in `~/.config/nixpkgs/config.nix`
If you prefer, you could also add the environment as a package override to the
Nixpkgs set, e.g. using `config.nix`,
```nix
{ # ...
@ -83,15 +84,18 @@ using `config.nix`,
}
```
and install it in your profile with
```shell
nix-env -iA nixpkgs.myEnv
```
The environment is installed by referring to the attribute, assuming the
`nixpkgs` channel was used.
##### Environment defined in `/etc/nixos/configuration.nix`
For the sake of completeness, here's another example of how to install the
environment system-wide.
```nix
{ # ...
@ -109,59 +113,96 @@ into a profile. For development you may need to use multiple environments.
`nix-shell` gives the possibility to temporarily load another environment, akin
to `virtualenv`.
There are two methods for loading a shell with Python packages. The first and
recommended method is to create an environment with `python.buildEnv` or
`python.withPackages` and load that. E.g.
```sh
$ nix-shell -p 'python35.withPackages(ps: with ps; [ numpy toolz ])'
```
opens a shell from which you can launch the interpreter
```sh
[nix-shell:~] python3
```
The other method, which is not recommended, does not create an environment and
requires you to list the packages directly,
```sh
$ nix-shell -p python35.pkgs.numpy python35.pkgs.toolz
```
Again, it is possible to launch the interpreter from the shell. The Python
interpreter has the attribute `pkgs` which contains all Python libraries for
that specific interpreter.
##### Load environment from `.nix` expression
As explained in the Nix manual, `nix-shell` can also load an
expression from a `.nix` file. Say we want to have Python 3.5, `numpy`
and `toolz`, like before, in an environment. Consider a `shell.nix` file
with
```nix
with import <nixpkgs> {};
(python35.withPackages (ps: [ps.numpy ps.toolz])).env
```
Executing `nix-shell` gives you again a Nix shell from which you can run Python.
What's happening here?
1. We begin with importing the Nix Packages collections. `import <nixpkgs>`
imports the `<nixpkgs>` function, `{}` calls it and the `with` statement
brings all attributes of `nixpkgs` in the local scope. These attributes form
the main package set.
2. Then we create a Python 3.5 environment with the `withPackages` function.
3. The `withPackages` function expects us to provide a function as an argument
that takes the set of all python packages and returns a list of packages to
include in the environment. Here, we select the packages `numpy` and `toolz`
from the package set.
To combine this with `mkShell` you can:
```nix
with import <nixpkgs> {};
let
pythonEnv = python35.withPackages (ps: [
ps.numpy
ps.toolz
]);
in mkShell {
buildInputs = [
pythonEnv
hello
];
}
```
##### Execute command with `--run`
A convenient option with `nix-shell` is the `--run`
option, with which you can execute a command in the `nix-shell`. We can
e.g. directly open a Python shell
```sh
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
```
or run a script
```sh
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
```
##### `nix-shell` as shebang
In fact, for the second use case, there is a more convenient method. You can add
a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) to your script
specifying which dependencies `nix-shell` needs. With the following shebang, you
can just execute `./myscript.py`, and it will make available all dependencies
and run the script in the `python3` shell.
```py
#! /usr/bin/env nix-shell
@ -200,7 +241,7 @@ buildPythonPackage rec {
doCheck = false;
meta = with lib; {
homepage = https://github.com/pytoolz/toolz;
homepage = "https://github.com/pytoolz/toolz";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
@ -252,6 +293,7 @@ with import <nixpkgs> {};
in python35.withPackages (ps: [ps.numpy my_toolz])
).env
```
Executing `nix-shell` will result in an environment in which you can use
Python 3.5 and the `toolz` package. As you can see we had to explicitly mention
for which Python version we want to build a package.
@ -293,7 +335,7 @@ buildPythonPackage rec {
propagatedBuildInputs = [ numpy multipledispatch dateutil ];
meta = with lib; {
homepage = https://github.com/ContinuumIO/datashape;
homepage = "https://github.com/ContinuumIO/datashape";
description = "A data description language";
license = licenses.bsd2;
maintainers = with maintainers; [ fridh ];
@ -327,7 +369,7 @@ buildPythonPackage rec {
meta = with lib; {
description = "Pythonic binding for the libxml2 and libxslt libraries";
homepage = https://lxml.de;
homepage = "https://lxml.de";
license = licenses.bsd3;
maintainers = with maintainers; [ sjourdois ];
};
@ -337,12 +379,12 @@ buildPythonPackage rec {
In this example `lxml` and Nix are able to work out exactly where the relevant
files of the dependencies are. This is not always the case.
The example below shows bindings to The Fastest Fourier Transform in the West,
commonly known as FFTW. On Nix we have separate packages of FFTW for the
different types of floats (`"single"`, `"double"`, `"long-double"`). The
bindings need all three types, and therefore we add all three as `buildInputs`.
The bindings don't expect to find each of them in a different folder, and
therefore we have to set `LDFLAGS` and `CFLAGS`.
```nix
{ lib, pkgs, buildPythonPackage, fetchPypi, numpy, scipy }:
@ -386,17 +428,18 @@ instead of installing the package this command creates a special link to the pro
That way, you can run updated code without having to reinstall after each and every change you make.
Development mode is also available. Let's see how you can use it.
In the previous Nix expression the source was fetched from a URL. We can also
refer to a local source instead using `src = ./path/to/source/tree;`
If we create a `shell.nix` file which calls `buildPythonPackage`, and if `src`
is a local source, and if the local source has a `setup.py`, then development
mode is activated.
In the following example we create a simple environment that has a Python 3.5
version of our package in it, as well as its dependencies and other packages we
like to have in the environment, all specified with `propagatedBuildInputs`.
Indeed, we can just add any package we like to have in our environment to
`propagatedBuildInputs`.
```nix
with import <nixpkgs> {};
@ -409,7 +452,8 @@ buildPythonPackage rec {
}
```
It is important to note that due to how development mode is implemented on Nix
it is not possible to have multiple packages simultaneously in development mode.
### Organising your packages
@ -478,14 +522,14 @@ and in this case the `python35` interpreter is automatically used.
### Interpreters
Versions 2.7, 3.5, 3.6 and 3.7 of the CPython interpreter are available as
respectively `python27`, `python35`, `python36` and `python37`. The aliases
`python2` and `python3` correspond to respectively `python27` and
Versions 2.7, 3.5, 3.6, 3.7 and 3.8 of the CPython interpreter are available as
respectively `python27`, `python35`, `python36`, `python37` and `python38`. The
aliases `python2` and `python3` correspond to respectively `python27` and
`python37`. The default interpreter, `python`, maps to `python2`. The PyPy
interpreters compatible with Python 2.7 and 3 are available as `pypy27` and
`pypy3`, with aliases `pypy2` mapping to `pypy27` and `pypy` mapping to `pypy2`.
The Nix expressions for the interpreters can be found in
`pkgs/development/interpreters/python`.
All packages depending on any Python interpreter get appended
`out/{python.sitePackages}` to `$PYTHONPATH` if such directory
@ -514,9 +558,10 @@ Python libraries and applications that use `setuptools` or
`buildPythonApplication` functions. These two functions also support installing a `wheel`.
All Python packages reside in `pkgs/top-level/python-packages.nix` and all
applications elsewhere. In case a package is used as both a library and an
application, then the package should be in `pkgs/top-level/python-packages.nix`
since only those packages are made available for all interpreter versions. The
preferred location for library expressions is in
`pkgs/development/python-modules`. It is important that these packages are
called from `pkgs/top-level/python-packages.nix` and not elsewhere, to guarantee
the right version of the package is built.
@ -544,6 +589,7 @@ The `buildPythonPackage` function is implemented in
using setup hooks.
The following is an example:
```nix
{ lib, buildPythonPackage, fetchPypi, hypothesis, setuptools_scm, attrs, py, setuptools, six, pluggy }:
@ -590,38 +636,67 @@ as the interpreter unless overridden otherwise.
##### `buildPythonPackage` parameters
All parameters from the `stdenv.mkDerivation` function are still supported. The
following are specific to `buildPythonPackage` (a short sketch follows the
parameter list):
* `catchConflicts ? true`: If `true`, abort package build if a package name
appears more than once in dependency tree. Default is `true`.
* `disabled` ? false: If `true`, package is not built for the particular Python
interpreter version.
* `dontWrapPythonPrograms ? false`: Skip wrapping of python programs.
* `permitUserSite ? false`: Skip setting the `PYTHONNOUSERSITE` environment
variable in wrapped programs.
* `installFlags ? []`: A list of strings. Arguments to be passed to `pip
install`. To pass options to `python setup.py install`, use
`--install-option`. E.g., `installFlags=["--install-option='--cpp_implementation'"]`.
* `format ? "setuptools"`: Format of the source. Valid options are
`"setuptools"`, `"pyproject"`, `"flit"`, `"wheel"`, and `"other"`.
`"setuptools"` is for when the source has a `setup.py` and `setuptools` is
used to build a wheel, `flit`, in case `flit` should be used to build a wheel,
and `wheel` in case a wheel is provided. Use `other` when a custom
`buildPhase` and/or `installPhase` is needed.
* `makeWrapperArgs ? []`: A list of strings. Arguments to be passed to
`makeWrapper`, which wraps generated binaries. By default, the arguments to
`makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling
the binary. Additional arguments here can allow a developer to set environment
variables which will be available when the binary is run. For example,
`makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
* `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this
defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications
to `""`.
* `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages
in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
* `preShellHook`: Hook to execute commands before `shellHook`.
* `postShellHook`: Hook to execute commands after `shellHook`.
* `removeBinByteCode ? true`: Remove bytecode from `/bin`. Bytecode is only
created when the filenames end with `.py`.
* `setupPyGlobalFlags ? []`: List of flags passed to `setup.py` command.
* `setupPyBuildFlags ? []`: List of flags passed to `setup.py build_ext` command.
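
The sketch below pulls a few of these parameters together; the package name, hash
and flags are made-up placeholders:

```nix
{ lib, buildPythonPackage, fetchPypi, isPy27 }:

buildPythonPackage rec {
  pname = "example-pkg";      # placeholder name
  version = "1.0.0";
  format = "setuptools";      # the source ships a setup.py

  disabled = isPy27;          # do not build for Python 2.7

  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };

  setupPyBuildFlags = [ "--inplace" ];    # passed to `setup.py build_ext`
  makeWrapperArgs = [ "--set FOO BAR" ];  # extra environment variable in wrapped programs
}
```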
The `stdenv.mkDerivation` function accepts various parameters for describing
build inputs (see "Specifying dependencies"). The following are of special
interest for Python packages, either because these are primarily used, or
because their behaviour is different:
* `nativeBuildInputs ? []`: Build-time only dependencies. Typically executables
as well as the items listed in `setup_requires`.
* `buildInputs ? []`: Build and/or run-time dependencies that need to be
compiled for the host machine. Typically non-Python libraries which are being
linked.
* `checkInputs ? []`: Dependencies needed for running the `checkPhase`. These
are added to `nativeBuildInputs` when `doCheck = true`. Items listed in
`tests_require` go here.
* `propagatedBuildInputs ? []`: Aside from propagating dependencies,
`buildPythonPackage` also injects code into and wraps executables with the
paths included in this list. Items listed in `install_requires` go here.
##### Overriding Python packages
The `buildPythonPackage` function has an `overridePythonAttrs` method that can be
used to override the package. In the following example we create an environment
where we have the `blaze` package using an older version of `pandas`. We
override first the Python interpreter and pass `packageOverrides` which contains
the overrides for packages in the package set.
```nix
with import <nixpkgs> {};
@ -707,15 +782,18 @@ youtube-dl = with pythonPackages; toPythonApplication youtube-dl;
#### `toPythonModule` function
In some cases, such as bindings, a package is created using
`stdenv.mkDerivation` and added as attribute in `all-packages.nix`. The Python
bindings should be made available from `python-packages.nix`. The
`toPythonModule` function takes a derivation and makes certain Python-specific
modifications.
```nix
opencv = toPythonModule (pkgs.opencv.override {
enablePython = true;
pythonPackages = self;
});
```
Do pay attention to passing in the right Python version!
#### `python.buildEnv` function
@ -723,6 +801,7 @@ Do pay attention to passing in the right Python version!
Python environments can be created using the low-level `pkgs.buildEnv` function.
This example shows how to create an environment that has the Pyramid Web Framework.
Saving the following as `default.nix`
```nix
with import <nixpkgs> {};
@ -733,6 +812,7 @@ python.buildEnv.override {
```
and running `nix-build` will create
```
/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env
```
@ -742,6 +822,7 @@ with wrapped binaries in `bin/`.
You can also use the `env` attribute to create local environments with needed
packages installed. This is somewhat comparable to `virtualenv`. For example,
running `nix-shell` with the following `shell.nix`
```nix
with import <nixpkgs> {};
@ -759,7 +840,8 @@ specified packages in its path.
* `extraLibs`: List of packages installed inside the environment.
* `postBuild`: Shell command executed after the build of environment.
* `ignoreCollisions`: Ignore file collisions inside the environment (default is `false`).
* `permitUserSite`: Skip setting the `PYTHONNOUSERSITE` environment variable in
wrapped binaries in the environment.
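
For instance, these arguments can be combined in an override like the following
sketch (the chosen package and the `postBuild` command are illustrative):

```nix
with import <nixpkgs> {};

python3.buildEnv.override {
  extraLibs = [ python3Packages.pyramid ];  # packages installed inside the environment
  ignoreCollisions = true;                  # do not fail on colliding files
  postBuild = "echo built my Python env";   # runs after the environment is assembled
}
```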
#### `python.withPackages` function
@ -767,15 +849,17 @@ The `python.withPackages` function provides a simpler interface to the `python.b
It takes a function as an argument that is passed the set of python packages and returns the list
of the packages to be included in the environment. Using the `withPackages` function, the previous
example for the Pyramid Web Framework environment can be written like this:
```nix
with import <nixpkgs> {};
python.withPackages (ps: [ps.pyramid])
```
`withPackages` passes the correct package set for the specific interpreter
version as an argument to the function. In the above example, `ps` equals
`pythonPackages`. But you can also easily switch to using python3:
```nix
with import <nixpkgs> {};
@ -784,27 +868,35 @@ python3.withPackages (ps: [ps.pyramid])
Now, `ps` is set to `python3Packages`, matching the version of the interpreter.
As `python.withPackages` simply uses `python.buildEnv` under the hood, it also
supports the `env` attribute. The `shell.nix` file from the previous section can
thus be also written like this:
```nix
with import <nixpkgs> {};
(python36.withPackages (ps: [ps.numpy ps.requests])).env
```
In contrast to `python.buildEnv`, `python.withPackages` does not support the
more advanced options such as `ignoreCollisions = true` or `postBuild`. If you
need them, you have to use `python.buildEnv`.
Python 2 namespace packages may provide `__init__.py` that collide. In that case
`python.buildEnv` should be used with `ignoreCollisions = true`.
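
A minimal sketch; the two `zope` packages merely stand in for any pair of
namespace packages that ship a colliding `__init__.py`:

```nix
with import <nixpkgs> {};

# Both packages provide zope/__init__.py, so collisions have to be ignored.
python2.buildEnv.override {
  extraLibs = with python2Packages; [ zope_interface zope_component ];
  ignoreCollisions = true;
}
```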
#### Setup hooks
The following are setup hooks specifically for Python packages. Most of these
are used in `buildPythonPackage`.
- `eggUnpackhook` to move an egg to the correct folder so it can be installed
with the `eggInstallHook`
- `eggBuildHook` to skip building for eggs.
- `eggInstallHook` to install eggs.
- `flitBuildHook` to build a wheel using `flit`.
- `pipBuildHook` to build a wheel using `pip` and PEP 517. Note a build system
(e.g. `setuptools` or `flit`) should still be added as `nativeBuildInput`.
- `pipInstallHook` to install wheels.
- `pytestCheckHook` to run tests with `pytest`.
- `pythonCatchConflictsHook` to check that a Python package does not already exist.
@ -812,7 +904,10 @@ used in `buildPythonPackage`.
- `pythonRemoveBinBytecode` to remove bytecode from the `/bin` folder.
- `setuptoolsBuildHook` to build a wheel using `setuptools`.
- `setuptoolsCheckHook` to run tests with `python setup.py test`.
- `venvShellHook` to source a Python 3 `venv` at the `venvDir` location. A
`venv` is created if it does not yet exist.
- `wheelUnpackHook` to move a wheel to the correct folder so it can be installed
with the `pipInstallHook`.
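As a rough sketch of how one of these hooks is wired into a package, the
following hypothetical expression uses `pytestCheckHook`; the package name and
hash are placeholders, not a real PyPI project:

```nix
{ lib, buildPythonPackage, fetchPypi, pytestCheckHook }:

buildPythonPackage rec {
  pname = "example";          # hypothetical package
  version = "1.0.0";

  src = fetchPypi {
    inherit pname version;
    sha256 = lib.fakeSha256;  # replace with the real hash
  };

  # The hook provides a checkPhase that runs the test suite with pytest.
  checkInputs = [ pytestCheckHook ];
}
```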
### Development mode
@ -834,11 +929,11 @@ pythonPackages.buildPythonPackage {
}
```
Running `nix-shell` with no arguments should give you
the environment in which the package would be built with
`nix-build`.
Running `nix-shell` with no arguments should give you the environment in which
the package would be built with `nix-build`.
Shortcut to set up environments with C headers/libraries and Python packages:
```shell
nix-shell -p pythonPackages.pyramid zlib libjpeg git
```
@ -850,20 +945,22 @@ Note: There is a boolean value `lib.inNixShell` set to `true` if nix-shell is in
Packages inside nixpkgs are written by hand. However, many tools exist in the
community to help save time. No tool is preferred at the moment.
- [python2nix](https://github.com/proger/python2nix) by Vladimir Kirillov
- [pypi2nix](https://github.com/garbas/pypi2nix) by Rok Garbas
- [pypi2nix](https://github.com/offlinehacker/pypi2nix) by Jaka Hudoklin
- [pypi2nix](https://github.com/nix-community/pypi2nix): Generate Nix
expressions for your Python project. Note that [sharing derivations from
pypi2nix with nixpkgs is possible but not
encouraged](https://github.com/nix-community/pypi2nix/issues/222#issuecomment-443497376).
- [python2nix](https://github.com/proger/python2nix) by Vladimir Kirillov.
### Deterministic builds
The Python interpreters are now built deterministically.
Minor modifications had to be made to the interpreters in order to generate
deterministic bytecode. This has security implications and is relevant for
those using Python in a `nix-shell`.
The Python interpreters are now built deterministically. Minor modifications had
to be made to the interpreters in order to generate deterministic bytecode. This
has security implications and is relevant for those using Python in a
`nix-shell`.
When the environment variable `DETERMINISTIC_BUILD` is set, all bytecode will have timestamp 1.
The `buildPythonPackage` function sets `DETERMINISTIC_BUILD=1` and
[PYTHONHASHSEED=0](https://docs.python.org/3.5/using/cmdline.html#envvar-PYTHONHASHSEED).
When the environment variable `DETERMINISTIC_BUILD` is set, all bytecode will
have timestamp 1. The `buildPythonPackage` function sets `DETERMINISTIC_BUILD=1`
and [PYTHONHASHSEED=0](https://docs.python.org/3.5/using/cmdline.html#envvar-PYTHONHASHSEED).
Both are also exported in `nix-shell`.
@ -878,9 +975,10 @@ example of such a situation is when `py.test` is used.
#### Common issues
- Non-working tests can often be deselected. By default `buildPythonPackage` runs `python setup.py test`.
Most python modules follows the standard test protocol where the pytest runner can be used instead.
`py.test` supports a `-k` parameter to ignore test methods or classes:
* Non-working tests can often be deselected. By default `buildPythonPackage`
runs `python setup.py test`. Most Python modules follow the standard test
protocol where the pytest runner can be used instead. `py.test` supports a
`-k` parameter to ignore test methods or classes:
```nix
buildPythonPackage {
@ -892,7 +990,8 @@ example of such a situation is when `py.test` is used.
'';
}
```
- Tests that attempt to access `$HOME` can be fixed by using the following work-around before running tests (e.g. `preCheck`): `export HOME=$(mktemp -d)`
* Tests that attempt to access `$HOME` can be fixed by using the following
work-around before running tests (e.g. `preCheck`): `export HOME=$(mktemp -d)`
(see the sketch below).
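A minimal sketch of that work-around inside a package expression (the remaining
attributes are elided, as in the example above):

```nix
buildPythonPackage {
  # ... pname, version, src, etc. as usual

  # Give the tests a writable, throw-away HOME directory.
  preCheck = ''
    export HOME=$(mktemp -d)
  '';
}
```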
## FAQ
@ -904,8 +1003,9 @@ should also be done when packaging `A`.
### How to override a Python package?
We can override the interpreter and pass `packageOverrides`.
In the following example we rename the `pandas` package and build it.
We can override the interpreter and pass `packageOverrides`. In the following
example we rename the `pandas` package and build it.
```nix
with import <nixpkgs> {};
@ -918,14 +1018,16 @@ with import <nixpkgs> {};
in python.withPackages(ps: [ps.pandas])).env
```
Using `nix-build` on this expression will build an environment that contains the
package `pandas` but with the new name `foo`.
All packages in the package set will use the renamed package.
A typical use case is to switch to another version of a certain package.
For example, in the Nixpkgs repository we have multiple versions of `django` and `scipy`.
In the following example we use a different version of `scipy` and create an environment that uses it.
All packages in the Python package set will now use the updated `scipy` version.
All packages in the package set will use the renamed package. A typical use case
is to switch to another version of a certain package. For example, in the
Nixpkgs repository we have multiple versions of `django` and `scipy`. In the
following example we use a different version of `scipy` and create an
environment that uses it. All packages in the Python package set will now use
the updated `scipy` version.
```nix
with import <nixpkgs> {};
@ -937,10 +1039,13 @@ with import <nixpkgs> {};
in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
).env
```
The requested package `blaze` depends on `pandas` which itself depends on `scipy`.
If you want the whole of Nixpkgs to use your modifications, then you can use `overlays`
as explained in this manual. In the following example we build a `inkscape` using a different version of `numpy`.
If you want the whole of Nixpkgs to use your modifications, then you can use
`overlays` as explained in this manual. In the following example we build
`inkscape` using a different version of `numpy`.
```nix
let
pkgs = import <nixpkgs> {};
@ -961,19 +1066,28 @@ Executing `python setup.py bdist_wheel` in a `nix-shell `fails with
ValueError: ZIP does not support timestamps before 1980
```
This is because files from the Nix store (which have a timestamp of the UNIX epoch of January 1, 1970) are included in the .ZIP, but .ZIP archives follow the DOS convention of counting timestamps from 1980.
This is because files from the Nix store (which have a timestamp of the UNIX
epoch of January 1, 1970) are included in the .ZIP, but .ZIP archives follow the
DOS convention of counting timestamps from 1980.
The command `bdist_wheel` reads the `SOURCE_DATE_EPOCH` environment variable, which `nix-shell` sets to 1. Unsetting this variable or giving it a value corresponding to 1980 or later enables building wheels.
The command `bdist_wheel` reads the `SOURCE_DATE_EPOCH` environment variable,
which `nix-shell` sets to 1. Unsetting this variable or giving it a value
corresponding to 1980 or later enables building wheels.
Use 1980 as timestamp:
```shell
nix-shell --run "SOURCE_DATE_EPOCH=315532800 python3 setup.py bdist_wheel"
```
or the current time:
```shell
nix-shell --run "SOURCE_DATE_EPOCH=$(date +%s) python3 setup.py bdist_wheel"
```
or unset `SOURCE_DATE_EPOCH`:
```shell
nix-shell --run "unset SOURCE_DATE_EPOCH; python3 setup.py bdist_wheel"
```
@ -981,13 +1095,18 @@ nix-shell --run "unset SOURCE_DATE_EPOCH; python3 setup.py bdist_wheel"
### `install_data` / `data_files` problems
If you get the following error:
```
could not create '/nix/store/6l1bvljpy8gazlsw2aw9skwwp4pmvyxw-python-2.7.8/etc':
Permission denied
```
This is a [known bug](https://github.com/pypa/setuptools/issues/130) in `setuptools`.
Setuptools `install_data` does not respect `--prefix`. An example of such package using the feature is `pkgs/tools/X11/xpra/default.nix`.
This is a [known bug](https://github.com/pypa/setuptools/issues/130) in
`setuptools`. Setuptools `install_data` does not respect `--prefix`. An example
of such a package using this feature is `pkgs/tools/X11/xpra/default.nix`.
As a workaround, install it as an extra `preInstall` step:
```shell
${python.interpreter} setup.py install_data --install-dir=$out --root=$out
sed -i '/ = data\_files/d' setup.py
@ -1008,49 +1127,102 @@ If you want to create a Python environment for development, then the recommended
method is to use `nix-shell`, either with or without the `python.buildEnv`
function.
### How to consume python modules using pip in a virtualenv like I am used to on other Operating Systems ?
### How to consume python modules using pip in a virtual environment like I am used to on other Operating Systems?
This is an example of a `default.nix` for a `nix-shell`, which allows to consume a `virtualenv` environment,
and install python modules through `pip` the traditional way.
While this approach is not very idiomatic from a Nix perspective, it can still
be useful when dealing with pre-existing projects or in situations where it's
not feasible or desirable to write derivations for all required dependencies.
Create this `default.nix` file, together with a `requirements.txt` and simply execute `nix-shell`.
This is an example of a `default.nix` for a `nix-shell`, which allows you to
consume a virtual environment created by `venv`, and to install Python modules
through `pip` the traditional way.
Create this `default.nix` file, together with a `requirements.txt` and simply
execute `nix-shell`.
```nix
with import <nixpkgs> {};
with python27Packages;
with import <nixpkgs> { };
stdenv.mkDerivation {
let
pythonPackages = python3Packages;
in pkgs.mkShell rec {
name = "impurePythonEnv";
src = null;
venvDir = "./.venv";
buildInputs = [
# these packages are required for virtualenv and pip to work:
#
python27Full
python27Packages.virtualenv
python27Packages.pip
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
# A python interpreter including the 'venv' module is required to bootstrap
# the environment.
pythonPackages.python
# This executes some shell code to initialize a venv in $venvDir before
# dropping into the shell
pythonPackages.venvShellHook
# Those are dependencies that we would like to use from nixpkgs, which will
# add them to PYTHONPATH and thus make them accessible from within the venv.
pythonPackages.numpy
pythonPackages.requests
# In this particular example, in order to compile any binary extensions they may
# require, the python modules listed in the hypothetical requirements.txt need
# the following packages to be installed locally:
taglib
openssl
git
libxml2
libxslt
libzip
stdenv
zlib
];
# Now we can execute any commands within the virtual environment.
# This is optional and can be left out to run pip manually.
postShellHook = ''
pip install -r requirements.txt
'';
}
```
In case the supplied venvShellHook is insufficient, or when Python 2 support is
needed, you can define your own shell hook and adapt it to your needs as in the
following example:
```nix
with import <nixpkgs> { };
let
venvDir = "./.venv";
pythonPackages = python3Packages;
in pkgs.mkShell rec {
name = "impurePythonEnv";
buildInputs = [
pythonPackages.python
# Needed when using python 2.7
# pythonPackages.virtualenv
# ...
];
# This is very close to how venvShellHook is implemented, but
# adapted to use 'virtualenv'
shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s)
virtualenv --no-setuptools venv
export PATH=$PWD/venv/bin:$PATH
if [ -d "${venvDir}" ]; then
echo "Skipping venv creation, '${venvDir}' already exists"
else
echo "Creating new venv environment in path: '${venvDir}'"
# Note that the module venv was only introduced in python 3, so for 2.7
# this needs to be replaced with a call to virtualenv
${pythonPackages.python.interpreter} -m venv "${venvDir}"
fi
# Under some circumstances it might be necessary to add your virtual
# environment to PYTHONPATH, which you can do here too;
# PYTHONPATH=$PWD/${venvDir}/${pythonPackages.python.sitePackages}/:$PYTHONPATH
source "${venvDir}/bin/activate"
# As in the previous example, this is optional.
pip install -r requirements.txt
'';
}
@ -1082,11 +1254,11 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul
```
`pythonPackages.zerobin` is now globally overridden. All packages and also the
`zerobin` NixOS service use the new definition.
Note that `python-super` refers to the old package set and `python-self`
to the new, overridden version.
`zerobin` NixOS service use the new definition. Note that `python-super` refers
to the old package set and `python-self` to the new, overridden version.
To modify only a Python package set instead of a whole Python derivation, use this snippet:
To modify only a Python package set instead of a whole Python derivation, use
this snippet:
```nix
myPythonPackages = pythonPackages.override {
@ -1118,11 +1290,12 @@ self: super: {
### How to use Intel's MKL with numpy and scipy?
A `site.cfg` is created that configures BLAS based on the `blas` parameter
of the `numpy` derivation. By passing in `mkl`, `numpy` and packages depending
on `numpy` will be built with `mkl`.
A `site.cfg` is created that configures BLAS based on the `blas` parameter of
the `numpy` derivation. By passing in `mkl`, `numpy` and packages depending on
`numpy` will be built with `mkl`.
The following is an overlay that configures `numpy` to use `mkl`:
```nix
self: super: {
python37 = super.python37.override {
@ -1158,10 +1331,21 @@ In a `setup.py` or `setup.cfg` it is common to declare dependencies:
The following rules should be respected:
* Python libraries are called from `python-packages.nix` and packaged with `buildPythonPackage`. The expression of a library should be in `pkgs/development/python-modules/<name>/default.nix`. Libraries in `pkgs/top-level/python-packages.nix` are sorted quasi-alphabetically to avoid merge conflicts.
* Python applications live outside of `python-packages.nix` and are packaged with `buildPythonApplication`.
* Python libraries are called from `python-packages.nix` and packaged with
`buildPythonPackage`. The expression of a library should be in
`pkgs/development/python-modules/<name>/default.nix`. Libraries in
`pkgs/top-level/python-packages.nix` are sorted quasi-alphabetically to avoid
merge conflicts.
* Python applications live outside of `python-packages.nix` and are packaged
with `buildPythonApplication`.
* Make sure libraries build for all Python interpreters.
* By default we enable tests. Make sure the tests are found and, in the case of libraries, are passing for all interpreters. If certain tests fail they can be disabled individually. Try to avoid disabling the tests altogether. In any case, when you disable tests, leave a comment explaining why.
* Commit names of Python libraries should reflect that they are Python libraries, so write for example `pythonPackages.numpy: 1.11 -> 1.12`.
* Attribute names in `python-packages.nix` should be normalized according to [PEP 0503](https://www.python.org/dev/peps/pep-0503/#normalized-names).
This means that characters should be converted to lowercase and `.` and `_` should be replaced by a single `-` (foo-bar-baz instead of Foo__Bar.baz )
* By default we enable tests. Make sure the tests are found and, in the case of
libraries, are passing for all interpreters. If certain tests fail they can be
disabled individually. Try to avoid disabling the tests altogether. In any
case, when you disable tests, leave a comment explaining why.
* Commit names of Python libraries should reflect that they are Python
libraries, so write for example `pythonPackages.numpy: 1.11 -> 1.12`.
* Attribute names in `python-packages.nix` should be normalized according to
[PEP 0503](https://www.python.org/dev/peps/pep-0503/#normalized-names). This
means that characters should be converted to lowercase and `.` and `_` should
be replaced by a single `-` (foo-bar-baz instead of Foo__Bar.baz), as sketched
below.
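A hypothetical entry in `python-packages.nix` that follows these rules could
look like this (the project name is made up for illustration):

```nix
# PyPI project "Foo__Bar.baz" gets the normalized attribute name foo-bar-baz.
foo-bar-baz = callPackage ../development/python-modules/foo-bar-baz { };
```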
View File
@ -4,16 +4,12 @@
<title>Qt</title>
<para>
This section describes the differences between Nix expressions for Qt
libraries and applications and Nix expressions for other C++ software. Some
knowledge of the latter is assumed. There are primarily two problems which
the Qt infrastructure is designed to address: ensuring consistent versioning
of all dependencies and finding dependencies at runtime.
This section describes the differences between Nix expressions for Qt libraries and applications and Nix expressions for other C++ software. Some knowledge of the latter is assumed. There are primarily two problems which the Qt infrastructure is designed to address: ensuring consistent versioning of all dependencies and finding dependencies at runtime.
</para>
<example xml:id='qt-default-nix'>
<title>Nix expression for a Qt package (<filename>default.nix</filename>)</title>
<programlisting>
<title>Nix expression for a Qt package (<filename>default.nix</filename>)</title>
<programlisting>
{ mkDerivation, lib, qtbase }: <co xml:id='qt-default-nix-co-1' />
mkDerivation { <co xml:id='qt-default-nix-co-2' />
@ -26,53 +22,36 @@ mkDerivation { <co xml:id='qt-default-nix-co-2' />
</example>
<calloutlist>
<callout arearefs='qt-default-nix-co-1'>
<para>
Import <literal>mkDerivation</literal> and Qt (such as
<literal>qtbase</literal> modules directly. <emphasis>Do not</emphasis>
import Qt package sets; the Qt versions of dependencies may not be
coherent, causing build and runtime failures.
</para>
</callout>
<callout arearefs='qt-default-nix-co-2'>
<para>
Use <literal>mkDerivation</literal> instead of
<literal>stdenv.mkDerivation</literal>. <literal>mkDerivation</literal>
is a wrapper around <literal>stdenv.mkDerivation</literal> which
applies some Qt-specific settings.
This deriver accepts the same arguments as
<literal>stdenv.mkDerivation</literal>; refer to
<xref linkend='chap-stdenv' /> for details.
</para>
<para>
To use another deriver instead of
<literal>stdenv.mkDerivation</literal>, use
<literal>mkDerivationWith</literal>:
<callout arearefs='qt-default-nix-co-1'>
<para>
Import <literal>mkDerivation</literal> and Qt (such as <literal>qtbase</literal>) modules directly. <emphasis>Do not</emphasis> import Qt package sets; the Qt versions of dependencies may not be coherent, causing build and runtime failures.
</para>
</callout>
<callout arearefs='qt-default-nix-co-2'>
<para>
Use <literal>mkDerivation</literal> instead of <literal>stdenv.mkDerivation</literal>. <literal>mkDerivation</literal> is a wrapper around <literal>stdenv.mkDerivation</literal> which applies some Qt-specific settings. This deriver accepts the same arguments as <literal>stdenv.mkDerivation</literal>; refer to <xref linkend='chap-stdenv' /> for details.
</para>
<para>
To use another deriver instead of <literal>stdenv.mkDerivation</literal>, use <literal>mkDerivationWith</literal>:
<programlisting>
mkDerivationWith myDeriver {
# ...
}
</programlisting>
If you cannot use <literal>mkDerivationWith</literal>, please refer to
<xref linkend='qt-runtime-dependencies' />.
</para>
</callout>
<callout arearefs='qt-default-nix-co-3'>
<para>
<literal>mkDerivation</literal> accepts the same arguments as
<literal>stdenv.mkDerivation</literal>, such as
<literal>buildInputs</literal>.
</para>
</callout>
If you cannot use <literal>mkDerivationWith</literal>, please refer to <xref linkend='qt-runtime-dependencies' />.
</para>
</callout>
<callout arearefs='qt-default-nix-co-3'>
<para>
<literal>mkDerivation</literal> accepts the same arguments as <literal>stdenv.mkDerivation</literal>, such as <literal>buildInputs</literal>.
</para>
</callout>
</calloutlist>
<formalpara xml:id='qt-runtime-dependencies'>
<title>Locating runtime dependencies</title>
<para>
Qt applications need to be wrapped to find runtime dependencies. If you
cannot use <literal>mkDerivation</literal> or
<literal>mkDerivationWith</literal> above, include
<literal>wrapQtAppsHook</literal> in <literal>nativeBuildInputs</literal>:
<title>Locating runtime dependencies</title>
<para>
Qt applications need to be wrapped to find runtime dependencies. If you cannot use <literal>mkDerivation</literal> or <literal>mkDerivationWith</literal> above, include <literal>wrapQtAppsHook</literal> in <literal>nativeBuildInputs</literal>:
<programlisting>
stdenv.mkDerivation {
# ...
@ -80,13 +59,11 @@ stdenv.mkDerivation {
nativeBuildInputs = [ wrapQtAppsHook ];
}
</programlisting>
</para>
</para>
</formalpara>
<para>
Entries added to <literal>qtWrapperArgs</literal> are used to modify the
wrappers created by <literal>wrapQtAppsHook</literal>. The entries are
passed as arguments to <xref linkend='fun-wrapProgram' />.
Entries added to <literal>qtWrapperArgs</literal> are used to modify the wrappers created by <literal>wrapQtAppsHook</literal>. The entries are passed as arguments to <xref linkend='fun-wrapProgram' />.
<programlisting>
mkDerivation {
# ...
@ -97,10 +74,7 @@ mkDerivation {
</para>
<para>
Set <literal>dontWrapQtApps</literal> to stop applications from being
wrapped automatically. It is required to wrap applications manually with
<literal>wrapQtApp</literal>, using the syntax of
<xref linkend='fun-wrapProgram' />:
Set <literal>dontWrapQtApps</literal> to stop applications from being wrapped automatically. It is required to wrap applications manually with <literal>wrapQtApp</literal>, using the syntax of <xref linkend='fun-wrapProgram' />:
<programlisting>
mkDerivation {
# ...
@ -115,16 +89,12 @@ mkDerivation {
<note>
<para>
<literal>wrapQtAppsHook</literal> ignores files that are non-ELF executables.
This means that scripts won't be automatically wrapped so you'll need to manually
wrap them as previously mentioned. An example of when you'd always need to do this
is with Python applications that use PyQT.
<literal>wrapQtAppsHook</literal> ignores files that are non-ELF executables. This means that scripts won't be automatically wrapped so you'll need to manually wrap them as previously mentioned. An example of when you'd always need to do this is with Python applications that use PyQt.
</para>
</note>
<para>
Libraries are built with every available version of Qt. Use the <literal>meta.broken</literal>
attribute to disable the package for unsupported Qt versions:
Libraries are built with every available version of Qt. Use the <literal>meta.broken</literal> attribute to disable the package for unsupported Qt versions:
<programlisting>
mkDerivation {
# ...
@ -136,13 +106,11 @@ mkDerivation {
</para>
<formalpara>
<title>Adding a library to Nixpkgs</title>
<para>
Add a Qt library to <filename>all-packages.nix</filename> by adding it to the
collection inside <literal>mkLibsForQt5</literal>. This ensures that the
library is built with every available version of Qt as needed.
<example xml:id='qt-library-all-packages-nix'>
<title>Adding a Qt library to <filename>all-packages.nix</filename></title>
<title>Adding a library to Nixpkgs</title>
<para>
Add a Qt library to <filename>all-packages.nix</filename> by adding it to the collection inside <literal>mkLibsForQt5</literal>. This ensures that the library is built with every available version of Qt as needed.
<example xml:id='qt-library-all-packages-nix'>
<title>Adding a Qt library to <filename>all-packages.nix</filename></title>
<programlisting>
{
# ...
@ -156,19 +124,16 @@ mkDerivation {
# ...
}
</programlisting>
</example>
</para>
</example>
</para>
</formalpara>
<formalpara>
<title>Adding an application to Nixpkgs</title>
<para>
Add a Qt application to <filename>all-packages.nix</filename> using
<literal>libsForQt5.callPackage</literal> instead of the usual
<literal>callPackage</literal>. The former ensures that all dependencies
are built with the same version of Qt.
<example xml:id='qt-application-all-packages-nix'>
<title>Adding a Qt application to <filename>all-packages.nix</filename></title>
<title>Adding an application to Nixpkgs</title>
<para>
Add a Qt application to <filename>all-packages.nix</filename> using <literal>libsForQt5.callPackage</literal> instead of the usual <literal>callPackage</literal>. The former ensures that all dependencies are built with the same version of Qt.
<example xml:id='qt-application-all-packages-nix'>
<title>Adding a Qt application to <filename>all-packages.nix</filename></title>
<programlisting>
{
# ...
@ -178,8 +143,7 @@ mkDerivation {
# ...
}
</programlisting>
</example>
</para>
</example>
</para>
</formalpara>
</section>
View File
@ -1,5 +1,5 @@
R packages
==========
R
=
## Installation
View File
@ -4,11 +4,7 @@
<title>Ruby</title>
<para>
There currently is support to bundle applications that are packaged as Ruby
gems. The utility "bundix" allows you to write a
<filename>Gemfile</filename>, let bundler create a
<filename>Gemfile.lock</filename>, and then convert this into a nix
expression that contains all Gem dependencies automatically.
There currently is support to bundle applications that are packaged as Ruby gems. The utility "bundix" allows you to write a <filename>Gemfile</filename>, let bundler create a <filename>Gemfile.lock</filename>, and then convert this into a nix expression that contains all Gem dependencies automatically.
</para>
<para>
@ -45,9 +41,7 @@ bundlerEnv rec {
</screen>
<para>
Please check in the <filename>Gemfile</filename>,
<filename>Gemfile.lock</filename> and the <filename>gemset.nix</filename> so
future updates can be run easily.
Please check in the <filename>Gemfile</filename>, <filename>Gemfile.lock</filename> and the <filename>gemset.nix</filename> so future updates can be run easily.
</para>
<para>
@ -62,10 +56,7 @@ $ nix-shell -p bundix --run 'bundix'
</screen>
<para>
For tools written in Ruby - i.e. where the desire is to install a package and
then execute e.g. <command>rake</command> at the command line, there is an
alternative builder called <literal>bundlerApp</literal>. Set up the
<filename>gemset.nix</filename> the same way, and then, for example:
For tools written in Ruby - i.e. where the desire is to install a package and then execute e.g. <command>rake</command> at the command line, there is an alternative builder called <literal>bundlerApp</literal>. Set up the <filename>gemset.nix</filename> the same way, and then, for example:
</para>
<screen>
@ -87,29 +78,11 @@ bundlerApp {
</screen>
<para>
The chief advantage of <literal>bundlerApp</literal> over
<literal>bundlerEnv</literal> is the executables introduced in the
environment are precisely those selected in the <literal>exes</literal> list,
as opposed to <literal>bundlerEnv</literal> which adds all the executables
made available by gems in the gemset, which can mean e.g.
<command>rspec</command> or <command>rake</command> in unpredictable versions
available from various packages.
The chief advantage of <literal>bundlerApp</literal> over <literal>bundlerEnv</literal> is the executables introduced in the environment are precisely those selected in the <literal>exes</literal> list, as opposed to <literal>bundlerEnv</literal> which adds all the executables made available by gems in the gemset, which can mean e.g. <command>rspec</command> or <command>rake</command> in unpredictable versions available from various packages.
</para>
<para>
Resulting derivations for both builders also have two helpful attributes,
<literal>env</literal> and <literal>wrappedRuby</literal>. The first one
allows one to quickly drop into <command>nix-shell</command> with the
specified environment present. E.g. <command>nix-shell -A sensu.env</command>
would give you an environment with Ruby preset so it has all the libraries
necessary for <literal>sensu</literal> in its paths. The second one can be
used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is a
derivation with <command>ruby</command> wrapped so it can find all the needed
dependencies. For example, to make a derivation <literal>my-script</literal>
for a <filename>my-script.rb</filename> (which should be placed in
<filename>bin</filename>) you should run <command>bundix</command> as
specified above and then use <literal>bundlerEnv</literal> like this:
Resulting derivations for both builders also have two helpful attributes, <literal>env</literal> and <literal>wrappedRuby</literal>. The first one allows one to quickly drop into <command>nix-shell</command> with the specified environment present. E.g. <command>nix-shell -A sensu.env</command> would give you an environment with Ruby preset so it has all the libraries necessary for <literal>sensu</literal> in its paths. The second one can be used to make derivations from custom Ruby scripts which have <filename>Gemfile</filename>s with their dependencies specified. It is a derivation with <command>ruby</command> wrapped so it can find all the needed dependencies. For example, to make a derivation <literal>my-script</literal> for a <filename>my-script.rb</filename> (which should be placed in <filename>bin</filename>) you should run <command>bundix</command> as specified above and then use <literal>bundlerEnv</literal> like this:
</para>
<programlisting>
View File
@ -4,7 +4,7 @@ author: Matthias Beyer
date: 2017-03-05
---
# User's Guide to the Rust Infrastructure
# Rust
To install the rust compiler and cargo put
@ -16,12 +16,6 @@ cargo
into the `environment.systemPackages` or bring them into
scope with `nix-shell -p rustc cargo`.
> If you are using NixOS and you want to use rust without a nix expression you
> probably want to add the following in your `configuration.nix` to build
> crates with C dependencies.
>
> environment.systemPackages = [binutils gcc gnumake openssl pkgconfig]
For daily builds (beta and nightly) use either rustup from
nixpkgs or use the [Rust nightlies
overlay](#using-the-rust-nightlies-overlay).
@ -32,21 +26,21 @@ Rust applications are packaged by using the `buildRustPackage` helper from `rust
```
rustPlatform.buildRustPackage rec {
name = "ripgrep-${version}";
version = "0.4.0";
pname = "ripgrep";
version = "11.0.2";
src = fetchFromGitHub {
owner = "BurntSushi";
repo = "ripgrep";
rev = "${version}";
sha256 = "0y5d1n6hkw85jb3rblcxqas2fp82h3nghssa4xqrhqnz25l799pj";
repo = pname;
rev = version;
sha256 = "1iga3320mgi7m853la55xip514a3chqsdi1a1rwv25lr9b1p7vd3";
};
cargoSha256 = "0q68qyl2h6i0qsz82z840myxlnjay8p1w5z7hfyr8fqp7wgwa9cx";
cargoSha256 = "17ldqr3asrdcsh4l29m3b5r37r5d0b3npq1lrgjmxb6vlx6a36qh";
meta = with stdenv.lib; {
description = "A fast line-oriented regex search tool, similar to ag and ack";
homepage = https://github.com/BurntSushi/ripgrep;
homepage = "https://github.com/BurntSushi/ripgrep";
license = licenses.unlicense;
maintainers = [ maintainers.tailhook ];
platforms = platforms.all;
@ -64,6 +58,21 @@ When the `Cargo.lock`, provided by upstream, is not in sync with the
added in `cargoPatches` will also be prepended to the patches in `patches` at
build-time.
Unless `legacyCargoFetcher` is set to `true`, the fetcher will also verify that
the `Cargo.lock` file is in sync with the `src` attribute, and will compress the
vendor directory into a tar.gz archive.
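If a package still relies on the previous fetching behaviour, the flag can be
set on the derivation itself; a minimal sketch, with the remaining attributes as
in the `ripgrep` example above:

```nix
rustPlatform.buildRustPackage {
  # ... pname, version, src and cargoSha256 as in the example above

  # Opt out of the new fetcher behaviour described above.
  legacyCargoFetcher = true;
}
```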
### Building a crate for a different target
To build your crate with a different cargo `--target` simply specify the `target` attribute:
```nix
pkgs.rustPlatform.buildRustPackage {
(...)
target = "x86_64-fortanix-unknown-sgx";
}
```
## Compiling Rust crates using Nix instead of Cargo
### Simple operation
@ -188,7 +197,7 @@ argument and returns a set that contains all attribute that should be
overwritten.
For more complicated cases, such as when parts of the crate's
derivation depend on the the crate's version, the `attrs` argument of
derivation depend on the crate's version, the `attrs` argument of
the override above can be read, as in the following example, which
patches the derivation:
View File
@ -4,8 +4,7 @@
<title>TeX Live</title>
<para>
Since release 15.09 there is a new TeX Live packaging that lives entirely
under attribute <varname>texlive</varname>.
Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute <varname>texlive</varname>.
</para>
<section xml:id="sec-language-texlive-users-guide">
@ -14,28 +13,23 @@
<itemizedlist>
<listitem>
<para>
For basic usage just pull <varname>texlive.combined.scheme-basic</varname>
for an environment with basic LaTeX support.
For basic usage just pull <varname>texlive.combined.scheme-basic</varname> for an environment with basic LaTeX support.
</para>
</listitem>
<listitem>
<para>
It typically won't work to use separately installed packages together.
Instead, you can build a custom set of packages like this:
It typically won't work to use separately installed packages together. Instead, you can build a custom set of packages like this:
<programlisting>
texlive.combine {
inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
}
</programlisting>
There are all the schemes, collections and a few thousand packages, as
defined upstream (perhaps with tiny differences).
There are all the schemes, collections and a few thousand packages, as defined upstream (perhaps with tiny differences).
</para>
</listitem>
<listitem>
<para>
By default you only get executables and files needed during runtime, and a
little documentation for the core packages. To change that, you need to
add <varname>pkgFilter</varname> function to <varname>combine</varname>.
By default you only get executables and files needed during runtime, and a little documentation for the core packages. To change that, you need to add a <varname>pkgFilter</varname> function to <varname>combine</varname>.
<programlisting>
texlive.combine {
# inherit (texlive) whatever-you-want;
@ -59,15 +53,103 @@ nix-repl> texlive.collection-<TAB>
</listitem>
<listitem>
<para>
Note that the wrapper assumes that the result has a chance to be useful.
For example, the core executables should be present, as well as some core
data files. The supported way of ensuring this is by including some
scheme, for example <varname>scheme-basic</varname>, into the combination.
Note that the wrapper assumes that the result has a chance to be useful. For example, the core executables should be present, as well as some core data files. The supported way of ensuring this is by including some scheme, for example <varname>scheme-basic</varname>, into the combination.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-language-texlive-custom-packages">
<title>Custom packages</title>
<para>
You may find that you need to use an external TeX package. A derivation for such a package has to provide the contents of the "texmf" directory in its output and provide the <varname>tlType</varname> attribute. Here is a (very verbose) example:
<programlisting><![CDATA[
with import <nixpkgs> {};
let
foiltex_run = stdenvNoCC.mkDerivation {
pname = "latex-foiltex";
version = "2.1.4b";
passthru.tlType = "run";
srcs = [
(fetchurl {
url = "http://mirrors.ctan.org/macros/latex/contrib/foiltex/foiltex.dtx";
sha256 = "07frz0krpz7kkcwlayrwrj2a2pixmv0icbngyw92srp9fp23cqpz";
})
(fetchurl {
url = "http://mirrors.ctan.org/macros/latex/contrib/foiltex/foiltex.ins";
sha256 = "09wkyidxk3n3zvqxfs61wlypmbhi1pxmjdi1kns9n2ky8ykbff99";
})
];
unpackPhase = ''
runHook preUnpack
for _src in $srcs; do
cp "$_src" $(stripHash "$_src")
done
runHook postUnpack
'';
nativeBuildInputs = [ texlive.combined.scheme-small ];
dontConfigure = true;
buildPhase = ''
runHook preBuild
# Generate the style files
latex foiltex.ins
runHook postBuild
'';
installPhase = ''
runHook preInstall
path="$out/tex/latex/foiltex"
mkdir -p "$path"
cp *.{cls,def,clo} "$path/"
runHook postInstall
'';
meta = with lib; {
description = "A LaTeX2e class for overhead transparencies";
license = licenses.unfreeRedistributable;
maintainers = with maintainers; [ veprbl ];
platforms = platforms.all;
};
};
foiltex = { pkgs = [ foiltex_run ]; };
latex_with_foiltex = texlive.combine {
inherit (texlive) scheme-small;
inherit foiltex;
};
in
runCommand "test.pdf" {
nativeBuildInputs = [ latex_with_foiltex ];
} ''
cat >test.tex <<EOF
\documentclass{foils}
\title{Presentation title}
\date{}
\begin{document}
\maketitle
\end{document}
EOF
pdflatex test.tex
cp test.pdf $out
''
]]></programlisting>
</para>
</section>
<section xml:id="sec-language-texlive-known-problems">
<title>Known problems</title>
@ -84,14 +166,12 @@ nix-repl> texlive.collection-<TAB>
</listitem>
<listitem>
<para>
feature/bug: when a package is rejected by <varname>pkgFilter</varname>,
its dependencies are still propagated;
feature/bug: when a package is rejected by <varname>pkgFilter</varname>, its dependencies are still propagated;
</para>
</listitem>
<listitem>
<para>
in case of any bugs or feature requests, file a github issue or better a
pull request and /cc @vcunat.
in case of any bugs or feature requests, file a github issue or better a pull request and /cc @vcunat.
</para>
</listitem>
</itemizedlist>
View File
@ -3,7 +3,7 @@ title: User's Guide for Vim in Nixpkgs
author: Marc Weber
date: 2016-06-25
---
# User's Guide to Vim Plugins/Addons/Bundles/Scripts in Nixpkgs
# Vim
Both Neovim and Vim can be configured to include your favorite plugins
and additional libraries.
View File
@ -5,21 +5,37 @@
<subtitle>Version <xi:include href=".version" parse="text" />
</subtitle>
</info>
<xi:include href="introduction.chapter.xml" />
<xi:include href="quick-start.xml" />
<xi:include href="package-specific-user-notes.xml" />
<xi:include href="stdenv.xml" />
<xi:include href="multiple-output.xml" />
<xi:include href="cross-compilation.xml" />
<xi:include href="configuration.xml" />
<xi:include href="functions.xml" />
<xi:include href="meta.xml" />
<xi:include href="languages-frameworks/index.xml" />
<xi:include href="platform-notes.xml" />
<xi:include href="package-notes.xml" />
<xi:include href="overlays.xml" />
<xi:include href="coding-conventions.xml" />
<xi:include href="submitting-changes.xml" />
<xi:include href="reviewing-contributions.xml" />
<xi:include href="contributing.xml" />
<xi:include href="preface.chapter.xml" />
<part>
<title>Using Nixpkgs</title>
<xi:include href="using/configuration.xml" />
<xi:include href="using/overlays.xml" />
<xi:include href="using/overrides.xml" />
<xi:include href="functions.xml" />
</part>
<part>
<title>Standard environment</title>
<xi:include href="stdenv/stdenv.xml" />
<xi:include href="stdenv/meta.xml" />
<xi:include href="stdenv/multiple-output.xml" />
<xi:include href="stdenv/cross-compilation.xml" />
<xi:include href="stdenv/platform-notes.xml" />
</part>
<part>
<title>Builders</title>
<xi:include href="builders/fetchers.xml" />
<xi:include href="builders/trivial-builders.xml" />
<xi:include href="builders/special.xml" />
<xi:include href="builders/images.xml" />
<xi:include href="languages-frameworks/index.xml" />
<xi:include href="builders/packages/index.xml" />
</part>
<part>
<title>Contributing to Nixpkgs</title>
<xi:include href="contributing/quick-start.xml" />
<xi:include href="contributing/coding-conventions.xml" />
<xi:include href="contributing/submitting-changes.xml" />
<xi:include href="contributing/reviewing-contributions.xml" />
<xi:include href="contributing/contributing-to-documentation.xml" />
</part>
</book>
View File
@ -1,442 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-meta">
<title>Meta-attributes</title>
<para>
Nix packages can declare <emphasis>meta-attributes</emphasis> that contain
information about a package such as a description, its homepage, its license,
and so on. For instance, the GNU Hello package has a <varname>meta</varname>
declaration like this:
<programlisting>
meta = with stdenv.lib; {
description = "A program that produces a familiar, friendly greeting";
longDescription = ''
GNU Hello is a program that prints "Hello, world!" when you run it.
It is fully customizable.
'';
homepage = https://www.gnu.org/software/hello/manual/;
license = licenses.gpl3Plus;
maintainers = [ maintainers.eelco ];
platforms = platforms.all;
};
</programlisting>
</para>
<para>
Meta-attributes are not passed to the builder of the package. Thus, a change
to a meta-attribute doesnt trigger a recompilation of the package. The
value of a meta-attribute must be a string.
</para>
<para>
The meta-attributes of a package can be queried from the command-line using
<command>nix-env</command>:
<screen>
<prompt>$ </prompt>nix-env -qa hello --json
{
"hello": {
"meta": {
"description": "A program that produces a familiar, friendly greeting",
"homepage": "https://www.gnu.org/software/hello/manual/",
"license": {
"fullName": "GNU General Public License version 3 or later",
"shortName": "GPLv3+",
"url": "http://www.fsf.org/licensing/licenses/gpl.html"
},
"longDescription": "GNU Hello is a program that prints \"Hello, world!\" when you run it.\nIt is fully customizable.\n",
"maintainers": [
"Ludovic Court\u00e8s &lt;ludo@gnu.org>"
],
"platforms": [
"i686-linux",
"x86_64-linux",
"armv5tel-linux",
"armv7l-linux",
"mips32-linux",
"x86_64-darwin",
"i686-cygwin",
"i686-freebsd",
"x86_64-freebsd",
"i686-openbsd",
"x86_64-openbsd"
],
"position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/default.nix:14"
},
"name": "hello-2.9",
"system": "x86_64-linux"
}
}
</screen>
<command>nix-env</command> knows about the <varname>description</varname>
field specifically:
<screen>
<prompt>$ </prompt>nix-env -qa hello --description
hello-2.3 A program that produces a familiar, friendly greeting
</screen>
</para>
<section xml:id="sec-standard-meta-attributes">
<title>Standard meta-attributes</title>
<para>
It is expected that each meta-attribute is one of the following:
</para>
<variablelist>
<varlistentry>
<term>
<varname>description</varname>
</term>
<listitem>
<para>
A short (one-line) description of the package. This is shown by
<command>nix-env -q --description</command> and also on the Nixpkgs
release pages.
</para>
<para>
Dont include a period at the end. Dont include newline characters.
Capitalise the first character. For brevity, dont repeat the name of
package — just describe what it does.
</para>
<para>
Wrong: <literal>"libpng is a library that allows you to decode PNG
images."</literal>
</para>
<para>
Right: <literal>"A library for decoding PNG images"</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>longDescription</varname>
</term>
<listitem>
<para>
An arbitrarily long description of the package.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>branch</varname>
</term>
<listitem>
<para>
Release branch. Used to specify that a package is not going to receive
updates that are not in this branch; for example, Linux kernel 3.0 is
supposed to be updated to 3.0.X, not 3.1.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>homepage</varname>
</term>
<listitem>
<para>
The packages homepage. Example:
<literal>https://www.gnu.org/software/hello/manual/</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>downloadPage</varname>
</term>
<listitem>
<para>
The page where a link to the current version can be found. Example:
<literal>https://ftp.gnu.org/gnu/hello/</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>changelog</varname>
</term>
<listitem>
<para>
A link or a list of links to the location of Changelog for a package.
A link may use expansion to refer to the correct changelog version.
Example:
<literal>"https://git.savannah.gnu.org/cgit/hello.git/plain/NEWS?h=v${version}"</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>license</varname>
</term>
<listitem>
<para>
The license, or licenses, for the package. One from the attribute set
defined in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
<filename>nixpkgs/lib/licenses.nix</filename></link>. At this moment
using both a list of licenses and a single license is valid. If the
license field is in the form of a list representation, then it means that
parts of the package are licensed differently. Each license should
preferably be referenced by their attribute. The non-list attribute value
can also be a space delimited string representation of the contained
attribute shortNames or spdxIds. The following are all valid examples:
<itemizedlist>
<listitem>
<para>
Single license referenced by attribute (preferred)
<literal>stdenv.lib.licenses.gpl3</literal>.
</para>
</listitem>
<listitem>
<para>
Single license referenced by its attribute shortName (frowned upon)
<literal>"gpl3"</literal>.
</para>
</listitem>
<listitem>
<para>
Single license referenced by its attribute spdxId (frowned upon)
<literal>"GPL-3.0"</literal>.
</para>
</listitem>
<listitem>
<para>
Multiple licenses referenced by attribute (preferred) <literal>with
stdenv.lib.licenses; [ asl20 free ofl ]</literal>.
</para>
</listitem>
<listitem>
<para>
Multiple licenses referenced as a space delimited string of attribute
shortNames (frowned upon) <literal>"asl20 free ofl"</literal>.
</para>
</listitem>
</itemizedlist>
For details, see <xref linkend='sec-meta-license'/>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>maintainers</varname>
</term>
<listitem>
<para>
A list of names and e-mail addresses of the maintainers of this Nix
expression. If you would like to be a maintainer of a package, you may
want to add yourself to
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/maintainers/maintainer-list.nix"><filename>nixpkgs/maintainers/maintainer-list.nix</filename></link>
and write something like <literal>[ stdenv.lib.maintainers.alice
stdenv.lib.maintainers.bob ]</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>priority</varname>
</term>
<listitem>
<para>
The <emphasis>priority</emphasis> of the package, used by
<command>nix-env</command> to resolve file name conflicts between
packages. See the Nix manual page for <command>nix-env</command> for
details. Example: <literal>"10"</literal> (a low-priority package).
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>platforms</varname>
</term>
<listitem>
<para>
The list of Nix platform types on which the package is supported. Hydra
builds packages according to the platform specified. If no platform is
specified, the package does not have prebuilt binaries. An example is:
<programlisting>
meta.platforms = stdenv.lib.platforms.linux;
</programlisting>
Attribute Set <varname>stdenv.lib.platforms</varname> defines
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/systems/doubles.nix">
various common lists</link> of platforms types.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>tests</varname>
</term>
<listitem>
<warning>
<para>
This attribute is special in that it is not actually under the
<literal>meta</literal> attribute set but rather under the
<literal>passthru</literal> attribute set. This is due to how
<literal>meta</literal> attributes work, and the fact that they
are supposed to contain only metadata, not derivations.
</para>
</warning>
<para>
An attribute set with as values tests. A test is a derivation, which
builds successfully when the test passes, and fails to build otherwise. A
derivation that is a test needs to have <literal>meta.timeout</literal>
defined.
</para>
<para>
The NixOS tests are available as <literal>nixosTests</literal> in
parameters of derivations. For instance, the OpenSMTPD derivation
includes lines similar to:
<programlisting>
{ /* ... */, nixosTests }:
{
# ...
passthru.tests = {
basic-functionality-and-dovecot-integration = nixosTests.opensmtpd;
};
}
</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>timeout</varname>
</term>
<listitem>
<para>
A timeout (in seconds) for building the derivation. If the derivation
takes longer than this time to build, it can fail due to breaking the
timeout. However, all computers do not have the same computing power,
hence some builders may decide to apply a multiplicative factor to this
value. When filling this value in, try to keep it approximately
consistent with other values already present in
<literal>nixpkgs</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>hydraPlatforms</varname>
</term>
<listitem>
<para>
The list of Nix platform types for which the Hydra instance at
<literal>hydra.nixos.org</literal> will build the package. (Hydra is the
Nix-based continuous build system.) It defaults to the value of
<varname>meta.platforms</varname>. Thus, the only reason to set
<varname>meta.hydraPlatforms</varname> is if you want
<literal>hydra.nixos.org</literal> to build the package on a subset of
<varname>meta.platforms</varname>, or not at all, e.g.
<programlisting>
meta.platforms = stdenv.lib.platforms.linux;
meta.hydraPlatforms = [];
</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>broken</varname>
</term>
<listitem>
<para>
If set to <literal>true</literal>, the package is marked as “broken”,
meaning that it wont show up in <literal>nix-env -qa</literal>, and
cannot be built or installed. Such packages should be removed from
Nixpkgs eventually unless they are fixed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>updateWalker</varname>
</term>
<listitem>
<para>
If set to <literal>true</literal>, the package is tested to be updated
correctly by the <literal>update-walker.sh</literal> script without
additional settings. Such packages have <varname>meta.version</varname>
set and their homepage (or the page specified by
<varname>meta.downloadPage</varname>) contains a direct link to the
package tarball.
</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="sec-meta-license">
<title>Licenses</title>
<para>
The <varname>meta.license</varname> attribute should preferrably contain a
value from <varname>stdenv.lib.licenses</varname> defined in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
<filename>nixpkgs/lib/licenses.nix</filename></link>, or in-place license
description of the same format if the license is unlikely to be useful in
another expression.
</para>
<para>
Although it's typically better to indicate the specific license, a few
generic options are available:
<variablelist>
<varlistentry>
<term>
<varname>stdenv.lib.licenses.free</varname>, <varname>"free"</varname>
</term>
<listitem>
<para>
Catch-all for free software licenses not listed above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>stdenv.lib.licenses.unfreeRedistributable</varname>, <varname>"unfree-redistributable"</varname>
</term>
<listitem>
<para>
Unfree package that can be redistributed in binary form. That is, its
legal to redistribute the <emphasis>output</emphasis> of the derivation.
This means that the package can be included in the Nixpkgs channel.
</para>
<para>
Sometimes proprietary software can only be redistributed unmodified.
Make sure the builder doesnt actually modify the original binaries;
otherwise were breaking the license. For instance, the NVIDIA X11
drivers can be redistributed unmodified, but our builder applies
<command>patchelf</command> to make them work. Thus, its license is
<varname>"unfree"</varname> and it cannot be included in the Nixpkgs
channel.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>stdenv.lib.licenses.unfree</varname>, <varname>"unfree"</varname>
</term>
<listitem>
<para>
Unfree package that cannot be redistributed. You can build it yourself,
but you cannot redistribute the output of the derivation. Thus it cannot
be included in the Nixpkgs channel.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>stdenv.lib.licenses.unfreeRedistributableFirmware</varname>, <varname>"unfree-redistributable-firmware"</varname>
</term>
<listitem>
<para>
This package supplies unfree, redistributable firmware. This is a
separate value from <varname>unfree-redistributable</varname> because
not everybody cares whether firmware is free.
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
</chapter>
View File
@ -1,330 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY ndash "&#x2013;"> <!-- @vcunat likes to use this one ;-) -->
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-multiple-output">
<title>Multiple-output packages</title>
<section xml:id="sec-multiple-outputs-introduction">
<title>Introduction</title>
<para>
The Nix language allows a derivation to produce multiple outputs, which is
similar to what is utilized by other Linux distribution packaging systems.
The outputs reside in separate Nix store paths, so they can be mostly
handled independently of each other, including passing to build inputs,
garbage collection or binary substitution. The exception is that building
from source always produces all the outputs.
</para>
<para>
The main motivation is to save disk space by reducing runtime closure sizes;
consequently also sizes of substituted binaries get reduced. Splitting can
be used to have more granular runtime dependencies, for example the typical
reduction is to split away development-only files, as those are typically
not needed during runtime. As a result, closure sizes of many packages can
get reduced to a half or even much less.
</para>
<note>
<para>
The reduction effects could be instead achieved by building the parts in
completely separate derivations. That would often additionally reduce
build-time closures, but it tends to be much harder to write such
derivations, as build systems typically assume all parts are being built at
once. This compromise approach of single source package producing multiple
binary packages is also utilized often by rpm and deb.
</para>
</note>
</section>
<section xml:id="sec-multiple-outputs-installing">
<title>Installing a split package</title>
<para>
When installing a package via <varname>systemPackages</varname> or
<command>nix-env</command> you have several options:
</para>
<itemizedlist>
<listitem>
<para>
You can install particular outputs explicitly, as each is available in the
Nix language as an attribute of the package. The
<varname>outputs</varname> attribute contains a list of output names.
</para>
</listitem>
<listitem>
<para>
You can let it use the default outputs. These are handled by
<varname>meta.outputsToInstall</varname> attribute that contains a list of
output names.
</para>
<para>
TODO: more about tweaking the attribute, etc.
</para>
</listitem>
<listitem>
<para>
NixOS provides configuration option
<varname>environment.extraOutputsToInstall</varname> that allows adding
extra outputs of <varname>environment.systemPackages</varname> atop the
default ones. It's mainly meant for documentation and debug symbols, and
it's also modified by specific options.
</para>
<note>
<para>
At this moment there is no similar configurability for packages installed
by <command>nix-env</command>. You can still use approach from
<xref linkend="sec-modify-via-packageOverrides" /> to override
<varname>meta.outputsToInstall</varname> attributes, but that's a rather
inconvenient way.
</para>
</note>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-multiple-outputs-using-split-packages">
<title>Using a split package</title>
<para>
In the Nix language the individual outputs can be reached explicitly as
attributes, e.g. <varname>coreutils.info</varname>, but the typical case is
just using packages as build inputs.
</para>
<para>
When a multiple-output derivation gets into a build input of another
derivation, the <varname>dev</varname> output is added if it exists,
otherwise the first output is added. In addition to that,
<varname>propagatedBuildOutputs</varname> of that package which by default
contain <varname>$outputBin</varname> and <varname>$outputLib</varname> are
also added. (See <xref linkend="multiple-output-file-type-groups" />.)
</para>
<para>
In some cases it may be desirable to combine different outputs under a
single store path. A function <literal>symlinkJoin</literal> can be used to
do this. (Note that it may negate some closure size benefits of using a
multiple-output package.)
</para>
</section>
<section xml:id="sec-multiple-outputs-">
<title>Writing a split derivation</title>
<para>
Here you find how to write a derivation that produces multiple outputs.
</para>
<para>
In nixpkgs there is a framework supporting multiple-output derivations. It
tries to cover most cases by default behavior. You can find the source
separated in
&lt;<filename>nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh</filename>&gt;;
it's relatively well-readable. The whole machinery is triggered by defining
the <varname>outputs</varname> attribute to contain the list of desired
output names (strings).
</para>
<programlisting>outputs = [ "bin" "dev" "out" "doc" ];</programlisting>
<para>
Often such a single line is enough. For each output, an equally named
environment variable is passed to the builder; it contains the path in the
Nix store for that output. Typically you also want to have the main
<varname>out</varname> output, as it catches any files that didn't go
anywhere else.
</para>
<note>
<para>
There is a special handling of the <varname>debug</varname> output,
described at <xref linkend="stdenv-separateDebugInfo" />.
</para>
</note>
<section xml:id="multiple-output-file-binaries-first-convention">
<title><quote>Binaries first</quote></title>
<para>
A commonly adopted convention in <literal>nixpkgs</literal> is that
executables provided by the package are contained within its first output.
This convention allows dependent packages to reference the executables
provided by packages in a uniform manner. For instance, knowing that the
<literal>perl</literal> package contains a <literal>perl</literal>
executable, it can be referenced as
<literal>${pkgs.perl}/bin/perl</literal> within a Nix derivation that needs
to execute a Perl script.
</para>
<para>
The <literal>glibc</literal> package is a deliberate exception to the
<quote>binaries first</quote> convention. It has <literal>libs</literal>
as its first output, allowing the libraries provided by
<literal>glibc</literal> to be referenced directly (e.g.
<literal>${stdenv.glibc}/lib/ld-linux-x86-64.so.2</literal>). The
executables provided by <literal>glibc</literal> can be accessed via its
<literal>bin</literal> attribute (e.g.
<literal>${stdenv.glibc.bin}/bin/ldd</literal>).
</para>
<para>
The reason <literal>glibc</literal> deviates from the convention is that
referencing a library provided by <literal>glibc</literal> is a
very common operation among Nix packages. For instance, third-party
executables packaged by Nix are typically patched and relinked with the
relevant version of <literal>glibc</literal> libraries from Nix packages
(please see the documentation on
<link xlink:href="https://nixos.org/patchelf.html">patchelf</link> for more
details).
</para>
</section>
<section xml:id="multiple-output-file-type-groups">
<title>File type groups</title>
<para>
The support code currently recognizes some particular kinds of outputs and
either instructs the build system of the package to put files into their
desired outputs or it moves the files during the fixup phase. Each group of
file types has an <varname>outputFoo</varname> variable specifying the
output name where they should go. If that variable isn't defined by the
derivation writer, it is guessed &ndash; a default output name is chosen,
falling back to other possibilities if that output isn't defined. A short
sketch follows the list below.
</para>
<variablelist>
<varlistentry>
<term>
<varname>$outputDev</varname>
</term>
<listitem>
<para>
is for development-only files. These include C(++) headers, pkg-config,
cmake and aclocal files. They go to <varname>dev</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputBin</varname>
</term>
<listitem>
<para>
is meant for user-facing binaries, typically residing in
<filename>bin/</filename>. They go to <varname>bin</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputLib</varname>
</term>
<listitem>
<para>
is meant for libraries, typically residing in <filename>lib/</filename>
and <filename>libexec/</filename>. They go to <varname>lib</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputDoc</varname>
</term>
<listitem>
<para>
is for user documentation, typically residing in
<filename>share/doc/</filename>. It goes to <varname>doc</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputDevdoc</varname>
</term>
<listitem>
<para>
is for <emphasis>developer</emphasis> documentation. Currently we count
gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname>
or is removed (!) by default. This is because e.g. gtk-doc tends to be
rather large and completely unused by nixpkgs users.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputMan</varname>
</term>
<listitem>
<para>
is for man pages (except for section 3). They go to
<varname>man</varname> or <varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputDevman</varname>
</term>
<listitem>
<para>
is for section 3 man pages. They go to <varname>devman</varname> or
<varname>$outputMan</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>$outputInfo</varname>
</term>
<listitem>
<para>
is for info pages. They go to <varname>info</varname> or
<varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>
</variablelist>
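  <para>
   For example, a derivation can redirect one of these groups by setting the
   corresponding variable explicitly. A minimal sketch (the choice of outputs
   is illustrative):
  </para>
<programlisting>outputs = [ "bin" "dev" "out" "doc" ];
# Put info pages into the "doc" output instead of the default
# "info"/"$outputBin" location:
outputInfo = "doc";</programlisting>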
</section>
<section xml:id="sec-multiple-outputs-caveats">
<title>Common caveats</title>
<itemizedlist>
<listitem>
<para>
Some configure scripts don't like some of the parameters passed by
default by the framework, e.g. <literal>--docdir=/foo/bar</literal>. You
can disable this by setting <literal>setOutputFlags = false;</literal>.
</para>
</listitem>
<listitem>
<para>
The outputs of a single derivation can retain references to each other,
but note that circular references are not allowed. (And each
strongly-connected component would act as a single output anyway.)
</para>
</listitem>
<listitem>
<para>
Most split packages contain their core functionality in libraries.
These libraries tend to refer to various kinds of data that typically gets
into <varname>out</varname>, e.g. locale strings, so there is often no
advantage in separating the libraries into <varname>lib</varname>, as
keeping them in <varname>out</varname> is easier.
</para>
</listitem>
<listitem>
<para>
Some packages have hidden assumptions on install paths, which complicates
splitting.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<!--Writing a split derivation-->
</chapter>

View File

@ -1,195 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-overlays">
<title>Overlays</title>
<para>
This chapter describes how to extend and change Nixpkgs using overlays.
Overlays are used to add layers in the fixed-point used by Nixpkgs to compose
the set of all packages.
</para>
<para>
Nixpkgs can be configured with a list of overlays, which are applied in
order. This means that the order of the overlays can be significant if
multiple layers override the same package.
</para>
<!--============================================================-->
<section xml:id="sec-overlays-install">
<title>Installing overlays</title>
<para>
The list of overlays can be set either explicitly in a Nix expression, or
through <literal>&lt;nixpkgs-overlays></literal> or user configuration
files.
</para>
<section xml:id="sec-overlays-argument">
<title>Set overlays in NixOS or Nix expressions</title>
<para>
On a NixOS system the value of the <literal>nixpkgs.overlays</literal>
option, if present, is passed to the system Nixpkgs directly as an
argument. Note that this does not affect the overlays for non-NixOS
operations (e.g. <literal>nix-env</literal>), which are
<link xlink:href="#sec-overlays-lookup">looked</link> up independently.
</para>
<para>
The list of overlays can be passed explicitly when importing nixpkgs, for
example <literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ];
}</literal>.
</para>
<para>
Further overlays can be added by calling the <literal>pkgs.extend</literal>
or <literal>pkgs.appendOverlays</literal> functions, although it is often
preferable to avoid them, because they recompute the Nixpkgs fixpoint,
which is somewhat expensive to do.
</para>
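  <para>
   A minimal sketch of extending an already imported package set (the added
   attribute is illustrative):
  </para>
<programlisting>let
  pkgs = import &lt;nixpkgs> { };
  extended = pkgs.extend (self: super: {
    myHello = super.hello;
  });
in
  extended.myHello</programlisting>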
</section>
<section xml:id="sec-overlays-lookup">
<title>Install overlays via configuration lookup</title>
<para>
The list of overlays is determined as follows.
</para>
<para>
<orderedlist>
<listitem>
<para>
First, if an
<link xlink:href="#sec-overlays-argument"><varname>overlays</varname>
argument</link> to the Nixpkgs function itself is given, then that is
used and no path lookup will be performed.
</para>
</listitem>
<listitem>
<para>
Otherwise, if the Nix path entry
<literal>&lt;nixpkgs-overlays></literal> exists, we look for overlays at
that path, as described below.
</para>
<para>
See the section on <literal>NIX_PATH</literal> in the Nix manual for
more details on how to set a value for
<literal>&lt;nixpkgs-overlays>.</literal>
</para>
</listitem>
<listitem>
<para>
If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<filename>~/.config/nixpkgs/overlays/</filename> exists, then we look
for overlays at that path, as described below. It is an error if both
exist.
</para>
</listitem>
</orderedlist>
</para>
<para>
If we are looking for overlays at a path, then there are two cases:
<itemizedlist>
<listitem>
<para>
If the path is a file, then the file is imported as a Nix expression and
used as the list of overlays.
</para>
</listitem>
<listitem>
<para>
If the path is a directory, then we take the content of the directory,
order it lexicographically, and attempt to interpret each as an overlay
by:
<itemizedlist>
<listitem>
<para>
Importing the file, if it is a <literal>.nix</literal> file.
</para>
</listitem>
<listitem>
<para>
Importing a top-level <filename>default.nix</filename> file, if it is
a directory.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
<para>
Because overlays that are set in NixOS configuration do not affect
non-NixOS operations such as <literal>nix-env</literal>, the
<filename>overlays.nix</filename> option provides a convenient way to use
the same overlays for a NixOS system configuration and user configuration:
the same file can be used as <filename>overlays.nix</filename> and imported
as the value of <literal>nixpkgs.overlays</literal>.
</para>
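  <para>
   A minimal sketch of this sharing pattern (the overlay contents and the
   absolute path to the home directory are illustrative):
  </para>
<programlisting># ~/.config/nixpkgs/overlays.nix
[
  (self: super: {
    hello = super.hello.overrideAttrs (oldAttrs: { doCheck = false; });
  })
]</programlisting>
<programlisting># /etc/nixos/configuration.nix
{ ... }: {
  nixpkgs.overlays = import /home/alice/.config/nixpkgs/overlays.nix;
}</programlisting>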
<!-- TODO: Example of sharing overlays between NixOS configuration
and configuration lookup. Also reference the example
from the sec-overlays-argument paragraph about NixOS.
-->
</section>
</section>
<!--============================================================-->
<section xml:id="sec-overlays-definition">
<title>Defining overlays</title>
<para>
Overlays are Nix functions which accept two arguments, conventionally called
<varname>self</varname> and <varname>super</varname>, and return a set of
packages. For example, the following is a valid overlay.
</para>
<programlisting>
self: super:
{
boost = super.boost.override {
python = self.python3;
};
rr = super.callPackage ./pkgs/rr {
stdenv = self.stdenv_32bit;
};
}
</programlisting>
<para>
The first argument (<varname>self</varname>) corresponds to the final
package set. You should use this set for the dependencies of all packages
specified in your overlay. For example, all the dependencies of
<varname>rr</varname> in the example above come from
<varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.
</para>
<para>
The second argument (<varname>super</varname>) corresponds to the result of
the evaluation of the previous stages of Nixpkgs. It does not contain any of
the packages added by the current overlay, nor any of the following
overlays. This set should be used either to refer to packages you wish to
override, or to access functions defined in Nixpkgs. For example, the
original recipe of <varname>boost</varname> in the above example, comes from
<varname>super</varname>, as well as the <varname>callPackage</varname>
function.
</para>
<para>
The value returned by this function should be a set similar to
<filename>pkgs/top-level/all-packages.nix</filename>, containing overridden
and/or new packages.
</para>
<para>
Overlays are similar to other methods for customizing Nixpkgs, in particular
the <literal>packageOverrides</literal> attribute described in
<xref linkend="sec-modify-via-packageOverrides"/>. Indeed,
<literal>packageOverrides</literal> acts as an overlay with only the
<varname>super</varname> argument. It is therefore appropriate for basic
use, but overlays are more powerful and easier to distribute.
</para>
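  <para>
   A minimal sketch of the same <varname>boost</varname> override expressed
   as <literal>packageOverrides</literal> in the user configuration
   (<filename>~/.config/nixpkgs/config.nix</filename>); note that only the
   previous package set is available as the function argument:
  </para>
<programlisting>{
  packageOverrides = pkgs: {
    boost = pkgs.boost.override {
      python = pkgs.python3;
    };
  };
}</programlisting>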
</section>
</chapter>

View File

@ -1,9 +1,22 @@
.docbook .xref img[src^=images\/callouts\/],
.screen img,
.programlisting img {
.programlisting img,
.literallayout img,
.synopsis img {
width: 1em;
}
.calloutlist img {
width: 1.5em;
}
.prompt,
.screen img,
.programlisting img,
.literallayout img,
.synopsis img {
-moz-user-select: none;
-webkit-user-select: none;
-ms-user-select: none;
user-select: none;
}

View File

@ -1,590 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-package-notes">
<title>Package Notes</title>
<para>
This chapter contains information about how to use and maintain the Nix
expressions for a number of specific packages, such as the Linux kernel or
X.org.
</para>
<!--============================================================-->
<section xml:id="sec-linux-kernel">
<title>Linux kernel</title>
<para>
The Nix expressions to build the Linux kernel are in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel"><filename>pkgs/os-specific/linux/kernel</filename></link>.
</para>
<para>
The function that builds the kernel has an argument
<varname>kernelPatches</varname> which should be a list of <literal>{name,
patch, extraConfig}</literal> attribute sets, where <varname>name</varname>
is the name of the patch (which is included in the kernel's
<varname>meta.description</varname> attribute), <varname>patch</varname> is
the patch itself (possibly compressed), and <varname>extraConfig</varname>
(optional) is a string specifying extra options to be concatenated to the
kernel configuration file (<filename>.config</filename>).
</para>
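  <para>
   A minimal sketch of such a list (the patch name, file and config line are
   hypothetical):
  </para>
<programlisting>kernelPatches = [
  {
    name = "foo-fix";
    patch = ./patches/foo-fix.patch;
    extraConfig = ''
      FOO_SUPPORT y
    '';
  }
];</programlisting>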
<para>
The kernel derivation exports an attribute <varname>features</varname>
specifying whether optional functionality is or isn't enabled. This is
used in NixOS to implement kernel-specific behaviour. For instance, if the
kernel has the <varname>iwlwifi</varname> feature (i.e. has built-in support
for Intel wireless chipsets), then NixOS doesn't have to build the
external <varname>iwlwifi</varname> package:
<programlisting>
modulesTree = [kernel]
++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
++ ...;
</programlisting>
</para>
<para>
How to add a new (major) version of the Linux kernel to Nixpkgs:
<orderedlist>
<listitem>
<para>
Copy the old Nix expression (e.g. <filename>linux-2.6.21.nix</filename>)
to the new one (e.g. <filename>linux-2.6.22.nix</filename>) and update
it.
</para>
</listitem>
<listitem>
<para>
Add the new kernel to <filename>all-packages.nix</filename> (e.g., create
an attribute <varname>kernel_2_6_22</varname>).
</para>
</listitem>
<listitem>
<para>
Now we're going to update the kernel configuration. First unpack the
kernel. Then for each supported platform (<literal>i686</literal>,
<literal>x86_64</literal>, <literal>uml</literal>) do the following:
<orderedlist>
<listitem>
<para>
Make a copy of the old config (e.g.
<filename>config-2.6.21-i686-smp</filename>) to the new one (e.g.
<filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
<listitem>
<para>
Copy the config file for this platform (e.g.
<filename>config-2.6.22-i686-smp</filename>) to
<filename>.config</filename> in the kernel source tree.
</para>
</listitem>
<listitem>
<para>
Run <literal>make oldconfig
ARCH=<replaceable>{i386,x86_64,um}</replaceable></literal> and answer
all questions. (For the uml configuration, also add
<literal>SHELL=bash</literal>.) Make sure to keep the configuration
consistent between platforms (i.e. don't enable some feature on
<literal>i686</literal> and disable it on <literal>x86_64</literal>).
</para>
</listitem>
<listitem>
<para>
If needed you can also run <literal>make menuconfig</literal>:
<screen>
<prompt>$ </prompt>nix-env -i ncurses
<prompt>$ </prompt>export NIX_CFLAGS_LINK=-lncurses
<prompt>$ </prompt>make menuconfig ARCH=<replaceable>arch</replaceable></screen>
</para>
</listitem>
<listitem>
<para>
Copy <filename>.config</filename> over the new config file (e.g.
<filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
</orderedlist>
</para>
</listitem>
<listitem>
<para>
Test building the kernel: <literal>nix-build -A kernel_2_6_22</literal>.
If it compiles, ship it! For extra credit, try booting NixOS with it.
</para>
</listitem>
<listitem>
<para>
It may be that the new kernel requires updating the external kernel
modules and kernel-dependent packages listed in the
<varname>linuxPackagesFor</varname> function in
<filename>all-packages.nix</filename> (such as the NVIDIA drivers, AUFS,
etc.). If the updated packages aren't backwards compatible with older
kernels, you may need to keep the older versions around.
</para>
</listitem>
</orderedlist>
</para>
</section>
<!--============================================================-->
<section xml:id="sec-xorg">
<title>X.org</title>
<para>
The Nix expressions for the X.org packages reside in
<filename>pkgs/servers/x11/xorg/default.nix</filename>. This file is
automatically generated from lists of tarballs in an X.org release. As such
it should not be modified directly; rather, you should modify the lists, the
generator script or the file
<filename>pkgs/servers/x11/xorg/overrides.nix</filename>, in which you can
override or add to the derivations produced by the generator.
</para>
<para>
The generator is invoked as follows:
<screen>
<prompt>$ </prompt>cd pkgs/servers/x11/xorg
<prompt>$ </prompt>cat tarballs-7.5.list extra.list old.list \
| perl ./generate-expr-from-tarballs.pl
</screen>
For each of the tarballs in the <filename>.list</filename> files, the script
downloads it, unpacks it, and searches its <filename>configure.ac</filename>
and <filename>*.pc.in</filename> files for dependencies. This information is
used to generate <filename>default.nix</filename>. The generator caches
downloaded tarballs between runs. Pay close attention to the <literal>NOT
FOUND: <replaceable>name</replaceable></literal> messages at the end of the
run, since they may indicate missing dependencies. (Some might be optional
dependencies, however.)
</para>
<para>
A file like <filename>tarballs-7.5.list</filename> contains all tarballs in
an X.org release. It can be generated like this:
<screen>
<prompt>$ </prompt>export i="mirror://xorg/X11R7.4/src/everything/"
<prompt>$ </prompt>cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
| perl -e 'while (&lt;>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
| sort > tarballs-7.4.list
</screen>
<filename>extra.list</filename> contains libraries that aren't part of
X.org proper, but are closely related to it, such as
<literal>libxcb</literal>. <filename>old.list</filename> contains some
packages that were removed from X.org, but are still needed by some people
or by other packages (such as <varname>imake</varname>).
</para>
<para>
If the expression for a package requires derivation attributes that the
generator cannot figure out automatically (say, <varname>patches</varname>
or a <varname>postInstall</varname> hook), you should modify
<filename>pkgs/servers/x11/xorg/overrides.nix</filename>.
</para>
</section>
<!--============================================================-->
<!--
<section xml:id="sec-package-notes-gnome">
<title>Gnome</title>
<para>* Expression is auto-generated</para>
<para>* How to update</para>
</section>
-->
<!--============================================================-->
<!--
<section xml:id="sec-package-notes-gcc">
<title>GCC</title>
<para></para>
</section>
-->
<!--============================================================-->
<section xml:id="sec-eclipse">
<title>Eclipse</title>
<para>
The Nix expressions related to the Eclipse platform and IDE are in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse"><filename>pkgs/applications/editors/eclipse</filename></link>.
</para>
<para>
Nixpkgs provides a number of packages that will install Eclipse in its
various forms. These range from the bare-bones Eclipse Platform to the more
fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are
often available. It is possible to list available Eclipse packages by
issuing the command:
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses --description
</screen>
Once an Eclipse variant is installed it can be run using the
<command>eclipse</command> command, as expected. From within Eclipse it is
then possible to install plugins in the usual manner by either manually
specifying an Eclipse update site or by installing the Marketplace Client
plugin and using it to discover and install other plugins. This installation
method provides an Eclipse installation that closely resembles a manually
installed Eclipse.
</para>
<para>
If you prefer to install plugins in a more declarative manner, then Nixpkgs
also offers a number of Eclipse plugins that can be installed in an
<emphasis>Eclipse environment</emphasis>. This type of environment is
created using the function <varname>eclipseWithPlugins</varname> found
inside the <varname>nixpkgs.eclipses</varname> attribute set. This function
takes as argument <literal>{ eclipse, plugins ? [], jvmArgs ? [] }</literal>
where <varname>eclipse</varname> is one of the Eclipse packages described
above, <varname>plugins</varname> is a list of plugin derivations, and
<varname>jvmArgs</varname> is a list of arguments given to the JVM running
Eclipse. For example, say you wish to install the latest Eclipse
Platform with the popular Eclipse Color Theme plugin and also allow Eclipse
to use more RAM. You could then add
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [ plugins.color-theme ];
};
}
</screen>
to your Nixpkgs configuration
(<filename>~/.config/nixpkgs/config.nix</filename>) and install it by
running <command>nix-env -f '&lt;nixpkgs&gt;' -iA myEclipse</command> and
afterward run Eclipse as usual. It is possible to find out which plugins are
available for installation using <varname>eclipseWithPlugins</varname> by
running
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses.plugins --description
</screen>
</para>
<para>
If there is a need to install plugins that are not available in Nixpkgs then
it may be possible to define these plugins outside Nixpkgs using the
<varname>buildEclipseUpdateSite</varname> and
<varname>buildEclipsePlugin</varname> functions found in the
<varname>nixpkgs.eclipses.plugins</varname> attribute set. Use the
<varname>buildEclipseUpdateSite</varname> function to install a plugin
distributed as an Eclipse update site. This function takes <literal>{ name,
src }</literal> as argument where <literal>src</literal> indicates the
Eclipse update site archive. All Eclipse features and plugins within the
downloaded update site will be installed. When an update site archive is not
available then the <varname>buildEclipsePlugin</varname> function can be
used to install a plugin that consists of a pair of feature and plugin JARs.
This function takes an argument <literal>{ name, srcFeature, srcPlugin
}</literal> where <literal>srcFeature</literal> and
<literal>srcPlugin</literal> are the feature and plugin JARs, respectively.
</para>
<para>
Expanding the previous example with two plugins using the above functions we
have
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [
plugins.color-theme
(plugins.buildEclipsePlugin {
name = "myplugin1-1.0";
srcFeature = fetchurl {
url = "http://…/features/myplugin1.jar";
sha256 = "123…";
};
srcPlugin = fetchurl {
url = "http://…/plugins/myplugin1.jar";
sha256 = "123…";
};
});
(plugins.buildEclipseUpdateSite {
name = "myplugin2-1.0";
src = fetchurl {
stripRoot = false;
url = "http://…/myplugin2.zip";
sha256 = "123…";
};
});
];
};
}
</screen>
</para>
</section>
<section xml:id="sec-elm">
<title>Elm</title>
<para>
To start a development environment, run <command>nix-shell -p elmPackages.elm elmPackages.elm-format</command>.
</para>
<para>
To update Elm compiler, see
<filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
</para>
<para>
To package Elm applications,
<link xlink:href="https://github.com/hercules-ci/elm2nix#elm2nix">read about
elm2nix</link>.
</para>
</section>
<section xml:id="sec-kakoune">
<title>Kakoune</title>
<para>
Kakoune can be built to autoload plugins:
<programlisting>(kakoune.override {
configure = {
plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
};
})</programlisting>
</para>
</section>
<section xml:id="sec-shell-helpers">
<title>Interactive shell helpers</title>
<para>
Some packages provide shell integration to be more useful. But unlike
other systems, Nix doesn't have a standard share directory location. This is
why a number of <command>PACKAGE-share</command> scripts are shipped that
print the location of the corresponding shared folder. The current list of
such packages is as follows:
<itemizedlist>
<listitem>
<para>
<literal>autojump</literal>: <command>autojump-share</command>
</para>
</listitem>
<listitem>
<para>
<literal>fzf</literal>: <command>fzf-share</command>
</para>
</listitem>
</itemizedlist>
E.g. <literal>autojump</literal> can then be used in <filename>.bashrc</filename> like this:
<screen>
source "$(autojump-share)/autojump.bash"
</screen>
</para>
</section>
<section xml:id="sec-weechat">
<title>Weechat</title>
<para>
Weechat can be configured to include your choice of plugins, reducing its
closure size from the default configuration which includes all available
plugins. To make use of this functionality, install an expression that
overrides its configuration such as
<programlisting>weechat.override { configure = { availablePlugins, ... }: {
    plugins = with availablePlugins; [ python perl ];
  };
}</programlisting>
If the <literal>configure</literal> function returns an attrset without the
<literal>plugins</literal> attribute, <literal>availablePlugins</literal>
will be used automatically.
</para>
<para>
The plugins currently available are <literal>python</literal>,
<literal>perl</literal>, <literal>ruby</literal>, <literal>guile</literal>,
<literal>tcl</literal> and <literal>lua</literal>.
</para>
<para>
The python and perl plugins allow the addition of extra libraries. For
instance, the <literal>inotify.py</literal> script in weechat-scripts
requires D-Bus or libnotify, and the <literal>fish.py</literal> script
requires pycrypto. To use these scripts, use the plugin's
<literal>withPackages</literal> attribute:
<programlisting>weechat.override { configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [
(python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
];
};
}
</programlisting>
</para>
<para>
In order to also keep all default plugins installed, it is possible to use
the following method:
<programlisting>weechat.override { configure = { availablePlugins, ... }: {
plugins = builtins.attrValues (availablePlugins // {
python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
});
}; }
</programlisting>
</para>
<para>
WeeChat allows setting defaults on startup using the
<literal>--run-command</literal> option. The <literal>configure</literal>
method can be used to pass commands to the program:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add freenode chat.freenode.org
'';
};
}</programlisting>
Further values can be added to the list of commands when running
<literal>weechat --run-command "your-commands"</literal>.
</para>
<para>
Additionally it's possible to specify scripts to be loaded when starting
<literal>weechat</literal>. These will be loaded before the commands from
<literal>init</literal>:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
scripts = with pkgs.weechatScripts; [
weechat-xmpp weechat-matrix-bridge wee-slack
];
init = ''
/set plugins.var.python.jabber.key "val"
'';
};
}</programlisting>
</para>
<para>
In <literal>nixpkgs</literal> there's a subpackage which contains
derivations for WeeChat scripts. Such derivations expect a
<literal>passthru.scripts</literal> attribute which contains a list of all
scripts inside the store path. Furthermore all scripts have to live in
<literal>$out/share</literal>. An example derivation looks like this:
<programlisting>{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "exemplary-weechat-script";
src = fetchurl {
url = "https://scripts.tld/your-scripts.tar.gz";
sha256 = "...";
};
passthru.scripts = [ "foo.py" "bar.lua" ];
installPhase = ''
mkdir -p $out/share
cp foo.py $out/share
cp bar.lua $out/share
'';
}</programlisting>
</para>
</section>
<section xml:id="sec-ibus-typing-booster">
<title>ibus-engines.typing-booster</title>
<para>
This package is an ibus-based completion method to speed up typing.
</para>
<section xml:id="sec-ibus-typing-booster-activate">
<title>Activating the engine</title>
<para>
IBus needs to be configured accordingly to activate
<literal>typing-booster</literal>. The configuration depends on the desktop
manager in use. For detailed instructions, please refer to the
<link xlink:href="https://mike-fabian.github.io/ibus-typing-booster/documentation.html">upstream
docs</link>.
</para>
<para>
On NixOS you need to explicitly enable <literal>ibus</literal> with given
engines before customizing your desktop to use
<literal>typing-booster</literal>. This can be achieved using the
<literal>ibus</literal> module:
<programlisting>{ pkgs, ... }: {
i18n.inputMethod = {
enabled = "ibus";
ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
};
}</programlisting>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-customize-hunspell">
<title>Using custom hunspell dictionaries</title>
<para>
The IBus engine is based on <literal>hunspell</literal> to support
completion in many languages. By default the dictionaries
<literal>de-de</literal>, <literal>en-us</literal>, <literal>fr-moderne</literal>,
<literal>es-es</literal>, <literal>it-it</literal>,
<literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add
another dictionary, the package can be overridden like this:
<programlisting>ibus-engines.typing-booster.override {
langs = [ "de-at" "en-gb" ];
}</programlisting>
</para>
<para>
<emphasis>Note: each language passed to <literal>langs</literal> must be an
attribute name in <literal>pkgs.hunspellDicts</literal>.</emphasis>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-emoji-picker">
<title>Built-in emoji picker</title>
<para>
The <literal>ibus-engines.typing-booster</literal> package contains a
program named <literal>emoji-picker</literal>. To display all emojis
correctly, a special font such as <literal>noto-fonts-emoji</literal> is
needed:
</para>
<para>
On NixOS it can be installed using the following expression:
<programlisting>{ pkgs, ... }: {
fonts.fonts = with pkgs; [ noto-fonts-emoji ];
}</programlisting>
</para>
</section>
</section>
<section xml:id="sec-nginx">
<title>Nginx</title>
<para>
<link xlink:href="https://nginx.org/">Nginx</link> is a
reverse proxy and lightweight webserver.
</para>
<section xml:id="sec-nginx-etag">
<title>ETags on static files served from the Nix store</title>
<para>
HTTP has a couple different mechanisms for caching to prevent
clients from having to download the same content repeatedly
if a resource has not changed since the last time it was requested.
When nginx is used as a server for static files, it implements
the caching mechanism based on the
<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><literal>Last-Modified</literal></link>
response header automatically; unfortunately, it works by using
filesystem timestamps to determine the value of the
<literal>Last-Modified</literal> header. This doesn't give the
desired behavior when the file is in the Nix store, because all
file timestamps are set to 0 (for reasons related to build
reproducibility).
</para>
<para>
Fortunately, HTTP supports an alternative (and more effective)
caching mechanism: the
<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag"><literal>ETag</literal></link>
response header. The value of the <literal>ETag</literal> header
specifies some identifier for the particular content that the
server is sending (e.g. a hash). When a client makes a second
request for the same resource, it sends that value back in an
<literal>If-None-Match</literal> header. If the ETag value is
unchanged, then the server does not need to resend the content.
</para>
<para>
As of NixOS 19.09, the nginx package in Nixpkgs is patched such
that when nginx serves a file out of <filename>/nix/store</filename>,
the hash in the store path is used as the <literal>ETag</literal>
header in the HTTP response, thus providing proper caching functionality.
This happens automatically; you do not need to modify any
configuration to get this behavior.
</para>
</section>
</section>
</chapter>

View File

@ -1,482 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="package-specific-user-notes">
<title>Package-specific usage notes</title>
<para>
This chapter includes some notes
that apply to specific packages and should
answer some of the frequently asked questions
related to Nixpkgs use.
Some useful information related to package use
can be found in <link linkend="chap-package-notes">package-specific development notes</link>.
</para>
<section xml:id="opengl">
<title>OpenGL</title>
<para>
Packages that use OpenGL have the NixOS desktop as their primary target. The
current solution for loading the GPU-specific drivers is based on
<literal>libglvnd</literal> and looks for the driver implementation in
<literal>LD_LIBRARY_PATH</literal>. If you are using a non-NixOS
GNU/Linux/X11 desktop with free software video drivers, consider launching
OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of
<literal>libglvnd</literal> and <literal>mesa_drivers</literal> in
<literal>LD_LIBRARY_PATH</literal>. For proprietary video drivers you might
have luck with also adding the corresponding video driver package.
</para>
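  <para>
   A rough sketch of such a launch on a non-NixOS system, assuming the
   Nixpkgs attributes <literal>libglvnd</literal> and
   <literal>mesa_drivers</literal> and a program named
   <command>someprogram</command>:
  </para>
<screen>
<prompt>$ </prompt>LD_LIBRARY_PATH="$(nix-build --no-out-link '&lt;nixpkgs&gt;' -A libglvnd)/lib:$(nix-build --no-out-link '&lt;nixpkgs&gt;' -A mesa_drivers)/lib:$LD_LIBRARY_PATH" someprogram
</screen>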
</section>
<section xml:id="locales">
<title>Locales</title>
<para>
To allow simultaneous use of packages linked against different versions of
<literal>glibc</literal> with different locale archive formats, Nixpkgs
patches <literal>glibc</literal> to rely on the
<literal>LOCALE_ARCHIVE</literal> environment variable.
</para>
<para>
On non-NixOS distributions this variable is obviously not set. This can
cause regressions in language support or even crashes in some
Nixpkgs-provided programs. The simplest way to mitigate this problem is
exporting the <literal>LOCALE_ARCHIVE</literal> variable pointing to
<literal>${glibcLocales}/lib/locale/locale-archive</literal>. The drawback
(and the reason this is not the default) is the relatively large (a hundred
MiB) size of the full set of locales. It is possible to build a custom set
of locales by overriding parameters <literal>allLocales</literal> and
<literal>locales</literal> of the package.
</para>
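  <para>
   A minimal sketch of exporting the variable on a non-NixOS system, assuming
   <literal>&lt;nixpkgs&gt;</literal> can be resolved by Nix:
  </para>
<screen>
<prompt>$ </prompt>export LOCALE_ARCHIVE="$(nix-build --no-out-link '&lt;nixpkgs&gt;' -A glibcLocales)/lib/locale/locale-archive"
</screen>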
</section>
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install those
packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
You can install it like any other package via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages. It will
not <literal>configure</literal> them for us. To do this, we need to
provide a configuration file. Luckily, it is possible to do this from
within Nix! By modifying the above example, we can make Emacs load a custom
config file. The key is to create a package that provides a
<filename>default.el</filename> file in
<filename>/share/emacs/site-lisp/</filename>. Emacs knows to load this
file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to
the user's personal config. You can always disable it by passing
<command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this
package set has some priorities imposed on packages (with the lowest
priority assigned to Melpa Unstable, and the highest for packages manually
defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you
can't control these priorities when some package is installed as a
dependency. You can override it on a per-package basis, providing all the
required dependencies manually, but that is tedious and there is always a
possibility that an unwanted dependency will sneak in through some other
package. To completely override such a package you can use
<varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these packages will use the haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>
<section xml:id="dlib">
<title>DLib</title>
<para>
<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which
provides several machine learning algorithms.
</para>
<section xml:id="compiling-without-avx-support">
<title>Compiling without AVX support</title>
<para>
Especially older CPUs don't support
<link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link>
(<abbrev>Advanced Vector Extensions</abbrev>) instructions that are used by DLib to
optimize its algorithms.
</para>
<para>
On affected hardware, errors like <literal>Illegal instruction</literal> will occur.
In those cases AVX support needs to be disabled:
<programlisting>self: super: {
dlib = super.dlib.override { avxSupport = false; };
}</programlisting>
</para>
</section>
</section>
<section xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and
developers) of Nixpkgs want to limit and tightly control their exposure to
unfree software. At the same time, many users need (or want)
to run some specific
pieces of proprietary software. Nixpkgs includes some expressions for unfree
software packages. By default unfree software cannot be installed and
doesn't show up in searches. To allow installing unfree software in a
single Nix invocation one can export
<literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users
can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining
<literal>allowUnfreePredicate</literal> function in config; it takes the
<literal>mkDerivation</literal> parameter attrset and returns
<literal>true</literal> for unfree packages that should be allowed.
</para>
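  <para>
   A minimal sketch of such a predicate in the Nixpkgs configuration
   (<filename>~/.config/nixpkgs/config.nix</filename>; the package names are
   illustrative):
  </para>
<programlisting>{
  allowUnfreePredicate = pkg: builtins.elem (builtins.parseDrvName pkg.name).name [
    "steam"
    "steam-original"
    "steam-runtime"
  ];
}</programlisting>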
</section>
<section xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as
an i686 package (the amd64 package only has documentation). When unpacked,
it has a script called <filename>steam</filename> that in Ubuntu (their
target distro) would go to <filename>/usr/bin</filename>. When run for the
first time, this script copies some files to the user's home, which include
another script that is ultimately responsible for launching the steam
binary, which is also in <varname>$HOME</varname>.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point
there. Similarly for <filename>/usr/bin/python</filename>.
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib</filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in <varname>$HOME</varname> cannot be patched, as
it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched, as it is also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible
chroot environment, as documented
<link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>.
This allows us to have binaries in the expected paths without disrupting
the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio; this will enable 32-bit ALSA application integration.
To use the Steam controller or other Steam supported controllers such as
the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS
17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that
conflicts with the one dynamically loaded by radeonsi_dri.so. If you
get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at
<link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this
pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message
like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
you need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for steam can also be used to run other
Linux games that expect an FHS environment. To do it, add
<programlisting>(pkgs.steam.override {
  nativeOnly = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>
<section xml:id="sec-citrix">
<title>Citrix Receiver &amp; Citrix Workspace App</title>
<para>
<note>
<para>
Please note that the <literal>citrix_receiver</literal> package has been deprecated since its
development was <link xlink:href="https://docs.citrix.com/en-us/citrix-workspace-app.html">discontinued by upstream</link>
and will be replaced by <link xlink:href="https://www.citrix.com/products/workspace-app/">the citrix workspace app</link>.
</para>
</note>
<link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> and
<link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link>
are remote desktop viewers which provide access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link>
installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license
agreements of the vendor for
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">Citrix Receiver</link>
or <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link>
need to be accepted first.
Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>.
With the archive available
in the Nix store, the package can be built and installed with Nix.
</para>
<warning>
<title>Caution with <command>nix-shell</command> installs</title>
<para>
It's recommended to install <literal>Citrix Receiver</literal>
and/or <literal>Citrix Workspace</literal> using
<literal>nix-env -i</literal> or globally to
ensure that the <literal>.desktop</literal> files are installed properly
into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to
open <literal>.ica</literal> files automatically from the browser to start
a Citrix connection.
</para>
</warning>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> and <literal>Citrix Workspace App</literal>
in <literal>nixpkgs</literal> trust several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the
Mozilla database</link> by default. However several companies using Citrix
might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In
order to work around this issue the package provides a simple mechanism to
add custom certificates without rebuilding the entire package using
<literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_workspace.override { # the same applies for `citrix_receiver` if used.
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
</chapter>

View File

@ -1,105 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-platform-nodes">
<title>Platform Notes</title>
<section xml:id="sec-darwin">
<title>Darwin (macOS)</title>
<para>
Some common issues when packaging software for Darwin:
</para>
<itemizedlist>
<listitem>
<para>
The Darwin <literal>stdenv</literal> uses clang instead of gcc. When
referring to the compiler, <varname>$CC</varname> or <command>cc</command>
will work in both cases. Some builds hardcode gcc/g++ in their build
scripts; that can usually be fixed by using something like
<literal>makeFlags = [ "CC=cc" ];</literal> or by patching the build
scripts.
</para>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
buildPhase = ''
$CC -o hello hello.c
'';
}
</programlisting>
</listitem>
<listitem>
<para>
On Darwin, libraries are linked using absolute paths; libraries are
resolved by their <literal>install_name</literal> at link time. Sometimes
packages won't set this correctly, causing the library lookups to fail at
runtime. This can be fixed by adding extra linker flags or by running
<command>install_name_tool -id</command> during the
<function>fixupPhase</function>.
</para>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
makeFlags = stdenv.lib.optional stdenv.isDarwin "LDFLAGS=-Wl,-install_name,$(out)/lib/libfoo.dylib";
}
</programlisting>
</listitem>
<listitem>
<para>
Even if the libraries are linked using absolute paths and resolved via
their <literal>install_name</literal> correctly, tests can sometimes fail
to run binaries. This happens because the <varname>checkPhase</varname>
runs before the libraries are installed.
</para>
<para>
This can usually be solved by running the tests after the
<varname>installPhase</varname> or alternatively by using
<varname>DYLD_LIBRARY_PATH</varname>. More information about this variable
can be found in the <citerefentry>
<refentrytitle>dyld</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> manpage.
</para>
<programlisting>
dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
Reason: image not found
./tests/jqtest: line 5: 75779 Abort trap: 6
</programlisting>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
doInstallCheck = true;
installCheckTarget = "check";
}
</programlisting>
</listitem>
<listitem>
<para>
Some packages assume Xcode is available and use <command>xcrun</command>
to resolve build tools like <command>clang</command>, etc. This causes
errors like <code>xcode-select: error: no developer tools were found at
'/Applications/Xcode.app'</code> while the build doesn't actually depend
on Xcode.
</para>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
prePatch = ''
substituteInPlace Makefile \
--replace '/usr/bin/xcrun clang' clang
'';
}
</programlisting>
<para>
The package <literal>xcbuild</literal> can be used to build projects that
really depend on Xcode. However, this replacement is not 100% compatible
with Xcode and can occasionally cause issues.
</para>
</listitem>
</itemizedlist>
</section>
</chapter>

52
doc/preface.chapter.md Normal file
View File

@ -0,0 +1,52 @@
---
title: Preface
author: Frederik Rietdijk
date: 2015-11-25
---
# Preface
The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the
[Nix package manager](https://nixos.org/nix/), released under a
[permissive MIT/X11 license](https://github.com/NixOS/nixpkgs/blob/master/COPYING).
Packages are available for several platforms, and can be used with the Nix
package manager on most GNU/Linux distributions as well as [NixOS](https://nixos.org/nixos).
This manual primarily describes how to write packages for the Nix Packages collection
(Nixpkgs). Thus it's mainly for packagers and developers who want to add packages to
Nixpkgs. If you like to learn more about the Nix package manager and the Nix
expression language, then you are kindly referred to the [Nix manual](https://nixos.org/nix/manual/).
The NixOS distribution is documented in the [NixOS manual](https://nixos.org/nixos/manual/).
## Overview of Nixpkgs
Nix expressions describe how to build packages from source and are collected in
the [nixpkgs repository](https://github.com/NixOS/nixpkgs). Also included in the
collection are Nix expressions for
[NixOS modules](https://nixos.org/nixos/manual/index.html#sec-writing-modules).
With these expressions the Nix package manager can build binary packages.
Packages, including the Nix packages collection, are distributed through
[channels](https://nixos.org/nix/manual/#sec-channels). The collection is
distributed for users of Nix on non-NixOS distributions through the channel
`nixpkgs`. Users of NixOS generally use one of the `nixos-*` channels, e.g.
`nixos-19.09`, which includes all packages and modules for the stable NixOS
19.09. Stable NixOS releases are generally only given
security updates. More up to date packages and modules are available via the
`nixos-unstable` channel.
Both `nixos-unstable` and `nixpkgs` follow the `master` branch of the Nixpkgs
repository, although both do lag the `master` branch by generally
[a couple of days](https://howoldis.herokuapp.com/). Updates to a channel are
distributed as soon as all tests for that channel pass, e.g.
[this table](https://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
shows the status of tests for the `nixpkgs` channel.
The tests are conducted by a cluster called [Hydra](http://nixos.org/hydra/),
which also builds binary packages from the Nix expressions in Nixpkgs for
`x86_64-linux`, `i686-linux` and `x86_64-darwin`.
The binaries are made available via a [binary cache](https://cache.nixos.org).
The current Nix expressions of the channels are available in the
[`nixpkgs`](https://github.com/NixOS/nixpkgs) repository in branches
that correspond to the channel names (e.g. `nixos-19.09-small`).

Some files were not shown because too many files have changed in this diff.