Merge master into staging-next

github-actions[bot] 2024-02-08 18:01:04 +00:00 committed by GitHub
commit 13d222c591
66 changed files with 1027 additions and 506 deletions


@ -1088,238 +1088,482 @@ If you don't specify a `name` attribute, you'll encounter an evaluation error an
## Environment Helpers {#ssec-pkgs-dockerTools-helpers}
When building Docker images with Nix, you might also want to add certain files that are expected to be available globally by the software you're packaging.
Simple examples are the `env` utility in `/usr/bin/env`, or trusted root TLS/SSL certificates.
Such files will most likely not be included if you're building a Docker image from scratch with Nix, and they might also not be included if you're starting from a Docker image that doesn't include them.
The helpers in this section are packages that provide some of these commonly-needed global files.

Most of these helpers are packages, which means you have to add them to the list of contents to be included in the image (this changes depending on the function you're using to build the image).
For example, with `buildImage` you can include them in `copyToRoot` like this:

```nix
buildImage {
  name = "environment-example";
  copyToRoot = with pkgs.dockerTools; [
    usrBinEnv
    binSh
    caCertificates
    fakeNss
  ];
}
```

[](#ex-dockerTools-helpers-buildImage) and [](#ex-dockerTools-helpers-buildLayeredImage) show how to include these packages on `dockerTools` functions that build an image.
For more details on how that works, see the documentation for the function you're using.
### usrBinEnv {#sssec-pkgs-dockerTools-helpers-usrBinEnv}
This provides the `env` utility at `/usr/bin/env`.
This is currently implemented by linking to the `env` binary from the `coreutils` package, but is considered an implementation detail that could change in the future.
### binSh {#sssec-pkgs-dockerTools-helpers-binSh}
This provides a `/bin/sh` link to the `bash` binary from the `bashInteractive` package.
Because of this, it supports cases such as running a command interactively in a container (for example by running `docker run -it <image_name>`).
### caCertificates {#sssec-pkgs-dockerTools-helpers-caCertificates}
This adds trusted root TLS/SSL certificates from the `cacert` package in multiple locations in an attempt to be compatible with binaries built for multiple Linux distributions.
The locations currently used are:
- `/etc/ssl/certs/ca-bundle.crt`
- `/etc/ssl/certs/ca-certificates.crt`
- `/etc/pki/tls/certs/ca-bundle.crt`
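As a minimal sketch (the image name is arbitrary and the use of `curl` is purely illustrative), an image that needs to make verified TLS connections could include the helper like this:

```nix
{ dockerTools, curl }:
dockerTools.buildImage {
  name = "tls-example";
  tag = "latest";

  copyToRoot = [
    curl
    # Provides trusted root certificates in the locations listed above,
    # so tools such as curl can verify TLS connections inside the container.
    dockerTools.caCertificates
  ];
}
```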
[]{#ssec-pkgs-dockerTools-fakeNss}
### fakeNss {#sssec-pkgs-dockerTools-helpers-fakeNss}
Provides `/etc/passwd` and `/etc/group` files that contain `root` and `nobody`, which is useful when packaging binaries that insist on using NSS to look up usernames/groups (like nginx).
This is a re-export of the `fakeNss` package from Nixpkgs.
See [](#sec-fakeNss).
### shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
This is a string containing a script that sets up the base files needed for [`shadow`](https://github.com/shadow-maint/shadow) to manage users and groups (using the `shadow` package from Nixpkgs), creating them only if they don't already exist, and alters `PATH` to make all of `shadow`'s utilities available in the same script.
It is intended to be used with other dockerTools functions in attributes that expect scripts.
After the script in `shadowSetup` runs, you'll then be able to add more commands that make use of the utilities in `shadow`, such as adding any extra users and/or groups.
See [](#ex-dockerTools-shadowSetup-buildImage) and [](#ex-dockerTools-shadowSetup-buildLayeredImage) to better understand how to use it.
`shadowSetup` achieves a result similar to [`fakeNss`](#sssec-pkgs-dockerTools-helpers-fakeNss), but only sets up a `root` user with different values for the home directory and the shell to use, in addition to setting up files for [PAM](https://en.wikipedia.org/wiki/Linux_PAM) and a {manpage}`login.defs(5)` file.
:::{.caution}
Using both `fakeNss` and `shadowSetup` at the same time will either cause your build to break or produce unexpected results.
Use either `fakeNss` or `shadowSetup` depending on your use case, but avoid using both.
:::
:::{.note}
When used with [`buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage) or [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage), you will have to set the `enableFakechroot` attribute to `true`, or else the script in `shadowSetup` won't run properly.
See [](#ex-dockerTools-shadowSetup-buildLayeredImage).
:::
### Examples {#ssec-pkgs-dockerTools-helpers-examples}
:::{.example #ex-dockerTools-helpers-buildImage}
# Using `dockerTools`'s environment helpers with `buildImage`
This example adds the [`binSh`](#sssec-pkgs-dockerTools-helpers-binSh) helper to a basic Docker image built with [`dockerTools.buildImage`](#ssec-pkgs-dockerTools-buildImage).
This helper makes it possible to enter a shell inside the container.
This is the `buildImage` equivalent of [](#ex-dockerTools-helpers-buildLayeredImage).
```nix
{ dockerTools, hello }:
dockerTools.buildImage {
  name = "env-helpers";
  tag = "latest";

  copyToRoot = [
    hello
    dockerTools.binSh
  ];
}
```

After building the image and loading it in Docker, we can create a container based on it and enter a shell inside the container.
This is made possible by `binSh`.
```shell
$ nix-build
(some output removed for clarity)
/nix/store/2p0i3i04cgjlk71hsn7ll4kxaxxiv4qg-docker-image-env-helpers.tar.gz
$ docker load -i /nix/store/2p0i3i04cgjlk71hsn7ll4kxaxxiv4qg-docker-image-env-helpers.tar.gz
(output removed for clarity)
$ docker run --rm -it env-helpers:latest /bin/sh
sh-5.2# help
GNU bash, version 5.2.21(1)-release (x86_64-pc-linux-gnu)
(rest of output removed for clarity)
```
:::
:::{.example #ex-dockerTools-helpers-buildLayeredImage}
# Using `dockerTools`'s environment helpers with `buildLayeredImage`

This example adds the [`binSh`](#sssec-pkgs-dockerTools-helpers-binSh) helper to a basic Docker image built with [`dockerTools.buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage).
This helper makes it possible to enter a shell inside the container.
This is the `buildLayeredImage` equivalent of [](#ex-dockerTools-helpers-buildImage).

```nix
{ dockerTools, hello }:
dockerTools.buildLayeredImage {
  name = "env-helpers";
  tag = "latest";

  contents = [
    hello
    dockerTools.binSh
  ];

  config = {
    Cmd = [ "/bin/hello" ];
  };
}
```
After building the image and loading it in Docker, we can create a container based on it and enter a shell inside the container.
This is made possible by `binSh`.
```shell
$ nix-build
(some output removed for clarity)
/nix/store/rpf47f4z5b9qr4db4ach9yr4b85hjhxq-env-helpers.tar.gz
$ docker load -i /nix/store/rpf47f4z5b9qr4db4ach9yr4b85hjhxq-env-helpers.tar.gz
(output removed for clarity)
$ docker run --rm -it env-helpers:latest /bin/sh
sh-5.2# help
GNU bash, version 5.2.21(1)-release (x86_64-pc-linux-gnu)
(rest of output removed for clarity)
```
:::
:::{.example #ex-dockerTools-shadowSetup-buildImage}
# Using `dockerTools.shadowSetup` with `dockerTools.buildImage`

This is an example that shows how to use `shadowSetup` with `dockerTools.buildImage`.
Note that the extra script in `runAsRoot` uses `groupadd` and `useradd`, which are binaries provided by the `shadow` package.
These binaries are added to the `PATH` by the `shadowSetup` script, but only for the duration of `runAsRoot`.

```nix
{ dockerTools, hello }:
dockerTools.buildImage {
  name = "shadow-basic";
  tag = "latest";

  copyToRoot = [ hello ];

  runAsRoot = ''
    ${dockerTools.shadowSetup}
    groupadd -r hello
    useradd -r -g hello hello
    mkdir /data
    chown hello:hello /data
  '';

  config = {
    Cmd = [ "/bin/hello" ];
    WorkingDir = "/data";
  };
}
```
:::
:::{.example #ex-dockerTools-shadowSetup-buildLayeredImage}
# Using `dockerTools.shadowSetup` with `dockerTools.buildLayeredImage`
It accomplishes the same thing as [](#ex-dockerTools-shadowSetup-buildImage), but using `buildLayeredImage` instead.
Note that the extra script in `fakeRootCommands` uses `groupadd` and `useradd`, which are binaries provided by the `shadow` package.
These binaries are added to the `PATH` by the `shadowSetup` script, but only for the duration of `fakeRootCommands`.
```nix
{ dockerTools, hello }:
dockerTools.buildLayeredImage {
name = "shadow-basic";
tag = "latest";
contents = [ hello ];
fakeRootCommands = ''
${dockerTools.shadowSetup}
groupadd -r hello
useradd -r -g hello hello
mkdir /data
chown hello:hello /data
'';
enableFakechroot = true;
config = {
Cmd = [ "/bin/hello" ];
WorkingDir = "/data";
};
}
```
:::
[]{#ssec-pkgs-dockerTools-buildNixShellImage-arguments}
## buildNixShellImage {#ssec-pkgs-dockerTools-buildNixShellImage}
`buildNixShellImage` uses [`streamNixShellImage`](#ssec-pkgs-dockerTools-streamNixShellImage) underneath to build a compressed Docker-compatible repository tarball of an image that sets up an environment similar to that of running `nix-shell` on a derivation.
Basically, `buildNixShellImage` runs the script created by `streamNixShellImage` to save the compressed image in the Nix store.
`buildNixShellImage` supports the same options as `streamNixShellImage`, see [`streamNixShellImage`](#ssec-pkgs-dockerTools-streamNixShellImage) for details.
[]{#ssec-pkgs-dockerTools-buildNixShellImage-example}
### Examples {#ssec-pkgs-dockerTools-buildNixShellImage-examples}
:::{.example #ex-dockerTools-buildNixShellImage-hello}
# Building a Docker image with `buildNixShellImage` with the build environment for the `hello` package
This example shows how to build the `hello` package inside a Docker container built with `buildNixShellImage`.
The Docker image generated will have a name like `hello-<version>-env` and tag `latest`.
This example is the `buildNixShellImage` equivalent of [](#ex-dockerTools-streamNixShellImage-hello).
```nix
{ dockerTools, hello }:
dockerTools.buildNixShellImage {
drv = hello;
tag = "latest";
}
```
The result of building this package is a `.tar.gz` file that can be loaded into Docker:

```shell
$ nix-build
(some output removed for clarity)
/nix/store/pkj1sgzaz31wl0pbvbg3yp5b3kxndqms-hello-2.12.1-env.tar.gz

$ docker load -i /nix/store/pkj1sgzaz31wl0pbvbg3yp5b3kxndqms-hello-2.12.1-env.tar.gz
(some output removed for clarity)
Loaded image: hello-2.12.1-env:latest
```

After starting an interactive container, the derivation can be built by running `buildDerivation`, and the output can be executed as expected:

```shell
$ docker run -it hello-2.12.1-env:latest
[nix-shell:~]$ buildDerivation
Running phase: unpackPhase
unpacking source archive /nix/store/pa10z4ngm0g83kx9mssrqzz30s84vq7k-hello-2.12.1.tar.gz
source root is hello-2.12.1
(some output removed for clarity)
Running phase: fixupPhase
shrinking RPATHs of ELF executables and libraries in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1
shrinking /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/bin/hello
checking for references to /build/ in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1...
gzipping man pages under /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/share/man/
patching script interpreter paths in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1
stripping (with command strip and flags -S -p) in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/bin

[nix-shell:~]$ $out/bin/hello
Hello, world!
```
:::
## streamNixShellImage {#ssec-pkgs-dockerTools-streamNixShellImage}
`streamNixShellImage` builds a **script** which, when run, will stream to stdout a Docker-compatible repository tarball of an image that sets up an environment similar to that of running `nix-shell` on a derivation.
This means that `streamNixShellImage` does not output an image into the Nix store, but only a script that builds the image, saving on IO and disk/cache space, particularly with large images.
See [](#ex-dockerTools-streamNixShellImage-hello) to understand how to load in Docker the image generated by this script.
The environment set up by `streamNixShellImage` somewhat resembles the Nix sandbox typically used by `nix-build`, with a major difference being that access to the internet is allowed.
It also behaves like an interactive `nix-shell`, running things like `shellHook` (see [](#ex-dockerTools-streamNixShellImage-addingShellHook)) and setting an interactive prompt.
If the derivation is buildable (i.e. `nix-build` can be used on it), running `buildDerivation` in the container will build the derivation, with all its outputs being available in the correct `/nix/store` paths, pointed to by the respective environment variables (e.g. `$out`).
::: {.caution}
The environment in the image doesn't match `nix-shell` or `nix-build` exactly, and this function is known not to work correctly for fixed-output derivations, content-addressed derivations, impure derivations and other special types of derivations.
:::
### Inputs {#ssec-pkgs-dockerTools-streamNixShellImage-inputs}
`streamNixShellImage` expects one argument with the following attributes:
`drv` (Attribute Set)
: The derivation for which the environment in the image will be set up.
Adding packages to the Docker image is possible by extending the list of `nativeBuildInputs` of this derivation.
See [](#ex-dockerTools-streamNixShellImage-extendingBuildInputs) for how to do that.
Similarly, you can extend the image initialization script by extending `shellHook`.
[](#ex-dockerTools-streamNixShellImage-addingShellHook) shows how to do that.
`name` (String; _optional_)
: The name of the generated image.
_Default value:_ the value of `drv.name + "-env"`.
`tag` (String or Null; _optional_)
: Tag of the generated image.
If `null`, the hash of the nix derivation that builds the Docker image will be used as the tag.
_Default value:_ `null`.
`uid` (Number; _optional_)
: The user ID to run the container as.
This can be seen as a `nixbld` build user.
_Default value:_ 1000.
`gid` (Number; _optional_)
: The group ID to run the container as.
This can be seen as a `nixbld` build group.
_Default value:_ 1000.
`homeDirectory` (String; _optional_)
: The home directory of the user the container is running as.
_Default value:_ `/build`.
`shell` (String; _optional_)
: The path to the `bash` binary to use as the shell.
This shell is started when running the image.
This can be seen as an equivalent of the `NIX_BUILD_SHELL` [environment variable](https://nixos.org/manual/nix/stable/command-ref/nix-shell.html#environment-variables) for {manpage}`nix-shell(1)`.
_Default value:_ the `bash` binary from the `bashInteractive` package.
`command` (String or Null; _optional_)
: If specified, this command will be run in the environment of the derivation in an interactive shell.
A call to `exit` will be added after the command if it is specified, so the shell will exit after it's finished running.
This can be seen as an equivalent of the `--command` option in {manpage}`nix-shell(1)`.
_Default value:_ `null`.
`run` (String or Null; _optional_)
: Similar to the `command` attribute, but runs the command in a non-interactive shell instead.
A call to `exit` will be added after the command if it is specified, so the shell will exit after it's finished running.
This can be seen as an equivalent of the `--run` option in {manpage}`nix-shell(1)`.
_Default value:_ `null`.
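As a brief sketch of how these attributes can be combined (using `hello` to mirror the examples below), the following would build the derivation non-interactively as soon as the container starts, similar to `nix-shell --run`:

```nix
{ dockerTools, hello }:
dockerTools.streamNixShellImage {
  drv = hello;
  tag = "latest";
  # Runs buildDerivation in a non-interactive shell and exits when it finishes.
  run = "buildDerivation";
}
```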
### Examples {#ssec-pkgs-dockerTools-streamNixShellImage-examples}
:::{.example #ex-dockerTools-streamNixShellImage-hello}
# Building a Docker image with `streamNixShellImage` with the build environment for the `hello` package
This example shows how to build the `hello` package inside a Docker container built with `streamNixShellImage`.
The Docker image generated will have a name like `hello-<version>-env` and tag `latest`.
This example is the `streamNixShellImage` equivalent of [](#ex-dockerTools-buildNixShellImage-hello).
```nix
{ dockerTools, hello }:
dockerTools.streamNixShellImage {
drv = hello;
tag = "latest";
}
```
The result of building this package is a script.
Running this script and piping it into `docker load` gives you the same image that was built in [](#ex-dockerTools-buildNixShellImage-hello).

```shell
$ nix-build
(some output removed for clarity)
/nix/store/8vhznpz2frqazxnd8pgdvf38jscdypax-stream-hello-2.12.1-env

$ /nix/store/8vhznpz2frqazxnd8pgdvf38jscdypax-stream-hello-2.12.1-env | docker load
(some output removed for clarity)
Loaded image: hello-2.12.1-env:latest
```

After starting an interactive container, the derivation can be built by running `buildDerivation`, and the output can be executed as expected:

```shell
$ docker run -it hello-2.12.1-env:latest
[nix-shell:~]$ buildDerivation
Running phase: unpackPhase
unpacking source archive /nix/store/pa10z4ngm0g83kx9mssrqzz30s84vq7k-hello-2.12.1.tar.gz
source root is hello-2.12.1
(some output removed for clarity)
Running phase: fixupPhase
shrinking RPATHs of ELF executables and libraries in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1
shrinking /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/bin/hello
checking for references to /build/ in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1...
gzipping man pages under /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/share/man/
patching script interpreter paths in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1
stripping (with command strip and flags -S -p) in /nix/store/f2vs29jibd7lwxyj35r9h87h6brgdysz-hello-2.12.1/bin

[nix-shell:~]$ $out/bin/hello
Hello, world!
```
:::
:::{.example #ex-dockerTools-streamNixShellImage-extendingBuildInputs}
# Adding extra packages to a Docker image built with `streamNixShellImage`
This example shows how to add extra packages to an image built with `streamNixShellImage`.
In this case, we'll add the `cowsay` package.
The Docker image generated will have a name like `hello-<version>-env` and tag `latest`.
This example uses [](#ex-dockerTools-streamNixShellImage-hello) as a starting point.
```nix
{ dockerTools, cowsay, hello }:
dockerTools.streamNixShellImage {
tag = "latest";
drv = hello.overrideAttrs (old: {
nativeBuildInputs = old.nativeBuildInputs or [] ++ [
cowsay
];
});
}
```
The result of building this package is a script which can be run and piped into `docker load` to load the generated image.

```shell
$ nix-build
(some output removed for clarity)
/nix/store/h5abh0vljgzg381lna922gqknx6yc0v7-stream-hello-2.12.1-env

$ /nix/store/h5abh0vljgzg381lna922gqknx6yc0v7-stream-hello-2.12.1-env | docker load
(some output removed for clarity)
Loaded image: hello-2.12.1-env:latest
```
After starting an interactive container, we can verify the extra package is available by running `cowsay`:
```shell
$ docker run -it hello-2.12.1-env:latest
[nix-shell:~]$ cowsay "Hello, world!"
_______________
< Hello, world! >
---------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
```
:::
:::{.example #ex-dockerTools-streamNixShellImage-addingShellHook}
# Adding a `shellHook` to a Docker image built with `streamNixShellImage`
This example shows how to add a `shellHook` command to an image built with `streamNixShellImage`.
In this case, we'll simply output the string `Hello, world!`.
The Docker image generated will have a name like `hello-<version>-env` and tag `latest`.
This example uses [](#ex-dockerTools-streamNixShellImage-hello) as a starting point.
```nix
{ dockerTools, hello }:
dockerTools.streamNixShellImage {
tag = "latest";
drv = hello.overrideAttrs (old: {
shellHook = ''
${old.shellHook or ""}
echo "Hello, world!"
'';
});
}
```
The result of building this package is a script which can be run and piped into `docker load` to load the generated image.
```shell
$ nix-build
(some output removed for clarity)
/nix/store/iz4dhdvgzazl5vrgyz719iwjzjy6xlx1-stream-hello-2.12.1-env
$ /nix/store/iz4dhdvgzazl5vrgyz719iwjzjy6xlx1-stream-hello-2.12.1-env | docker load
(some output removed for clarity)
Loaded image: hello-2.12.1-env:latest
```
After starting an interactive container, we can see the result of the `shellHook`:
```shell
$ docker run -it hello-2.12.1-env:latest
Hello, world!
[nix-shell:~]$
```
:::


@ -3,6 +3,7 @@
This chapter describes several special build helpers.
```{=include=} sections
special/fakenss.section.md
special/fhs-environments.section.md
special/makesetuphook.section.md
special/mkshell.section.md


@ -0,0 +1,77 @@
# fakeNss {#sec-fakeNss}
Provides `/etc/passwd` and `/etc/group` files that contain `root` and `nobody`, allowing user/group lookups to work in binaries that insist on doing those.
This might be a better choice than a custom script running `useradd` and related utilities if you only need those files to exist with some entries.
`fakeNss` also provides `/etc/nsswitch.conf`, configuring NSS host resolution to first check `/etc/hosts` before checking DNS, since the default in the absence of a config file (`dns [!UNAVAIL=return] files`) is quite unexpected.
It also creates an empty directory at `/var/empty` because it uses that as the home directory for the `root` and `nobody` users.
The `/var/empty` directory can also be used as a `chroot` target to prevent file access in processes that do not need to access files, if your container runs such processes.
The user entries created by `fakeNss` use the `/bin/sh` shell, which is not provided by `fakeNss` because in most cases it won't be used.
If you need that to be available, see [`dockerTools.binSh`](#sssec-pkgs-dockerTools-helpers-binSh) or provide your own.
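As a minimal sketch (the image name is arbitrary), pairing the two with a `dockerTools` builder could look like this:

```nix
{ dockerTools, fakeNss }:
dockerTools.buildImage {
  name = "fakenss-with-shell";
  tag = "latest";

  copyToRoot = [
    fakeNss
    # Provides the /bin/sh referenced by the user entries created by fakeNss.
    dockerTools.binSh
  ];
}
```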
## Inputs {#sec-fakeNss-inputs}
`fakeNss` is made available in Nixpkgs as a package rather than a function, but it has two attributes that can be overridden and might be useful in particular cases.
For more details on how overriding works, see [](#ex-fakeNss-overriding) and [](#sec-pkg-override).
`extraPasswdLines` (List of Strings; _optional_)
: A list of lines that will be added to `/etc/passwd`.
Useful if extra users need to exist in the output of `fakeNss`.
If `extraPasswdLines` is specified, it will **not** override the `root` and `nobody` entries created by `fakeNss`.
Those entries will always exist.
Lines specified here must follow the format in {manpage}`passwd(5)`.
_Default value:_ `[]`.
`extraGroupLines` (List of Strings; _optional_)
: A list of lines that will be added to `/etc/group`.
Useful if extra groups need to exist in the output of `fakeNss`.
If `extraGroupLines` is specified, it will **not** override the `root` and `nobody` entries created by `fakeNss`.
Those entries will always exist.
Lines specified here must follow the format in {manpage}`group(5)`.
_Default value:_ `[]`.
## Examples {#sec-fakeNss-examples}
:::{.example #ex-fakeNss-dockerTools-buildImage}
# Using `fakeNss` with `dockerTools.buildImage`
This example shows how to use `fakeNss` as-is.
It is useful with functions in `dockerTools` to allow building Docker images that have the `/etc/passwd` and `/etc/group` files.
This example includes the `hello` binary in the image so it can do something besides just have the extra files.
```nix
{ dockerTools, fakeNss, hello }:
dockerTools.buildImage {
name = "image-with-passwd";
tag = "latest";
copyToRoot = [ fakeNss hello ];
config = {
Cmd = [ "/bin/hello" ];
};
}
```
:::
:::{.example #ex-fakeNss-overriding}
# Using `fakeNss` with an override to add extra lines
The following code uses `override` to add extra lines to `/etc/passwd` and `/etc/group` to create another user and group entry.
```nix
{ fakeNss }:
fakeNss.override {
extraPasswdLines = ["newuser:x:9001:9001:new user:/var/empty:/bin/sh"];
extraGroupLines = ["newuser:x:9001:"];
}
```
:::


@ -2,7 +2,7 @@
This hook helps with installing manpages and shell completion files. It exposes 2 shell functions `installManPage` and `installShellCompletion` that can be used from your `postInstall` hook.
The `installManPage` function takes one or more paths to manpages to install. The manpages must have a section suffix, and may optionally be compressed (with a `.gz` suffix). This function will place them into the correct `share/man/man<section>/` directory, in [`outputMan`](#outputman).
The `installShellCompletion` function takes one or more paths to shell completion files. By default it will autodetect the shell type from the completion file extension, but you may also specify it by passing one of `--bash`, `--fish`, or `--zsh`. These flags apply to all paths listed after them (up until another shell flag is given). Each path may also have a custom installation name provided by providing a flag `--name NAME` before the path. If this flag is not provided, zsh completions will be renamed automatically such that `foobar.zsh` becomes `_foobar`. A root name may be provided for all paths using the flag `--cmd NAME`; this synthesizes the appropriate name depending on the shell (e.g. `--cmd foo` will synthesize the name `foo.bash` for bash and `_foo` for zsh). The path may also be a fifo or named fd (such as produced by `<(cmd)`), in which case the shell and name must be provided.
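As a minimal sketch of how the two functions are typically combined (the file names here are hypothetical), both are called from `postInstall`:

```nix
{
  nativeBuildInputs = [ installShellFiles ];

  postInstall = ''
    # Installed into share/man/man1/ of the man output.
    installManPage docs/foobar.1

    # --cmd synthesizes the correct installation name for each shell.
    installShellCompletion --cmd foobar \
      --bash completions/foobar.bash \
      --fish completions/foobar.fish \
      --zsh completions/_foobar
  '';
}
```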


@ -93,7 +93,11 @@ The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given
To package Dotnet applications, you can use `buildDotnetModule`. This has similar arguments to `stdenv.mkDerivation`, with the following additions:
* `projectFile` is used for specifying the dotnet project file, relative to the source root. These have `.sln` (entire solution) or `.csproj` (single project) file extensions. This can be a list of multiple projects as well. When omitted, will attempt to find and build the solution (`.sln`). If running into problems, make sure to set it to a file (or a list of files) with the `.csproj` extension - building applications as entire solutions is not fully supported by the .NET CLI.
* `nugetDeps` takes either a path to a `deps.nix` file, or a derivation. The `deps.nix` file can be generated using the script attached to `passthru.fetch-deps`. If the argument is a derivation, it will be used directly and it is assumed to have the same output as `mkNugetDeps`.
::: {.note}
For more detail about managing the `deps.nix` file, see [Generating and updating NuGet dependencies](#generating-and-updating-nuget-dependencies)
:::
* `packNupkg` is used to pack project as a `nupkg`, and installs it to `$out/share`. If set to `true`, the derivation can be used as a dependency for another dotnet project by adding it to `projectReferences`.
* `projectReferences` can be used to resolve `ProjectReference` project items. Referenced projects can be packed with `buildDotnetModule` by setting the `packNupkg = true` attribute and passing a list of derivations to `projectReferences`. Since we are sharing referenced projects as NuGets they must be added to csproj/fsproj files as `PackageReference` as well.
For example, your project has a local dependency:
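A hedged sketch of what the Nix side of this can look like (all names and paths are hypothetical):

```nix
{ buildDotnetModule }:

let
  # The local dependency, packed as a NuGet package via packNupkg.
  my-library = buildDotnetModule {
    pname = "my-library";
    version = "1.0.0";
    src = ./my-library;
    projectFile = "MyLibrary.csproj";
    nugetDeps = ./my-library-deps.nix;
    packNupkg = true;
  };
in
buildDotnetModule {
  pname = "my-app";
  version = "1.0.0";
  src = ./my-app;
  projectFile = "MyApp.csproj";
  nugetDeps = ./my-app-deps.nix;
  # MyApp.csproj must also reference my-library as a PackageReference.
  projectReferences = [ my-library ];
}
```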
@ -156,6 +160,8 @@ in buildDotnetModule rec {
}
```
Keep in mind that you can tag the [`@NixOS/dotnet`](https://github.com/orgs/nixos/teams/dotnet) team for help and code review.
## Dotnet global tools {#dotnet-global-tools}
[.NET Global tools](https://learn.microsoft.com/en-us/dotnet/core/tools/global-tools) are a mechanism provided by the dotnet CLI to install .NET binaries from Nuget packages.
@ -212,5 +218,43 @@ buildDotnetGlobalTool {
};
}
```
## Generating and updating NuGet dependencies {#generating-and-updating-nuget-dependencies}
First, ensure that you have cloned the upstream repository and that you are inside it.
Then, restore the packages to the `out` directory:
```bash
$ dotnet restore --packages out
Determining projects to restore...
Restored /home/lychee/Celeste64/Celeste64.csproj (in 1.21 sec).
```
Next, use the `nuget-to-nix` tool provided in nixpkgs to generate a `deps.nix` lockfile from the packages inside the `out` directory:
```bash
$ nuget-to-nix out > deps.nix
```
`nuget-to-nix` will generate output similar to the following:
```
{ fetchNuGet }: [
(fetchNuGet { pname = "FosterFramework"; version = "0.1.15-alpha"; sha256 = "0pzsdfbsfx28xfqljcwy100xhbs6wyx0z1d5qxgmv3l60di9xkll"; })
(fetchNuGet { pname = "Microsoft.AspNetCore.App.Runtime.linux-x64"; version = "8.0.1"; sha256 = "1gjz379y61ag9whi78qxx09bwkwcznkx2mzypgycibxk61g11da1"; })
(fetchNuGet { pname = "Microsoft.NET.ILLink.Tasks"; version = "8.0.1"; sha256 = "1drbgqdcvbpisjn8mqfgba1pwb6yri80qc4mfvyczqwrcsj5k2ja"; })
(fetchNuGet { pname = "Microsoft.NETCore.App.Runtime.linux-x64"; version = "8.0.1"; sha256 = "1g5b30f4l8a1zjjr3b8pk9mcqxkxqwa86362f84646xaj4iw3a4d"; })
(fetchNuGet { pname = "SharpGLTF.Core"; version = "1.0.0-alpha0031"; sha256 = "0ln78mkhbcxqvwnf944hbgg24vbsva2jpih6q3x82d3h7rl1pkh6"; })
(fetchNuGet { pname = "SharpGLTF.Runtime"; version = "1.0.0-alpha0031"; sha256 = "0lvb3asi3v0n718qf9y367km7qpkb9wci38y880nqvifpzllw0jg"; })
(fetchNuGet { pname = "Sledge.Formats"; version = "1.2.2"; sha256 = "1y0l66m9rym0p1y4ifjlmg3j9lsmhkvbh38frh40rpvf1axn2dyh"; })
(fetchNuGet { pname = "Sledge.Formats.Map"; version = "1.1.5"; sha256 = "1bww60hv9xcyxpvkzz5q3ybafdxxkw6knhv97phvpkw84pd0jil6"; })
(fetchNuGet { pname = "System.Numerics.Vectors"; version = "4.5.0"; sha256 = "1kzrj37yzawf1b19jq0253rcs8hsq1l2q8g69d7ipnhzb0h97m59"; })
]
```
Finally, you move the `deps.nix` file to the appropriate location to be used by `nugetDeps`, then you're all set!
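For reference, a package definition consuming that lockfile could look roughly like this (a minimal sketch with hypothetical names and paths, assuming a .NET 8 project):

```nix
{ buildDotnetModule, dotnetCorePackages }:

buildDotnetModule {
  pname = "celeste64-example";
  version = "0.1.0";
  src = ./.;                        # the checkout containing Celeste64.csproj
  projectFile = "Celeste64.csproj";
  nugetDeps = ./deps.nix;           # the lockfile generated above
  dotnet-sdk = dotnetCorePackages.sdk_8_0;
  dotnet-runtime = dotnetCorePackages.runtime_8_0;
}
```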
If you ever need to update the dependencies of a package, instead do the following:

* Run `nix-build -A package.fetch-deps` to generate the update script for `package`
* Run `./result deps.nix` to regenerate the lockfile to `deps.nix`; keep in mind that if a location isn't provided, it will write to a temporary path instead
* Finally, move the file where needed and look at its contents to confirm that it has updated the dependencies.
When packaging a new .NET application in nixpkgs, you can tag the [`@NixOS/dotnet`](https://github.com/orgs/nixos/teams/dotnet) team for help and code review.


@ -314,5 +314,9 @@
"systemd-veritysetup@.service(8)": "https://www.freedesktop.org/software/systemd/man/systemd-veritysetup@.service.html",
"systemd-volatile-root(8)": "https://www.freedesktop.org/software/systemd/man/systemd-volatile-root.html",
"systemd-xdg-autostart-generator(8)": "https://www.freedesktop.org/software/systemd/man/systemd-xdg-autostart-generator.html",
"udevadm(8)": "https://www.freedesktop.org/software/systemd/man/udevadm.html"
"udevadm(8)": "https://www.freedesktop.org/software/systemd/man/udevadm.html",
"passwd(5)": "https://man.archlinux.org/man/passwd.5",
"group(5)": "https://man.archlinux.org/man/group.5",
"login.defs(5)": "https://man.archlinux.org/man/login.defs.5",
"nix-shell(1)": "https://nixos.org/manual/nix/stable/command-ref/nix-shell.html"
}


@ -16019,7 +16019,7 @@
};
RGBCube = {
name = "RGBCube";
email = "rgbsphere+nixpkgs@gmail.com";
email = "nixpkgs@rgbcu.be";
github = "RGBCube";
githubId = 78925721;
keys = [{


@ -22,7 +22,8 @@ with lib;
let
workDir = if cfg.workDir == null then runtimeDir else cfg.workDir;
package = cfg.package.override { inherit (cfg) nodeRuntimes; };
# Support old github-runner versions which don't have the `nodeRuntimes` arg yet.
package = cfg.package.override (old: optionalAttrs (hasAttr "nodeRuntimes" old) { inherit (cfg) nodeRuntimes; });
in
{
description = "GitHub Actions runner";


@ -2,13 +2,13 @@
stdenv.mkDerivation (finalAttrs: {
pname = "tippecanoe";
version = "2.41.3";
version = "2.42.0";
src = fetchFromGitHub {
owner = "felt";
repo = "tippecanoe";
rev = finalAttrs.version;
hash = "sha256-yHX0hQbuPFaosBR/N7TmQKOHnd2LG6kkfGUBlaSkA8E=";
hash = "sha256-+IEgjjfotu2gLnaPyV29MEpVndgaZYRaFc92jvAKcWo=";
};
buildInputs = [ sqlite zlib ];


@ -8,16 +8,16 @@
buildGoModule rec {
pname = "karmor";
version = "1.0.0";
version = "1.1.0";
src = fetchFromGitHub {
owner = "kubearmor";
repo = "kubearmor-client";
rev = "v${version}";
hash = "sha256-TL/K1r76DV9CdKfVpE3Fn7N38lHqEF9Sxtthfew2l3w=";
hash = "sha256-HQJHtRi/ddKD+CNG3Ea61jz8zKcACBYCUR+qKbzADcI=";
};
vendorHash = "sha256-72gFtM+Z65VreeIamoBHXx2EsGCv8aDHmRz2aSQCU7Q=";
vendorHash = "sha256-Lzp6n66oMrzTk4oWERa8Btb3FwiASpSj8hdQmYxYges=";
nativeBuildInputs = [ installShellFiles ];


@ -10,16 +10,16 @@
buildGoModule rec {
pname = "werf";
version = "1.2.287";
version = "1.2.288";
src = fetchFromGitHub {
owner = "werf";
repo = "werf";
rev = "v${version}";
hash = "sha256-+xilQ9By8cbH/CDCxAocm2OlVnvh7efqcB/3cMZhc1w=";
hash = "sha256-NKSqg9lKKwK+b1dPpeQz4gp3KcVd4nZDhZR8+AAMTRc=";
};
vendorHash = "sha256-uzIUjG3Hv7wdsbX75wHZ8Z8fy/EPgRKH74VXUhThycE=";
vendorHash = "sha256-GRSGhepnXuTS3hgFanQgEdBtB+eyA7zUJ9W2qpE02LI=";
proxyVendor = true;


@ -805,6 +805,7 @@ rec {
'';
# This provides /bin/sh, pointing to bashInteractive.
# The use of bashInteractive here is intentional to support cases like `docker run -it <image_name>`, so keep these use cases in mind if making any changes to how this works.
binSh = runCommand "bin-sh" { } ''
mkdir -p $out/bin
ln -s ${bashInteractive}/bin/bash $out/bin/sh


@ -13,11 +13,11 @@
stdenv.mkDerivation (finalAttrs: {
name = "dorion";
version = "4.1.1";
version = "4.1.2";
src = fetchurl {
url = "https://github.com/SpikeHD/Dorion/releases/download/v${finalAttrs.version }/Dorion_${finalAttrs.version}_amd64.deb";
hash = "sha256-H+r5+TPZ1Yyn0nE4MJGlN9WEn13nA8fkI1ZmfFor5Lk=";
hash = "sha256-hpZF83QPRcRqI0wCnIu6CsNBe8b9H0KrDyp6CDYkOfQ=";
};
unpackCmd = ''


@ -1,8 +1,6 @@
{
lib,
fetchFromGitHub,
rustPlatform,
stdenv,
{ lib
, fetchFromGitHub
, rustPlatform
}:
rustPlatform.buildRustPackage rec {
pname = "parallel-disk-usage";
@ -17,14 +15,17 @@ rustPlatform.buildRustPackage rec {
cargoHash = "sha256-Jk9sNvApq4t/FoEzfjlDT2Td5sr38Jbdo6RoaOVQJK8=";
checkFlags = [
# test example is ordered wrong on some systems
# https://github.com/KSXGitHub/parallel-disk-usage/issues/251
"--skip=multiple_names"
];
meta = with lib; {
description = "Highly parallelized, blazing fast directory tree analyzer";
homepage = "https://github.com/KSXGitHub/parallel-disk-usage";
license = licenses.asl20;
maintainers = [maintainers.peret];
mainProgram = "pdu";
# broken due to unit test failure
# https://github.com/KSXGitHub/parallel-disk-usage/issues/251
broken = stdenv.isLinux && stdenv.isAarch64;
};
}


@ -1,13 +1,13 @@
{ lib, buildGoModule, fetchFromGitHub }:
buildGoModule rec {
pname = "pmtiles";
version = "1.14.0";
version = "1.14.1";
src = fetchFromGitHub {
owner = "protomaps";
repo = "go-pmtiles";
rev = "v${version}";
hash = "sha256-yIH5vJTrSH1y30nHU7jrem1kbXp1fO0mhLoGMrv4IAE=";
hash = "sha256-CnREcPXNehxOMZm/cuedkDeWtloc7TGWNmmoFZhSTZE=";
};
vendorHash = "sha256-tSQjCdgEXIGlSWcIB6lLQulAiEAebgW3pXL9Z2ujgIs=";


@ -17,12 +17,12 @@ let
in
stdenv.mkDerivation rec {
pname = "circt";
version = "1.64.0";
version = "1.65.0";
src = fetchFromGitHub {
owner = "llvm";
repo = "circt";
rev = "firtool-${version}";
sha256 = "sha256-tZ8IQa01hYVJZdUKPd0rMGfAScuhZPzpwP51WWXERGw=";
sha256 = "sha256-RYQAnvU+yoHGrU9zVvrD1/O80ioHEq2Cvo/MIjI6uTo=";
fetchSubmodules = true;
};


@ -5,13 +5,13 @@
stdenv.mkDerivation rec {
pname = "avrdude";
version = "7.2";
version = "7.3";
src = fetchFromGitHub {
owner = "avrdudes";
repo = pname;
rev = "v${version}";
sha256 = "sha256-/JyhMBcjNklyyXZEFZGTjrTNyafXEdHEhcLz6ZQx9aU=";
sha256 = "sha256-JqW3AOMmAfcy+PQRcqviWlxA6GoMSEfzIFt1pRYY7Dw=";
};
nativeBuildInputs = [ cmake bison flex ] ++ lib.optionals docSupport [


@ -365,14 +365,14 @@
buildPythonPackage rec {
pname = "boto3-stubs";
version = "1.34.36";
version = "1.34.37";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-AvhzNyVC7Seap0a5kIX5UyAyhUeyp7A0R7bZAMZ5XtI=";
hash = "sha256-xmGMcSa6wDN8BeFh6cQo/rxX1qJNf/Yt5G5ndh9ALFc=";
};
nativeBuildInputs = [


@ -9,7 +9,7 @@
buildPythonPackage rec {
pname = "botocore-stubs";
version = "1.34.36";
version = "1.34.37";
format = "pyproject";
disabled = pythonOlder "3.7";
@ -17,7 +17,7 @@ buildPythonPackage rec {
src = fetchPypi {
pname = "botocore_stubs";
inherit version;
hash = "sha256-+VvELnYPQr54AgvmqJ6lzrMHtgRzDiyiVdmMrkhoM40=";
hash = "sha256-1rzqimhyqkbTiQJ9xcAiJB/QogR6i4WKpQBeYVHtMKc=";
};
nativeBuildInputs = [


@ -20,7 +20,7 @@
buildPythonPackage rec {
pname = "es-client";
version = "8.12.4";
version = "8.12.5";
pyproject = true;
disabled = pythonOlder "3.7";
@ -29,7 +29,7 @@ buildPythonPackage rec {
owner = "untergeek";
repo = "es_client";
rev = "refs/tags/v${version}";
hash = "sha256-IjpnukZRDpflk/lh9aSyeuoj/bzZD0jiS1prBKkZwLk=";
hash = "sha256-gaeNIxHnNulUOGhYHf9dIgBSh2rJIdsYdpPT8OTyEdg=";
};
pythonRelaxDeps = true;


@ -11,7 +11,7 @@
buildPythonPackage rec {
pname = "flake8-bugbear";
version = "24.1.17";
version = "24.2.6";
format = "setuptools";
disabled = pythonOlder "3.7";
@ -20,7 +20,7 @@ buildPythonPackage rec {
owner = "PyCQA";
repo = pname;
rev = "refs/tags/${version}";
hash = "sha256-dkegdW+yTZVmtDJDo67dSkLvEFaqvOw17FpZA4JgHN0=";
hash = "sha256-9GuHgRCwHD7YP0XdoFip9rWyPtZtVme+c+nHjvBrB8k=";
};
propagatedBuildInputs = [


@ -19,14 +19,14 @@
buildPythonPackage rec {
pname = "google-cloud-asset";
version = "3.24.0";
version = "3.24.1";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-A9Ov5a6lpcJ+6diVEjFlLKMwROuSKO/lZOuGxN6Nn7U=";
hash = "sha256-aNTCDqj/0/qm4gwZrIKrn2yhgKshv1XwGlHd4zhzMgI=";
};
nativeBuildInputs = [


@ -15,14 +15,14 @@
buildPythonPackage rec {
pname = "google-cloud-dataproc";
version = "5.9.0";
version = "5.9.1";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-flH5yQBbxfG8sjYnFx3pzWJGpEd1EYpIzGMoYSgKdt8=";
hash = "sha256-qDc6E6d6hIHgRBNDGUHaJ7ROP24xDUXK1rkXTX187g0=";
};
nativeBuildInputs = [


@ -12,14 +12,14 @@
buildPythonPackage rec {
pname = "google-cloud-redis";
version = "2.15.0";
version = "2.15.1";
pyproject = true;
disabled = pythonOlder "3.7";
src = fetchPypi {
inherit pname version;
hash = "sha256-EyThUipPk96q5TuJDMKugFSGXDdWi0vOH5EzP2zzcyI=";
hash = "sha256-RTDYMmkRjkP5VhN74Adlvm/vpqXd9lnu3ckjmItIi+Y=";
};
nativeBuildInputs = [


@ -10,7 +10,7 @@
buildPythonPackage rec {
pname = "html-sanitizer";
version = "2.2";
version = "2.3";
format = "pyproject";
disabled = pythonOlder "3.7";
@ -19,7 +19,7 @@ buildPythonPackage rec {
owner = "matthiask";
repo = pname;
rev = "refs/tags/${version}";
hash = "sha256-WU5wdTvCzYEw1eiuTLcFImvydzxWANfmDQCmEgyU9h4=";
hash = "sha256-lQ8E3hdHX0YR3HJUTz1pVBegLo4lhvyiylLVFMDY1+s=";
};
nativeBuildInputs = [


@ -8,15 +8,15 @@
buildPythonPackage rec {
pname = "meteoalertapi";
version = "0.3.0";
version = "0.3.1";
format = "setuptools";
disabled = pythonOlder "3.7";
src = fetchFromGitHub {
owner = "rolfberkenbosch";
repo = "meteoalert-api";
rev = "v${version}";
hash = "sha256-uB2nza9fj7vOWixL4WEQX1N3i2Y80zQPM3x1+gRtg+w=";
rev = "refs/tags/v${version}";
hash = "sha256-Imb4DVcNB3QiVSCLCI+eKpfl73aMn4NIItQVf7p0H+E=";
};
propagatedBuildInputs = [


@ -41,14 +41,14 @@
buildPythonPackage rec {
pname = "spyder";
version = "5.5.0";
version = "5.5.1";
format = "setuptools";
disabled = pythonOlder "3.8";
src = fetchPypi {
inherit pname version;
hash = "sha256-zjQmUmkqwtXNnZKssNpl24p4FQscZKGiiJj5iwYl2UM=";
hash = "sha256-+z8Jj0eA/mYH1r8ZQUyYUFMk7h1mBxjoTD5YZk0cH0k=";
};
patches = [


@ -2,6 +2,7 @@
, buildPythonPackage
, fetchPypi
, lib
, ml-dtypes
, numpy
, python
, stdenv
@ -14,15 +15,17 @@ let
"aarch64-darwin" = "macosx_11_0_arm64";
};
hashes = {
"310-x86_64-linux" = "sha256-Zuy2zBLV950CMbdtpLNpIWqnXHw2jkjrZG48eGtm42w=";
"311-x86_64-linux" = "sha256-Bg5j8QB5z8Ju4bEQsZDojJHTJ4UoQF1pkd4ma83Sc/s=";
"310-aarch64-darwin" = "sha256-6Tta4ru1TnobFa4FXWz8fm9rAxF0G09Y2Pj/KaQPVnE=";
"311-aarch64-darwin" = "sha256-Sb0tv9ZPQJ4n9b0ybpjJWpreQPZvSC5Sd7CXuUwHCn0=";
"310-x86_64-linux" = "sha256-1b6w9wgT6fffTTpJ3MxdPSrFD7Xaby6prQYFljVn4x4=";
"311-x86_64-linux" = "sha256-8+HlzaxH30gB5N+ZKR0Oq+yswhq5gjiSF9jVsg8U22E=";
"312-x86_64-linux" = "sha256-e8iEQzB4D3RSXgrcPC4me/vsFKoXf1QFNZfQ7968zQE=";
"310-aarch64-darwin" = "sha256-2C60yJk/Pbx2woV7hzEmWGzNKWWnySDfTPm247PWIRA=";
"311-aarch64-darwin" = "sha256-rdLB7l/8ZYjV589qKtORiyu1rC7W30wzrsz1uihNRpk=";
"312-aarch64-darwin" = "sha256-DpbYMIbqceQeiL7PYwnvn9jLtv8EmfHXmxvPfZCw914=";
};
in
buildPythonPackage rec {
pname = "tensorstore";
version = "0.1.40";
version = "0.1.53";
format = "wheel";
# The source build involves some wonky Bazel stuff.
@ -38,7 +41,10 @@ buildPythonPackage rec {
nativeBuildInputs = lib.optionals stdenv.isLinux [ autoPatchelfHook ];
propagatedBuildInputs = [ numpy ];
propagatedBuildInputs = [
ml-dtypes
numpy
];
pythonImportsCheck = [ "tensorstore" ];


@ -8,7 +8,7 @@
buildPythonPackage rec {
pname = "tesla-fleet-api";
version = "0.2.7";
version = "0.4.0";
pyproject = true;
disabled = pythonOlder "3.10";
@ -17,7 +17,7 @@ buildPythonPackage rec {
owner = "Teslemetry";
repo = "python-tesla-fleet-api";
rev = "refs/tags/v${version}";
hash = "sha256-yYvC53uBAiLP009HdXdy+FM+tGc5CLQ8OFwP//Zk488=";
hash = "sha256-XAgZxvN6j3p607Yox5omDDOm3n8YSJFAmui8+5jqY5c=";
};
nativeBuildInputs = [


@ -23,13 +23,13 @@ assert builtins.all (x: builtins.elem x [ "node20" ]) nodeRuntimes;
buildDotnetModule rec {
pname = "github-runner";
version = "2.312.0";
version = "2.313.0";
src = fetchFromGitHub {
owner = "actions";
repo = "runner";
rev = "v${version}";
hash = "sha256-gSxo73o/5B6RsR5fNQ8pCv/adXrZdVPwFK4Sjwa3ZIQ=";
hash = "sha256-0CclkbJ8AfffdfVNXacnpgFOS+ONk6eP1LTyFa12xU4=";
leaveDotGit = true;
postFetch = ''
git -C $out rev-parse --short HEAD > $out/.git-revision


@ -19,13 +19,13 @@ let
in stdenv.mkDerivation rec {
pname = "stlink";
version = "1.7.0";
version = "1.8.0";
src = fetchFromGitHub {
owner = "stlink-org";
repo = "stlink";
rev = "v${version}";
sha256 = "03xypffpbp4imrczbxmq69vgkr7mbp0ps9dk815br5wwlz6vgygl";
sha256 = "sha256-hlFI2xpZ4ldMcxZbg/T5/4JuFFdO9THLcU0DQKSFqrw=";
};
patches = [
@ -55,7 +55,8 @@ in stdenv.mkDerivation rec {
meta = with lib; {
description = "In-circuit debug and programming for ST-Link devices";
license = licenses.bsd3;
platforms = platforms.unix;
# upstream says windows, linux/unix but dropped support for macOS in 1.8.0
platforms = platforms.linux; # not sure how to express unix without darwin
maintainers = [ maintainers.bjornfor maintainers.rongcuid ];
};
}


@ -5,15 +5,15 @@
buildGoModule rec {
pname = "opcr-policy";
version = "0.2.8";
version = "0.2.9";
src = fetchFromGitHub {
owner = "opcr-io";
repo = "policy";
rev = "v${version}";
sha256 = "sha256-JNWI7PCGuZ3uLqglrR08nOumpbX2CxyVBYbUJJwptoU=";
sha256 = "sha256-3ubbCPliBFe+sOQxAkQr4bJJiMvbDwDaJO/hOa88P5w=";
};
vendorHash = "sha256-S4HFIuWWb+7QhwUg28Kt5IEH3j82tzJv8K5EqSYq1eA=";
vendorHash = "sha256-oxcyKVdiTJYypgrBmH1poWc21xDyTBHk781TbA7i2gc=";
ldflags = [ "-s" "-w" "-X github.com/opcr-io/policy/pkg/version.ver=${version}" ];


@ -11,16 +11,16 @@
rustPlatform.buildRustPackage rec {
pname = "reindeer";
version = "unstable-2024-01-30";
version = "unstable-2024-02-03";
src = fetchFromGitHub {
owner = "facebookincubator";
repo = pname;
rev = "2fe0af4d3b637d3dfff41d26623a4596a7b5fdb0";
sha256 = "sha256-MUMqM6BZUECxdOkBdeNMEE88gJumKI/ODo+1hmmrCHY=";
rev = "8dd5629ef78d359fd8d3527157b0375762f22b1e";
sha256 = "sha256-9WmhP8CyjwohlltfmUn5m29CmBucIH+XrfVjIJX7dS8=";
};
cargoSha256 = "sha256-fquvUq9MjC7J24wuZR+voUkm3F7eMy1ELxMuELlQaus=";
cargoSha256 = "sha256-W9YA9OZu71/bSx3EwMeueVQSTExeep+UKGYCD8c4yhc=";
nativeBuildInputs = [ pkg-config ];
buildInputs =


@ -2,13 +2,13 @@
buildGoModule rec {
pname = "skaffold";
version = "2.10.0";
version = "2.10.1";
src = fetchFromGitHub {
owner = "GoogleContainerTools";
repo = "skaffold";
rev = "v${version}";
hash = "sha256-onJ/WEGsDhIfM+y3OeVbWjZSYHc7oWlkbLCrbLm8JZk=";
hash = "sha256-NNiWiTY5AHMcGxDND5QwlucYVrp94C92qtMNLrVm2tQ=";
};
vendorHash = null;


@ -8,13 +8,13 @@
}:
buildGoModule rec {
pname = "turso-cli";
version = "0.88.3";
version = "0.88.6";
src = fetchFromGitHub {
owner = "tursodatabase";
repo = "turso-cli";
rev = "v${version}";
hash = "sha256-tPeoLGYJRMXFVI09fupspdQMSMjF2Trdo2GlkoWs7wA=";
hash = "sha256-u8TZFgeDeZVRcP4ICgUrI4qhqlL1lhTSVDmWK3Ozku4=";
};
vendorHash = "sha256-rTeW2RQhcdwJTAMQELm4cdObJbm8gk/I2Qz3Wk3+zpI=";


@ -2,13 +2,13 @@
buildGoModule rec {
pname = "upbound";
version = "0.24.0";
version = "0.24.1";
src = fetchFromGitHub {
owner = pname;
repo = "up";
rev = "v${version}";
sha256 = "sha256-AtaO4O9UbA44OnZl5ARQkyPHoBYnuwdFd91DEZPN+Co=";
sha256 = "sha256-1WSkNL1XpgnkWeL4tDiOxoKX6N5LYepD3DU0109pWC4=";
};
vendorHash = "sha256-jHVwI5fQbS/FhRptRXtNezG1djaZKHJgpPJfuEH/zO0=";


@ -1,6 +1,5 @@
{ lib
, fetchzip
, libXxf86vm
, libGL
, makeWrapper
, openal
@ -48,7 +47,8 @@ stdenv.mkDerivation rec {
cp -r ./* $out/share/starsector
mkdir -p $out/share/icons/hicolor/64x64/apps
ln -s $out/graphics/ui/s_icon64.png $out/share/icons/hicolor/64x64/apps/starsector.png
ln -s $out/share/starsector/graphics/ui/s_icon64.png \
$out/share/icons/hicolor/64x64/apps/starsector.png
wrapProgram $out/share/starsector/starsector.sh \
--prefix PATH : ${lib.makeBinPath [ openjdk xorg.xrandr ]} \
@ -71,14 +71,6 @@ stdenv.mkDerivation rec {
--replace "-XX:+CompilerThreadHintNoPreempt" "-XX:+UnlockDiagnosticVMOptions -XX:-BytecodeVerificationRemote -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSConcurrentMTEnabled -XX:+DisableExplicitGC"
'';
meta = with lib; {
description = "Open-world single-player space-combat, roleplaying, exploration, and economic game";
homepage = "https://fractalsoftworks.com";
sourceProvenance = with sourceTypes; [ binaryBytecode ];
license = licenses.unfree;
maintainers = with maintainers; [ bbigras rafaelrc ];
};
passthru.updateScript = writeScript "starsector-update-script" ''
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p curl gnugrep common-updater-scripts
@ -86,4 +78,12 @@ stdenv.mkDerivation rec {
version=$(curl -s https://fractalsoftworks.com/preorder/ | grep -oP "https://f005.backblazeb2.com/file/fractalsoftworks/release/starsector_linux-\K.*?(?=\.zip)" | head -1)
update-source-version ${pname} "$version" --file=./pkgs/games/starsector/default.nix
'';
meta = with lib; {
description = "Open-world single-player space-combat, roleplaying, exploration, and economic game";
homepage = "https://fractalsoftworks.com";
sourceProvenance = with sourceTypes; [ binaryBytecode ];
license = licenses.unfree;
maintainers = with maintainers; [ bbigras rafaelrc ];
};
}

View File

@ -140,6 +140,16 @@ let
};
});
anova-wifi = super.anova-wifi.overridePythonAttrs (old: rec {
version = "0.10.3";
src = fetchFromGitHub {
owner = "Lash-L";
repo = "anova_wifi";
rev = "refs/tags/v${version}";
hash = "sha256-tCmvp29KSCkc+g0w0odcB7vGjtDx6evac7XsHEF0syM=";
};
});
astral = super.astral.overridePythonAttrs (oldAttrs: rec {
pname = "astral";
version = "2.2";


@ -6,13 +6,13 @@
buildGoModule rec {
pname = "pocketbase";
version = "0.21.1";
version = "0.21.2";
src = fetchFromGitHub {
owner = "pocketbase";
repo = "pocketbase";
rev = "v${version}";
hash = "sha256-f8lqDYu2tlwp+/00QaHfXvUO3CZuDWMpdVBrUW3bbio=";
hash = "sha256-EOj+x6n0ww6al57X4mDM4T9/3Za5w8N/Bno5Trlb5dY=";
};
vendorHash = "sha256-u7VgZkv9Ajtra9ikeIxJRLZleH+rzs1g2SZO9zj/bes=";


@ -4,13 +4,13 @@ let
pythonEnv = python3.withPackages(ps: with ps; [ cheetah3 lxml ]);
in stdenv.mkDerivation rec {
pname = "sickgear";
version = "3.30.8";
version = "3.30.9";
src = fetchFromGitHub {
owner = "SickGear";
repo = "SickGear";
rev = "release_${version}";
hash = "sha256-U/lzTzvvMdnid2AHJ6fK3GHVqsr1h7X330RkR4jNTuQ=";
hash = "sha256-Ik+A7CqSRsXPzqbgmwpam7v2hyj6BweyWJnF5ix/JNg=";
};
patches = [


@ -1,5 +1,5 @@
# Takes a path to nixpkgs and a path to the json-encoded list of attributes to check.
# Returns an value containing information on each requested attribute,
# Takes a path to nixpkgs and a path to the json-encoded list of `pkgs/by-name` attributes.
# Returns a value containing information on all Nixpkgs attributes
# which is decoded on the Rust side.
# See ./eval.rs for the meaning of the returned values
{
@ -9,33 +9,28 @@
let
attrs = builtins.fromJSON (builtins.readFile attrsPath);
nixpkgsPathLength = builtins.stringLength (toString nixpkgsPath) + 1;
removeNixpkgsPrefix = builtins.substring nixpkgsPathLength (-1);
# We need access to the `callPackage` arguments of each attribute.
# The only way to do so is to override `callPackage` with our own version that adds this information to the result,
# and then try to access this information.
# We need to check whether attributes are defined manually e.g. in
# `all-packages.nix`, automatically by the `pkgs/by-name` overlay, or
# neither. The only way to do so is to override `callPackage` and
# `_internalCallByNamePackageFile` with our own version that adds this
# information to the result, and then try to access it.
overlay = final: prev: {
# Information for attributes defined using `callPackage`
# Adds information to each attribute about whether it's manually defined using `callPackage`
callPackage = fn: args:
addVariantInfo (prev.callPackage fn args) {
Manual = {
path =
if builtins.isPath fn then
removeNixpkgsPrefix (toString fn)
else
null;
empty_arg =
args == { };
};
# This is a manual definition of the attribute, and it's a callPackage, specifically a semantic callPackage
ManualDefinition.is_semantic_call_package = true;
};
# Information for attributes that are auto-called from pkgs/by-name.
# This internal attribute is only used by pkgs/by-name
# Adds information to each attribute about whether it's automatically
# defined by the `pkgs/by-name` overlay. This internal attribute is only
# used by that overlay.
# This overrides the above `callPackage` information (we don't need that
# one, since `pkgs/by-name` always uses `callPackage` underneath).
_internalCallByNamePackageFile = file:
addVariantInfo (prev._internalCallByNamePackageFile file) {
Auto = null;
AutoDefinition = null;
};
};
@ -50,7 +45,7 @@ let
else
# It's very rare that callPackage doesn't return an attribute set, but it can occur.
# In such a case we can't really return anything sensible that would include the info,
# so just don't return the info and let the consumer handle it.
# so just don't return the value directly and treat it as if it wasn't a callPackage.
value;
pkgs = import nixpkgsPath {
@ -62,30 +57,33 @@ let
system = "x86_64-linux";
};
attrInfo = name: value:
if ! builtins.isAttrs value then
{
NonAttributeSet = null;
}
else if ! value ? _callPackageVariant then
{
NonCallPackage = null;
}
else
{
CallPackage = {
call_package_variant = value._callPackageVariant;
is_derivation = pkgs.lib.isDerivation value;
location = builtins.unsafeGetAttrPos name pkgs;
# See AttributeInfo in ./eval.rs for the meaning of this
attrInfo = name: value: {
location = builtins.unsafeGetAttrPos name pkgs;
attribute_variant =
if ! builtins.isAttrs value then
{ NonAttributeSet = null; }
else
{
AttributeSet = {
is_derivation = pkgs.lib.isDerivation value;
definition_variant =
if ! value ? _callPackageVariant then
{ ManualDefinition.is_semantic_call_package = false; }
else
value._callPackageVariant;
};
};
};
};
# Information on all attributes that are in pkgs/by-name.
byNameAttrs = builtins.listToAttrs (map (name: {
inherit name;
value.ByName =
if ! pkgs ? ${name} then
{ Missing = null; }
else
# Evaluation failures are not allowed, so don't try to catch them
{ Existing = attrInfo name pkgs.${name}; };
}) attrs);
@ -93,6 +91,8 @@ let
# We need this to enforce pkgs/by-name for new packages
nonByNameAttrs = builtins.mapAttrs (name: value:
let
# Packages outside `pkgs/by-name` often fail evaluation,
# so we need to handle that
output = attrInfo name value;
result = builtins.tryEval (builtins.deepSeq output null);
in
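
The comments in eval.nix above describe the core trick: override `callPackage` (and `_internalCallByNamePackageFile`) in an overlay so that every result carries a marker saying how it was defined. The following is a minimal, self-contained sketch of that idea rather than the actual eval.nix code: the simplified `addVariantInfo`, the tiny demo package, and the use of `pkgs.extend` are assumptions for illustration only.

```nix
let
  pkgs = import <nixpkgs> { };

  # Simplified version of the helper used above: tag attribute sets with how they
  # were defined, and pass everything else through untouched.
  addVariantInfo = value: variant:
    if builtins.isAttrs value
    then value // { _callPackageVariant = variant; }
    else value;

  overlay = final: prev: {
    # Every manual `callPackage` now records that it was a semantic callPackage.
    callPackage = fn: args:
      addVariantInfo (prev.callPackage fn args) {
        ManualDefinition.is_semantic_call_package = true;
      };
  };

  tagged = pkgs.extend overlay;
in
# Evaluate with: nix-instantiate --eval --strict sketch.nix
(tagged.callPackage ({ stdenv }: stdenv.mkDerivation { name = "demo"; }) { })._callPackageVariant
```

The checker then only has to read `_callPackageVariant` off each attribute (as `attrInfo` does above) to tell manual definitions, automatic `pkgs/by-name` definitions, and plain non-callPackage values apart.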

View File

@ -37,24 +37,13 @@ enum ByNameAttribute {
}
#[derive(Deserialize)]
enum AttributeInfo {
/// The attribute exists, but its value isn't an attribute set
NonAttributeSet,
/// The attribute exists, but its value isn't defined using callPackage
NonCallPackage,
/// The attribute exists and its value is an attribute set
CallPackage(CallPackageInfo),
}
#[derive(Deserialize)]
struct CallPackageInfo {
call_package_variant: CallPackageVariant,
/// Whether the attribute is a derivation (`lib.isDerivation`)
is_derivation: bool,
struct AttributeInfo {
/// The location of the attribute as returned by `builtins.unsafeGetAttrPos`
location: Option<Location>,
attribute_variant: AttributeVariant,
}
/// The structure returned by `builtins.unsafeGetAttrPos`
/// The structure returned by a successful `builtins.unsafeGetAttrPos`
#[derive(Deserialize, Clone, Debug)]
struct Location {
pub file: PathBuf,
@ -63,17 +52,28 @@ struct Location {
}
#[derive(Deserialize)]
enum CallPackageVariant {
/// The attribute is auto-called as pkgs.callPackage using pkgs/by-name,
/// and it is not overridden by a definition in all-packages.nix
Auto,
/// The attribute is defined as a pkgs.callPackage <path> <args>,
/// and overridden by all-packages.nix
Manual {
/// The <path> argument or None if it's not a path
path: Option<PathBuf>,
/// true if <args> is { }
empty_arg: bool,
pub enum AttributeVariant {
/// The attribute is not an attribute set, we're limited in the amount of information we can get
/// from it (though it's obviously not a derivation)
NonAttributeSet,
AttributeSet {
/// Whether the attribute is a derivation (`lib.isDerivation`)
is_derivation: bool,
/// The type of callPackage
definition_variant: DefinitionVariant,
},
}
#[derive(Deserialize)]
pub enum DefinitionVariant {
/// An automatic definition by the `pkgs/by-name` overlay
/// Though it's detected using the internal _internalCallByNamePackageFile attribute,
/// which can in theory also be used by other code
AutoDefinition,
/// A manual definition of the attribute, typically in `all-packages.nix`
ManualDefinition {
/// Whether the attribute is defined as `pkgs.callPackage ...` or something else.
is_semantic_call_package: bool,
},
}
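
The Nix expression and these serde types meet in the middle: `attrInfo` in eval.nix emits attribute sets whose shape matches serde's default externally tagged enum encoding for `AttributeVariant` and `DefinitionVariant`. A plausible value for one manually defined derivation is sketched below; the file path, line, and column are made-up placeholders.

```nix
# Sketch of one per-attribute value produced by eval.nix (placeholder location):
{
  location = {
    file = "/path/to/nixpkgs/pkgs/top-level/all-packages.nix"; # placeholder
    line = 1234;
    column = 3;
  };
  attribute_variant = {
    AttributeSet = {
      is_derivation = true;
      definition_variant = {
        ManualDefinition.is_semantic_call_package = true;
      };
    };
  };
}
# A non-attribute-set value instead yields `attribute_variant = { NonAttributeSet = null; };`,
# and an automatic pkgs/by-name definition yields `definition_variant = { AutoDefinition = null; };`.
```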
@ -165,9 +165,12 @@ pub fn check_values(
nix_file_store,
non_by_name_attribute,
)?,
Attribute::ByName(by_name_attribute) => {
by_name(&attribute_name, by_name_attribute)
}
Attribute::ByName(by_name_attribute) => by_name(
nix_file_store,
nixpkgs_path,
&attribute_name,
by_name_attribute,
)?,
};
Ok::<_, anyhow::Error>(check_result.map(|value| (attribute_name.clone(), value)))
})
@ -183,78 +186,168 @@ pub fn check_values(
/// Handles the evaluation result for an attribute in `pkgs/by-name`,
/// turning it into a validation result.
fn by_name(
nix_file_store: &mut NixFileStore,
nixpkgs_path: &Path,
attribute_name: &str,
by_name_attribute: ByNameAttribute,
) -> validation::Validation<ratchet::Package> {
) -> validation::Result<ratchet::Package> {
use ratchet::RatchetState::*;
use AttributeInfo::*;
use ByNameAttribute::*;
use CallPackageVariant::*;
let relative_package_file = structure::relative_file_for_package(attribute_name);
let absolute_package_file = nixpkgs_path.join(&relative_package_file);
match by_name_attribute {
Missing => NixpkgsProblem::UndefinedAttr {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
// At this point we know that `pkgs/by-name/fo/foo/package.nix` has to exist.
// This match decides whether the attribute `foo` is defined accordingly
// and whether a legacy manual definition could be removed
let manual_definition_result = match by_name_attribute {
// The attribute is missing
Missing => {
// This indicates a bug in the `pkgs/by-name` overlay, because it's supposed to
// automatically define attributes in `pkgs/by-name`
NixpkgsProblem::UndefinedAttr {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
}
.into()
}
.into(),
Existing(NonAttributeSet) => NixpkgsProblem::NonDerivation {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
// The attribute exists
Existing(AttributeInfo {
// But it's not an attribute set, which limits the amount of information we can get
// about this attribute (see ./eval.nix)
attribute_variant: AttributeVariant::NonAttributeSet,
location: _location,
}) => {
// The only thing we know is that it's definitely not a derivation, since those are
// always attribute sets.
//
// We can't know whether the attribute is automatically or manually defined for sure,
// and while we could check the location, the error seems clear enough as is.
NixpkgsProblem::NonDerivation {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
}
.into()
}
.into(),
Existing(NonCallPackage) => NixpkgsProblem::WrongCallPackage {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
}
.into(),
Existing(CallPackage(CallPackageInfo {
is_derivation,
call_package_variant,
..
})) => {
let check_result = if !is_derivation {
// The attribute exists
Existing(AttributeInfo {
// And it's an attribute set, which allows us to get more information about it
attribute_variant:
AttributeVariant::AttributeSet {
is_derivation,
definition_variant,
},
location,
}) => {
// Only derivations are allowed in `pkgs/by-name`
let is_derivation_result = if is_derivation {
Success(())
} else {
NixpkgsProblem::NonDerivation {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
}
.into()
} else {
Success(())
};
check_result.and(match &call_package_variant {
Auto => Success(ratchet::Package {
manual_definition: Tight,
uses_by_name: Tight,
}),
// TODO: Use the call_package_argument_info_at instead/additionally and
// simplify the eval.nix code
Manual { path, empty_arg } => {
let correct_file = if let Some(call_package_path) = path {
relative_package_file == *call_package_path
// If the definition looks correct
let variant_result = match definition_variant {
// An automatic `callPackage` by the `pkgs/by-name` overlay.
// Though this gets detected by checking whether the internal
// `_internalCallByNamePackageFile` was used
DefinitionVariant::AutoDefinition => {
if let Some(_location) = location {
// Such an automatic definition should definitely not have a location
// Having one indicates that somebody is using `_internalCallByNamePackageFile` manually.
NixpkgsProblem::InternalCallPackageUsed {
attr_name: attribute_name.to_owned(),
}
.into()
} else {
false
};
Success(Tight)
}
}
// The attribute is manually defined, e.g. in `all-packages.nix`.
// This means we need to enforce it to look like this:
// callPackage ../pkgs/by-name/fo/foo/package.nix { ... }
DefinitionVariant::ManualDefinition {
is_semantic_call_package,
} => {
// We should expect manual definitions to have a location, otherwise we can't
// enforce the expected format
if let Some(location) = location {
// Parse the Nix file in the location and figure out whether it's an
// attribute definition of the form `= callPackage <arg1> <arg2>`,
// returning the arguments if so.
let optional_syntactic_call_package = nix_file_store
.get(&location.file)?
.call_package_argument_info_at(
location.line,
location.column,
// We're passing `pkgs/by-name/fo/foo/package.nix` here, which causes
// the function to verify that `<arg1>` is the same path,
// making `syntactic_call_package.relative_path` end up as `""`
// TODO: This is confusing and should be improved
&absolute_package_file,
)?;
if correct_file {
Success(ratchet::Package {
// Empty arguments for non-auto-called packages are not allowed anymore.
manual_definition: if *empty_arg { Loose(()) } else { Tight },
uses_by_name: Tight,
})
// At this point, we completed two different checks for whether it's a
// `callPackage`
match (is_semantic_call_package, optional_syntactic_call_package) {
// Something like `<attr> = { ... }`
// or a `pkgs.callPackage` but with the wrong file
(false, None)
// Something like `<attr> = pythonPackages.callPackage ./pkgs/by-name/...`
| (false, Some(_))
// Something like `<attr> = bar` where `bar = pkgs.callPackage ...`
// or a `callPackage` but with the wrong file
| (true, None) => {
// All of these are not of the expected form, so error out
// TODO: Make error more specific, don't lump everything together
NixpkgsProblem::WrongCallPackage {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
}.into()
}
// Something like `<attr> = pkgs.callPackage ./pkgs/by-name/...`,
// with the correct file
(true, Some(syntactic_call_package)) => {
Success(
// Manual definitions with empty arguments are not allowed
// anymore
if syntactic_call_package.empty_arg {
Loose(())
} else {
Tight
}
)
}
}
} else {
NixpkgsProblem::WrongCallPackage {
relative_package_file: relative_package_file.to_owned(),
package_name: attribute_name.to_owned(),
// If manual definitions don't have a location, it's likely `mapAttrs`'d
// over, e.g. if it's defined in aliases.nix.
// We can't verify whether it's of the expected `callPackage` form, so error out
NixpkgsProblem::CannotDetermineAttributeLocation {
attr_name: attribute_name.to_owned(),
}
.into()
}
}
})
};
// Independently report problems about whether it's a derivation and the callPackage variant
is_derivation_result.and(variant_result)
}
}
};
Ok(
// Packages being checked in this function are _always_ already defined in `pkgs/by-name`,
// so instead of repeating ourselves all the time to define `uses_by_name`, just set it
// once at the end with a map
manual_definition_result.map(|manual_definition| ratchet::Package {
manual_definition,
uses_by_name: Tight,
}),
)
}
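
To make the combined semantic/syntactic check above easier to follow, here is roughly what it means for a package `foo` whose file lives at `pkgs/by-name/fo/foo/package.nix` but which still has a manual definition in `all-packages.nix`; `foo` and `someFlag` are illustrative names only.

```nix
# Illustrative definitions in pkgs/top-level/all-packages.nix:

# Accepted: a callPackage of the expected file with a non-empty argument set.
foo = callPackage ../by-name/fo/foo/package.nix { someFlag = true; };

# Rejected as WrongCallPackage: not a callPackage, a callPackage through another
# package set, or a callPackage pointing at some other file.
#   foo = { };
#   foo = pythonPackages.callPackage ../by-name/fo/foo/package.nix { };
#   foo = callPackage ../development/tools/foo { };

# Loosened ratchet state: the expected callPackage, but with an empty argument set.
# Such definitions are only tolerated where they already exist; they should simply
# be removed, since the pkgs/by-name overlay defines the attribute automatically.
#   foo = callPackage ../by-name/fo/foo/package.nix { };
```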
/// Handles the evaluation result for an attribute _not_ in `pkgs/by-name`,
@ -265,113 +358,117 @@ fn handle_non_by_name_attribute(
non_by_name_attribute: NonByNameAttribute,
) -> validation::Result<ratchet::Package> {
use ratchet::RatchetState::*;
use AttributeInfo::*;
use CallPackageVariant::*;
use NonByNameAttribute::*;
Ok(match non_by_name_attribute {
// The attribute succeeds evaluation and is NOT defined in pkgs/by-name
EvalSuccess(attribute_info) => {
let uses_by_name = match attribute_info {
// In these cases the package doesn't qualify for being in pkgs/by-name,
// so the UsesByName ratchet is already as tight as it can be
NonAttributeSet => Success(NonApplicable),
NonCallPackage => Success(NonApplicable),
// This is the case when the `pkgs/by-name`-internal _internalCallByNamePackageFile
// is used for a package outside `pkgs/by-name`
CallPackage(CallPackageInfo {
call_package_variant: Auto,
..
}) => {
// With the current detection mechanism, this also triggers for aliases
// to pkgs/by-name packages, and there's no good method of
// distinguishing alias vs non-alias.
// Using `config.allowAliases = false` at least currently doesn't work
// because there's nothing preventing people from defining aliases that
// are present even with that disabled.
// In the future we could kind of abuse this behavior to have better
// enforcement of conditional aliases, but for now we just need to not
// give an error.
Success(NonApplicable)
}
// Only derivations can be in pkgs/by-name,
// so this attribute doesn't qualify
CallPackage(CallPackageInfo {
is_derivation: false,
..
}) => Success(NonApplicable),
// A location of None indicates something weird, we can't really know where
// this attribute is defined, probably an alias
CallPackage(CallPackageInfo { location: None, .. }) => Success(Tight),
// The case of an attribute that qualifies:
// - Uses callPackage
// - Is a derivation
CallPackage(CallPackageInfo {
is_derivation: true,
call_package_variant: Manual { .. },
location: Some(location),
}) =>
// We'll use the attribute's location to parse the file that defines it
{
match nix_file_store
.get(&location.file)?
.call_package_argument_info_at(
location.line,
location.column,
nixpkgs_path,
)? {
// If the definition is not of the form `<attr> = callPackage <arg1> <arg2>;`,
// it's generally not possible to migrate to `pkgs/by-name`
None => Success(NonApplicable),
Some(call_package_argument_info) => {
if let Some(ref rel_path) = call_package_argument_info.relative_path {
if rel_path.starts_with(utils::BASE_SUBPATH) {
// Package variants of by-name packages are explicitly allowed according to RFC 140
// https://github.com/NixOS/rfcs/blob/master/rfcs/0140-simple-package-paths.md#package-variants:
//
// foo-variant = callPackage ../by-name/fo/foo/package.nix {
// someFlag = true;
// }
//
// While such definitions could be moved to `pkgs/by-name` by using
// `.override { someFlag = true; }` instead, this changes the semantics in
// relation with overlays.
Success(NonApplicable)
} else {
Success(Loose(call_package_argument_info))
}
} else {
Success(Loose(call_package_argument_info))
}
}
// The ratchet state of whether this attribute uses `pkgs/by-name`.
// This is never `Tight`, because we only either:
// - Know that the attribute _could_ be migrated to `pkgs/by-name`, which is `Loose`
// - Or we're unsure, in which case we use NonApplicable
let uses_by_name =
// This is a big ol' match on various properties of the attribute
// First, it needs to succeed evaluation. We can't know whether an attribute could be
// migrated to `pkgs/by-name` if it doesn't evaluate, since we need to check that it's a
// derivation.
//
// This only has the minor negative effect that if a PR that breaks evaluation
// gets merged, fixing those failures won't force anything into `pkgs/by-name`.
//
// For now this isn't our problem, but in the future we
// might have another check to enforce that evaluation must not be broken.
//
// The alternative of assuming that failing attributes would have been fit for `pkgs/by-name`
// has the problem that if a package evaluation gets broken temporarily,
// fixing it requires a move to pkgs/by-name, which could happen more
// often and isn't really justified.
if let EvalSuccess(AttributeInfo {
// We're only interested in attributes that are attribute sets (which includes
// derivations). Anything else can't be in `pkgs/by-name`.
attribute_variant: AttributeVariant::AttributeSet {
// Indeed, we only care about derivations, non-derivation attribute sets can't be
// in `pkgs/by-name`
is_derivation: true,
// Of the two definition variants, really only the manual one makes sense here.
// Special cases are:
// - Manual aliases to auto-called packages are not treated as manual definitions,
// due to limitations in the semantic callPackage detection. So those should be
// ignored.
// - Manual definitions using the internal _internalCallByNamePackageFile are
// not treated as manual definitions, since _internalCallByNamePackageFile is
// used to detect automatic ones. We can't distinguish from the above case, so we
// just need to ignore this one too, even if that internal attribute should never
// be called manually.
definition_variant: DefinitionVariant::ManualDefinition { is_semantic_call_package }
},
// We need the location of the manual definition, because otherwise
// we can't figure out whether it's a syntactic callPackage
location: Some(location),
}) = non_by_name_attribute {
// Parse the Nix file in the location and figure out whether it's an
// attribute definition of the form `= callPackage <arg1> <arg2>`,
// returning the arguments if so.
let optional_syntactic_call_package = nix_file_store
.get(&location.file)?
.call_package_argument_info_at(
location.line,
location.column,
// Passing the Nixpkgs path here both checks that the <arg1> is within Nixpkgs, and
// strips the absolute Nixpkgs path from it, such that
// syntactic_call_package.relative_path is relative to Nixpkgs
nixpkgs_path
)?;
// At this point, we completed two different checks for whether it's a
// `callPackage`
match (is_semantic_call_package, optional_syntactic_call_package) {
// Something like `<attr> = { }`
(false, None)
// Something like `<attr> = pythonPackages.callPackage ...`
| (false, Some(_))
// Something like `<attr> = bar` where `bar = pkgs.callPackage ...`
| (true, None) => {
// In all of these cases, it's not possible to migrate the package to `pkgs/by-name`
NonApplicable
}
// Something like `<attr> = pkgs.callPackage ...`
(true, Some(syntactic_call_package)) => {
// It's only possible to migrate such a definition if..
match syntactic_call_package.relative_path {
Some(ref rel_path) if rel_path.starts_with(utils::BASE_SUBPATH) => {
// ..the path is not already within `pkgs/by-name` like
//
// foo-variant = callPackage ../by-name/fo/foo/package.nix {
// someFlag = true;
// }
//
// While such definitions could be moved to `pkgs/by-name` by using
// `.override { someFlag = true; }` instead, this changes the semantics in
// relation with overlays, so migration is generally not possible.
//
// See also "package variants" in RFC 140:
// https://github.com/NixOS/rfcs/blob/master/rfcs/0140-simple-package-paths.md#package-variants
NonApplicable
}
_ => {
// Otherwise, the path is outside `pkgs/by-name`, which means it can be
// migrated
Loose(syntactic_call_package)
}
}
};
uses_by_name.map(|x| ratchet::Package {
manual_definition: Tight,
uses_by_name: x,
})
}
}
EvalFailure => {
// We don't know anything about this attribute really
Success(ratchet::Package {
// We'll assume that we can't remove any manual definitions, which has the
// minimal drawback that if there was a manual definition that could've
// been removed, fixing the package requires removing the definition, no
// big deal, that's a minor edit.
manual_definition: Tight,
// Regarding whether this attribute could be in `pkgs/by-name`, we don't really
// know, so return NonApplicable, which has the effect that if a
// package evaluation gets broken temporarily, the fix can remove it from
// pkgs/by-name again. For now this isn't our problem, but in the future we
// might have another check to enforce that evaluation must not be broken.
// The alternative of assuming that it's using `pkgs/by-name` already
// has the problem that if a package evaluation gets broken temporarily,
// fixing it requires a move to pkgs/by-name, which could happen more
// often and isn't really justified.
uses_by_name: NonApplicable,
})
}
})
} else {
// This catches all the cases not matched by the above `if let`, falling back to not being
// able to migrate such attributes
NonApplicable
};
Ok(Success(ratchet::Package {
// Packages being checked in this function _always_ need a manual definition, because
// they're not using `pkgs/by-name`, which would allow avoiding it,
// so instead of repeating ourselves all the time to define `manual_definition`,
// just set it once at the end here
manual_definition: Tight,
uses_by_name,
}))
}
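
The package-variant carve-out referenced above (RFC 140) is easiest to see side by side. The sketch below contrasts a variant that intentionally stays outside `pkgs/by-name` with the `.override` form that would change overlay semantics; `foo`, `foo-variant`, and `someFlag` are illustrative names.

```nix
# Illustrative definitions in pkgs/top-level/all-packages.nix:

# Package variant per RFC 140: re-calls the by-name file with different arguments.
# The check above treats this as NonApplicable, so it is not forced into pkgs/by-name.
foo-variant = callPackage ../by-name/fo/foo/package.nix { someFlag = true; };

# The seemingly equivalent override behaves differently with overlays:
# an overlay that replaces `foo` also changes this variant, but not the
# re-called variant above, which keeps reading the file directly.
foo-variant-via-override = foo.override { someFlag = true; };
```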

View File

@ -92,6 +92,12 @@ pub enum NixpkgsProblem {
call_package_path: Option<PathBuf>,
empty_arg: bool,
},
InternalCallPackageUsed {
attr_name: String,
},
CannotDetermineAttributeLocation {
attr_name: String,
},
}
impl fmt::Display for NixpkgsProblem {
@ -252,6 +258,16 @@ impl fmt::Display for NixpkgsProblem {
structure::relative_file_for_package(package_name).display(),
)
},
}
NixpkgsProblem::InternalCallPackageUsed { attr_name } =>
write!(
f,
"pkgs.{attr_name}: This attribute is defined using `_internalCallByNamePackageFile`, which is an internal function not intended for manual use.",
),
NixpkgsProblem::CannotDetermineAttributeLocation { attr_name } =>
write!(
f,
"pkgs.{attr_name}: Cannot determine the location of this attribute using `builtins.unsafeGetAttrPos`.",
),
}
}
}

View File

@ -1,3 +1,4 @@
self: super: {
foo = self._internalCallByNamePackageFile ./foo.nix;
bar = self._internalCallByNamePackageFile ./foo.nix;
}

View File

@ -0,0 +1 @@
pkgs.foo: This attribute is defined using `_internalCallByNamePackageFile`, which is an internal function not intended for manual use.

View File

@ -0,0 +1 @@
{ someDrv }: someDrv

View File

@ -0,0 +1,6 @@
self: super: {
bar = (x: x) self.callPackage ./pkgs/by-name/fo/foo/package.nix { someFlag = true; };
foo = self.bar;
}

View File

@ -0,0 +1 @@
import <test-nixpkgs> { root = ./.; }

View File

@ -0,0 +1 @@
pkgs.foo: This attribute is manually defined (most likely in pkgs/top-level/all-packages.nix), which is only allowed if the definition is of the form `pkgs.callPackage pkgs/by-name/fo/foo/package.nix { ... }` with a non-empty second argument.

View File

@ -0,0 +1 @@
{ someDrv, someFlag }: someDrv

View File

@ -0,0 +1,3 @@
self: super: builtins.mapAttrs (name: value: value) {
foo = self.someDrv;
}

View File

@ -0,0 +1 @@
import <test-nixpkgs> { root = ./.; }

View File

@ -0,0 +1 @@
pkgs.foo: Cannot determine the location of this attribute using `builtins.unsafeGetAttrPos`.

View File

@ -0,0 +1 @@
{ someDrv }: someDrv

View File

@ -7,16 +7,16 @@
buildGoModule rec {
pname = "gh-dash";
version = "3.12.0";
version = "3.13.0";
src = fetchFromGitHub {
owner = "dlvhdr";
repo = "gh-dash";
rev = "v${version}";
hash = "sha256-ijqEsjBNncrtg1DaVvwH2gxTgB3QOJCF1RxetnPBVII=";
hash = "sha256-JbKDzRpOaiieTPs8rbFUApcPvkYEF0Gq8AHboALCEcA=";
};
vendorHash = "sha256-ezxwUfI8FevfeRmXN4Og9Gfw1GX9noagzWIg6GSPOPc=";
vendorHash = "sha256-+H94d7OBYQ8vh302xyj3LeCuU78OBv7l0nxC9Cg07uk=";
ldflags = [
"-s"

View File

@ -5,14 +5,14 @@
rustPlatform.buildRustPackage rec {
pname = "grass";
version = "0.13.1";
version = "0.13.2";
src = fetchCrate {
inherit pname version;
hash = "sha256-IJ8kiSvuKR9f3I7TdE263cnQiARzDzfj30uL1PzdZ1s=";
hash = "sha256-JFfNj+IMwIZ+DkaCy3mobSAaq4YphhMpGkx/P33UdJE=";
};
cargoHash = "sha256-WRXoXB/HJkAnUKboCR9Gl2Au/1EivYQhF5rKr7PFe+s=";
cargoHash = "sha256-WzG+yOjxTX2ms2JMpZJYcaKZw0gc9g6/OUe/T7oyK20=";
# tests require rust nightly
doCheck = false;

View File

@ -5,16 +5,16 @@
buildNpmPackage rec {
pname = "cdxgen";
version = "10.0.4";
version = "10.0.5";
src = fetchFromGitHub {
owner = "AppThreat";
repo = pname;
rev = "v${version}";
sha256 = "sha256-P4F1nCMWvzy65iOYVs7AkC93cftN1Z/BSFsJxOEcQp4=";
sha256 = "sha256-0cRJdhP0OtzaV2NqRfoYz+Gkl+N3/REbPiOh0jQySK8=";
};
npmDepsHash = "sha256-9T6Dm8IxL8LB8Qx1wRaog6ZDRF6xYO+GteTrhjjxtns=";
npmDepsHash = "sha256-AlO3AC03JVTbgqdFSJb2L/QYuMQxjqzGGZYapte0uxc=";
dontNpmBuild = true;

View File

@ -7,13 +7,13 @@
buildGoModule rec {
pname = "grype";
version = "0.74.4";
version = "0.74.5";
src = fetchFromGitHub {
owner = "anchore";
repo = pname;
rev = "refs/tags/v${version}";
hash = "sha256-jBBiwsmQDbzay2C6uLM2uzPvTbD+3t8+jyBkEfHwohQ=";
hash = "sha256-h68LfKQG5xgFIFkyuK9Z6tw8+xoimnF2d2QgTjwU74U=";
# populate values that require us to use git. By doing this in postFetch we
# can delete .git afterwards and maintain better reproducibility of the src.
leaveDotGit = true;
@ -28,7 +28,7 @@ buildGoModule rec {
proxyVendor = true;
vendorHash = "sha256-w0dqgyJvn7UZYoUII9jxTuiBOq+HENaQlxfP+rZdpS0=";
vendorHash = "sha256-lnOF3Xvjc20aFPOf9of3n+aBHvPrLTTlH7aPPlYA/RA=";
nativeBuildInputs = [
installShellFiles

View File

@ -11,13 +11,13 @@
stdenv.mkDerivation rec {
pname = "mokutil";
version = "0.6.0";
version = "0.7.0";
src = fetchFromGitHub {
owner = "lcp";
repo = pname;
rev = version;
sha256 = "sha256-qwSEv14mMpaKmm6RM882JzEnBQG3loqsoglg4qTFWUg=";
sha256 = "sha256-PB/VwOJD0DxAioPDYfk2ZDzcN+pSXfUC86hGq2kYhts=";
};
nativeBuildInputs = [

View File

@ -7,13 +7,13 @@
buildGoModule rec {
pname = "trufflehog";
version = "3.67.2";
version = "3.67.4";
src = fetchFromGitHub {
owner = "trufflesecurity";
repo = "trufflehog";
rev = "refs/tags/v${version}";
hash = "sha256-LEwrYzbUHyiLLfOu76zuQA5QwCRv2qV0Pf6pjpM/q0c=";
hash = "sha256-SdOXHsd10nKD8Am5v3WUrptsHbUOe07i1bNwrHhWKpM=";
};
vendorHash = "sha256-tYW6MP1ayF6ExM1XQVA6AeRzXNdqzQLeYIqo85jKLz4=";

View File

@ -2,16 +2,16 @@
rustPlatform.buildRustPackage rec {
pname = "riffdiff";
version = "2.30.0";
version = "2.30.1";
src = fetchFromGitHub {
owner = "walles";
repo = "riff";
rev = version;
hash = "sha256-P+Q0KrJSXc26LcIHFzzypMwjWsJvYGYFZ/6RsB+ELTA=";
hash = "sha256-+bYQrZBbMnlDRZBM252i3dvSpLfW/ys4bBe9mDCvHuU=";
};
cargoHash = "sha256-fhJZMvxGjNfhHP3vMVYUYpA4i5r7w0B0TXaxDZ5Z2YY=";
cargoHash = "sha256-aJc3OcnSE4xo8FdSVt4YYX3i5NZT9GaczlFrbCw+iRo=";
meta = with lib; {
description = "A diff filter highlighting which line parts have changed";

View File

@ -46,11 +46,11 @@ in
stdenv.mkDerivation (finalAttrs: {
pname = "sile";
version = "0.14.16";
version = "0.14.17";
src = fetchurl {
url = "https://github.com/sile-typesetter/sile/releases/download/v${finalAttrs.version}/sile-${finalAttrs.version}.tar.xz";
sha256 = "sha256-z5dYW33Pd9meMo9s3OcaQHAyT+AB94dvcw+gTGySOFc=";
sha256 = "sha256-f4m+3s7au1FoJQrZ3YDAntKJyOiMPQ11bS0dku4GXgQ=";
};
configureFlags = [

View File

@ -25928,12 +25928,12 @@ with pkgs;
# Steel Bank Common Lisp
sbcl_2_4_0 = wrapLisp {
pkg = callPackage ../by-name/sb/sbcl/package.nix { version = "2.4.0"; };
pkg = callPackage ../development/compilers/sbcl { version = "2.4.0"; };
faslExt = "fasl";
flags = [ "--dynamic-space-size" "3000" ];
};
sbcl_2_4_1 = wrapLisp {
pkg = callPackage ../by-name/sb/sbcl/package.nix { version = "2.4.1"; };
pkg = callPackage ../development/compilers/sbcl { version = "2.4.1"; };
faslExt = "fasl";
flags = [ "--dynamic-space-size" "3000" ];
};