To wait for the Docker daemon, curl requests are sent. However, if an
HTTP proxy is set, the proxy responds instead of the Docker daemon.
To avoid this, we now run a `docker ps` command instead of a curl command.
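A minimal sketch of such a readiness check (the retry count and interval are illustrative):

```sh
# Poll the daemon with `docker ps`; unlike a curl request, this goes
# through the docker client to the daemon socket and is not answered
# by an HTTP proxy.
tries=60
until docker ps >/dev/null 2>&1; do
  tries=$((tries - 1))
  if [ "$tries" -le 0 ]; then
    echo "docker daemon did not come up in time" >&2
    exit 1
  fi
  sleep 1
done
```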
The image JSON is not exactly the same as the layer JSON, so I changed
the implementation to use `baseJson`, which doesn’t include
layer-specific details such as `id`, `size`, or the checksum of the layer.
The `history` entry was also missing from the image JSON. I’m not totally
sure whether this field is required, but I got an error from a Docker
registry when I tried to fetch the distribution manifest of an
image without the `history` entry:
GET `http://<registry-host>/v2/<imageName>/manifests/<imageTag>`
```json
{
  "errors": [
    {
      "code": "MANIFEST_INVALID",
      "message": "manifest invalid",
      "detail": {}
    }
  ]
}
```
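For reference, a `history` entry in the image JSON has roughly this shape (a sketch based on the Docker image spec v1; the timestamp and command are illustrative):

```json
{
  "history": [
    {
      "created": "1970-01-01T00:00:01Z",
      "created_by": "nix-build"
    }
  ]
}
```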
I’ve also used a while loop to iterate over all the layers, which should
ensure that they are processed in the correct order. Previously `find`
was used, and I’m not sure its output order was always correct.
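A minimal sketch of the idea, assuming the layer tarballs are listed one per line in a file named `layers` (the file name is hypothetical):

```sh
# Read the layers in the exact order they are listed, rather than
# relying on the traversal order of `find`.
while read -r layer; do
  echo "processing layer: $layer"
  # ... unpack or copy the layer here ...
done < layers
```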
If the base image has been built with nixpkgs.dockerTools, the image
configuration and manifest are read-only, so we first need to change
their permissions before removing them.
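A sketch of that cleanup step (the directory and file names are illustrative):

```sh
# Image configuration and manifest from a dockerTools-built base
# image are read-only; make them writable before removing them.
chmod u+w unpacked/*.json
rm unpacked/*.json
```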
Fixes #27632.
Loading an image (with docker 1.12.6) whose name contains uppercase
characters fails with the following message:
invalid reference format: repository name must be lowercase
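One way to fail early with a clearer message (a sketch; the variable name is hypothetical, and whether to reject or normalize the name is a design choice):

```sh
# Docker requires repository names to be lowercase; reject uppercase
# image names before `docker load` does, with a clearer message.
case "$imageName" in
  *[A-Z]*)
    echo "image name '$imageName' must be all lowercase" >&2
    exit 1
    ;;
esac
```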
This patch pins file modification times to $SOURCE_DATE_EPOCH, and
ensures that files originating from the store are owned by root:root.
Both changes improve reproducibility, and the latter allows proper
building on a host where the store is owned by a non-root user.
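A sketch of the corresponding GNU tar invocation (the archive and directory names are illustrative):

```sh
# Pin every mtime to SOURCE_DATE_EPOCH and force numeric root:root
# ownership, so the archive is identical no matter who builds it.
tar --mtime="@$SOURCE_DATE_EPOCH" \
    --owner=0 --group=0 --numeric-owner \
    -cf layer.tar contents/
```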
When building an image with multiple layers, files
already included in an underlying layer are supposed to
be excluded from the current layer. However, some subtleties
in the way file paths are compared were preventing this.
Specifically:
* tar generates relative file paths, with directories ending in '/'
* find generates absolute file paths, with no trailing slashes on directories
That is, paths extracted from the underlying tarball look like:
nix/store/.../foobar/
whereas the layer being generated uses paths like:
/nix/store/.../foobar
This patch modifies the output of "tar -t" to match the latter format.
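A sketch of that normalization, assuming GNU tar and sed (`base.tar` and `baseFiles` are hypothetical names):

```sh
# Rewrite `tar -t` output to the format produced by `find`:
# add a leading slash and drop the trailing slash on directories.
tar -tf base.tar \
  | sed -e 's|^|/|' -e 's|/$||' \
  > baseFiles
```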
The /nix path added to the layer tar in 4d200538 didn't exist for some
packages, such as cacert, because cacert just creates an /etc
directory and doesn't depend on any other /nix paths. If we put this
directory in the tar anyway and used overlayfs with it, we'd get
"Invalid argument" when trying to remove the directory.
We now check whether the closure is non-empty before telling tar to
store the /nix directory.
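A sketch of that guard, assuming the closure's store paths are listed in a file named `closure-paths` (the file and directory names are hypothetical):

```sh
# Only ask tar to store the /nix directory when the closure actually
# contains store paths; an empty /nix later breaks overlayfs with
# "Invalid argument".
if [ -s closure-paths ]; then
  tar -cf layer.tar /nix contents/
else
  tar -cf layer.tar contents/
fi
```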
Fixes #14710.
There were two sources of non-determinism creeping into the images: the
first was tar mtimes, the second was pigz/gzip timestamps.
An example image now passes a rebuild with the --check flag.
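A sketch of both fixes (assuming GNU tar and gzip/pigz; file names are illustrative):

```sh
# Pin tar mtimes so the archive does not embed the build time ...
tar --mtime="@$SOURCE_DATE_EPOCH" -cf layer.tar contents/
# ... and pass -n so gzip (or pigz) omits the original file name and
# timestamp from the compressed stream's header.
gzip -n layer.tar
```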
I think the intention of this functionality was to provide a simple
alternative to the "runAsRoot" and "contents" attributes.
The implementation caused very slow builds of Docker images. Almost all
of the build time was spent in IO for tar, because tarballs were
created, immediately extracted, and then recreated. I had 30-minute
builds on some of my images, which are now down to less than 2 minutes.
A couple of other users on #nix IRC have observed similar improvements.
The implementation also mutated the produced Docker layers without
changing their hashes. Using non-empty tarballs would produce images
which got cached incorrectly in Docker.
I have a commit which fixes just the performance problem, but I opted to
completely remove the tarball feature after finding out that it didn't
correctly implement the Docker Image Specification, due to the broken
hashing.
* authorization token is optional
* registry url is taken from X-Docker-Endpoints header
* pull.sh correctly resumes partial layer downloads (see the sketch after this list)
* detjson.py does not fail on missing keys
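A minimal sketch of the resume behaviour with curl (`-C -` continues from the size of the existing partial file; the URL and file names are illustrative):

```sh
# Continue a partial layer download instead of starting over.
curl -fL -C - -o "$layerId.tar.partial" \
  "$registryUrl/v1/images/$layerId/layer"
mv "$layerId.tar.partial" "$layerId.tar"
```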