oneDNN was added as a dependency, but it is not actually used by
PyTorch. PyTorch uses oneDNN from the vendored iDeep dependency.
Using a system-provided oneDNN is currently not a supported build
option.
PyTorch 1.6.0 has updated the vendored pthreadpool library, which has recently
added support for Grand Central Dispatch. Unfortunately, it uses functionality
(DISPATCH_APPLY_AUTO) that is only available since macOS 10.13, whereas we are
still using 10.12 libraries.
We can't pass options through to the vendored libraries directly, since the
setup.py script creates and filters the options that are passed to CMake. So,
instead, this adds a small patch that disables the GCD functionality in pthreadpool.
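Roughly, the wiring looks like this (an illustrative sketch; the patch file
name and the Darwin conditional are assumptions, not the exact expression):

{
  # Apply a local patch to the vendored third_party/pthreadpool sources
  # so the DISPATCH_APPLY_AUTO code path is never compiled in.
  patches = lib.optionals stdenv.isDarwin [
    ./pthreadpool-disable-gcd.diff   # hypothetical patch file name
  ];
}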
At some point pytorch.dev was added to expose the libtorch headers and
libraries to non-Python users of libtorch. However, this output
currently has two disadvantages:
1. An application that compiles against the dev output will also have
the libtorch header files in its closure. This is undesirable when,
e.g., building Docker images of applications that use libtorch.
2. The dev output has a large transitive closure with many dependencies
that are not necessary when compiling against libtorch.
This change adds the `lib` output so that applications that only link
against libtorch libraries have a small closure.
Before this change, the libtorch dependency adds 746MiB:
% nix path-info -S `realpath result-dev`
/nix/store/10rmy81bjk628sfpbj2szxlws6brq1xn-python3.8-pytorch-1.5.1-dev 782203848
With this change it is reduced to 196MiB:
% nix path-info -S `realpath result-lib`
/nix/store/bck65lf0z7gdhcf89w1zs5nz333lhgwa-python3.8-pytorch-1.5.1-lib 205865056
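A rough sketch of the output split (illustrative only; the paths and the hook
used are assumptions, not the exact expression):

{
  # Give the shared libraries their own output so C++ consumers can
  # depend on `lib` alone, without headers or the Python environment.
  outputs = [ "out" "dev" "lib" ];

  postInstall = ''
    mkdir -p $lib
    # Move the bundled libtorch libraries out of the site-packages tree
    # and symlink them back so the Python module keeps working.
    mv $out/${python.sitePackages}/torch/lib $lib/lib
    ln -s $lib/lib $out/${python.sitePackages}/torch/lib
  '';
}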
- Pass `blas.provider` into `buildInputs`, so that CMake can find the actual
`mkl` package and inspect its CMake files and headers.
- Set `USE_MKL` correctly when the BLAS provider is `mkl`.
- Enable the MKLDNN and MKLDNN_CBLAS flags by default, since `mkldnn` is FOSS
and always available (see the sketch after this list).
- Remove a patch for MKL 2019, since we've moved to 2020.
- Add a `pythonImportsCheck` for `torch` as a basic sanity check.
- Remove some unused variables at the top of the file.
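Sketched, the flag wiring looks roughly like this (the `blas.implementation`
check and the env-var style are assumptions about the surrounding expression):

{
  # blas.provider is the real implementation (e.g. MKL) behind the
  # generic blas wrapper; CMake needs it to find headers and cmake files.
  buildInputs = [ blas blas.provider ];

  USE_MKL = if blas.implementation == "mkl" then 1 else 0;
  # mkldnn/oneDNN is FOSS and always available, so enable it by default.
  USE_MKLDNN = 1;
  USE_MKLDNN_CBLAS = 1;
}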
Naive concatenation of $LD_LIBRARY_PATH can result in an empty
colon-delimited segment; this tells glibc to load libraries from the
current directory, which is definitely wrong, and may be a security
vulnerability if the current directory is untrusted. (See #67234, for
example.) Fix this throughout the tree.
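The fix amounts to making the separator conditional on the existing value; an
illustrative shape (the hook name is just an example):

{
  postFixup = ''
    # Only emit the ":" when LD_LIBRARY_PATH is already set and
    # non-empty, so no empty (current-directory) segment is introduced.
    export LD_LIBRARY_PATH="$out/lib''${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH"
  '';
}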
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Otherwise, the wheel gets built with invalid metadata, causing a requirement
such as 'torch >= 1.0.0' to be unsatisfiable in other Python packages.
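An assumed sketch of how the metadata can be pinned, via PyTorch's build-time
version variables (not necessarily the exact mechanism used here):

{
  # Assumption: setup.py reads these when generating the wheel metadata;
  # without a proper version, 'torch >= 1.0.0' can never be satisfied
  # by the built wheel.
  PYTORCH_BUILD_VERSION = version;
  PYTORCH_BUILD_NUMBER = 0;
}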
Signed-off-by: Austin Seipp <aseipp@pobox.com>
`thd_distributed` is broken just like `distributed` is, and `cpp_extensions`
now appears to be broken upstream as well; hopefully it can be re-enabled in
the future.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
The `buildPython*` functions compute `name` from `pname` and `version`.
This change removes the `name` attribute from all expressions in
`pkgs/development/python-modules`.
While at it, some other minor changes were made as well, such as
replacing `fetchurl` calls with `fetchPypi`.
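For illustration, the resulting pattern looks like this (package name,
version, and hash are placeholders):

buildPythonPackage rec {
  pname = "example";   # placeholder
  version = "1.2.3";   # placeholder

  # `name` is derived from pname and version by buildPythonPackage.
  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
}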
* pytorch-0.3 with optional cuda and cudnn
* pytorch tests reenabled if compiling without cuda
* pytorch: Conditionalize cudnn dependency on cudaSupport
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
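A rough sketch of the conditional wiring from the three items above (argument
names follow the usual nixpkgs convention, not necessarily the exact
expression):

{
  # cudatoolkit and cudnn are only pulled in when cudaSupport is set;
  # the test suite is only run for the non-CUDA build.
  buildInputs = lib.optionals cudaSupport [ cudatoolkit cudnn ];
  doCheck = !cudaSupport;
}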
* pytorch: Compile with the same GCC version used by CUDA if cudaSupport
Fixes this error:
In file included from /nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/host_config.h:50:0,
from /nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/cuda_runtime.h:78,
from <command-line>:0:
/nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/crt/host_config.h:121:2: error: #error -- unsupported GNU version! gcc versions later than 6 are not supported!
#error -- unsupported GNU version! gcc versions later than 6 are not supported!
^~~~~
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
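One possible shape of that change, assuming gcc6 is the newest compiler the
CUDA headers accept (per the error above) and using overrideCC to swap it in;
a sketch, not the exact expression:

let
  # Build the whole derivation with GCC 6 when CUDA is enabled, so
  # nvcc's host_config.h does not reject the host compiler.
  cudaStdenv = overrideCC stdenv gcc6;
  effectiveStdenv = if cudaSupport then cudaStdenv else stdenv;
in
effectiveStdenv.mkDerivation {
  # ... unchanged derivation arguments ...
}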
* pytorch: Build with joined cudatoolkit
Similar to #30058 for TensorFlow.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
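Sketched, the joined toolkit looks roughly like the TensorFlow version
referenced above (attribute names are assumptions):

let
  # Merge the split cudatoolkit outputs back into a single prefix so
  # the build system finds headers and libraries in one place.
  cudatoolkit_joined = symlinkJoin {
    name = "${cudatoolkit.name}-unsplit";
    paths = [ cudatoolkit.out cudatoolkit.lib ];
  };
in
  # cudatoolkit_joined is then used in buildInputs in place of the
  # plain cudatoolkit.
  cudatoolkit_joined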
* pytorch: 0.3.0 -> 0.3.1
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Patch for “refcounted file mapping not supported” failure
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Skip distributed tests
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Use the stub libcuda.so from cudatoolkit for running tests
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
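An assumed sketch of putting the stub on the library path for the test run
(the stubs subdirectory location is an assumption):

{
  preCheck = ''
    # The toolkit ships a stub libcuda.so (normally supplied by the
    # driver at run time); point the tests at it so the libraries load.
    export LD_LIBRARY_PATH="${cudatoolkit}/lib/stubs''${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH"
  '';
}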