hardware scan was generating a hardware.nix containing
"pkgs.linuxPackages" without having "pkgs" in scope. Also, it
shouldn't define boot.kernelPackages.
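A minimal sketch of the corrected shape of the generated file (illustrative only; the real generator output differs): "pkgs" comes in through the module arguments, and boot.kernelPackages is left to the user's configuration.nix:
{pkgs, config, ...}:

{
  # "pkgs" is in scope via the module arguments above.
  # boot.kernelPackages is deliberately not set here; the kernel choice
  # belongs in configuration.nix, not in the generated hardware.nix.
  boot.initrd.kernelModules = [ "ata_piix" ];  # module name illustrative
}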
svn path=/nixos/trunk/; revision=25192
attribute name of the machine in the model. This allows
networking.hostName and deployment.targetHost to be omitted for
typical networks.
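For instance, a network model could now be as small as this sketch (the attribute name test1 doubles as hostname and deployment target; values illustrative):
{
  test1 = {pkgs, config, ...}:
    {
      # networking.hostName and deployment.targetHost default to "test1",
      # so neither needs to be set here.
      services.openssh.enable = true;
    };
}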
svn path=/nixos/trunk/; revision=25125
- implemented the --no-out-link option so that invoking these tools from scripts leaves no garbage behind (see the example below)
- some misc. cleanups
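For example, assuming nixos-build-vms is one of the tools in question (the flag is per this entry; the rest of the invocation is illustrative):
$ nixos-build-vms -n network.nix --no-out-link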
svn path=/nixos/trunk/; revision=25019
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      ...
    };

  test2 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By typing the following command:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are stored in the current
directory, which the end-user can use to invoke remote instructions on a VM in the network through
a Unix domain socket.
For example, by building the network with the following command:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
You will find two socket files in your current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
{
  test1 = {pkgs, config, ...}:
    {
      # NixOS config of machine test1
      ...
    };

  test2 = {pkgs, config, ...}:
    {
      # NixOS config of machine test2
      ...
    };
}
And an infrastructure expression, e.g.:
{
  test1 = {
    hostName = "test1.example.org";
    system = "i686-linux";
  };

  test2 = {
    hostName = "test2.example.org";
    system = "x86_64-linux";
  };
}
And by executing:
$ nixos-deploy-network -n network.nix -i infrastructure.nix
The system configurations in the network expression are built, transferred to the machines in the network, and finally activated.
svn path=/nixos/trunk/; revision=24146
devices. These are used to replace hand-made listings in the basic
installation CD.
The configuration file generated by nixos-hardware-scan enables
non-detected devices by default.
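A sketch of the resulting behaviour, assuming a module for a device the scan could not probe (module name purely illustrative):
# Fragment of a generated configuration file:
{
  # Enabled by default even though the scan did not detect the device:
  boot.initrd.kernelModules = [ "usb_storage" ];
}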
svn path=/nixos/trunk/; revision=23911
init script. This removes the need for the `systemConfig' boot
parameter; `init=<stage-2-init>' is enough. However, the GRUB menu
builder still needs to add `systemConfig' to the kernel command line
for compatibility with old configurations.
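Schematically, a generated GRUB 1 menu entry now looks something like this (store paths abbreviated and illustrative); `init=...' alone suffices, while `systemConfig=...' remains only for old configurations:
title NixOS
  kernel /nix/store/<hash>-linux/bzImage init=/nix/store/<hash>-nixos/init systemConfig=/nix/store/<hash>-nixos
  initrd /nix/store/<hash>-initrd/initrd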
svn path=/nixos/trunk/; revision=23775
like `build-vm', but boots using the regular boot loader (i.e. GRUB
1 or 2) rather than booting directly from the kernel/initrd. Thus
it allows testing of GRUB.
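The truncated subject line above omits the new action's name; assuming it follows the `build-vm' naming convention, usage would look roughly like:
$ nixos-rebuild build-vm-with-bootloader
$ ./result/bin/run-<hostname>-vm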
svn path=/nixos/trunk/; revision=23747
root=... kernel command line parameter, instead of hard-coding it in
`fileSystems'. This is to allow CD-to-USB converters such as
UNetbootin to rewrite the kernel command line to the label or UUID
of the USB stick.
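For example, a converter can rewrite the parameter to one of these forms (label and UUID values illustrative):
root=LABEL=NIXOS_USB
root=UUID=1234-abcd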
svn path=/nixos/trunk/; revision=23024
we want to generate the GRUB menu without actually installing GRUB
(because Amazon supplies its own pv-grub), and each menu entry
requires "root (hd0)". For the first, allow boot.loader.grub.device
to be set to "nodev" to indicate that the GRUB menu should be
generated without installing GRUB. For the second, add an option
boot.loader.grub.extraPerEntryConfig to allow commands to be added
to each GRUB menu entry (in this case, "root (hd0)").
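Put together, an EC2-style configuration would contain something like this (both option names and values are from this entry; the sketch assumes no other GRUB settings):
{
  # Generate the GRUB menu without installing GRUB (Amazon supplies pv-grub):
  boot.loader.grub.device = "nodev";

  # Add "root (hd0)" to every generated menu entry:
  boot.loader.grub.extraPerEntryConfig = "root (hd0)";
}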
svn path=/nixos/trunk/; revision=22712