Compare commits


200 Commits
radius...main

Author SHA1 Message Date
390bdaaf51 resilio: update to unstable module
All checks were successful
flake / flake (push) Successful in 2m19s
Currently this pins `rslsync`'s group ID using https://github.com/NixOS/nixpkgs/pull/350055
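For reference, a minimal sketch of pinning the group ID on the NixOS side (the GID value is a placeholder and the PR's actual mechanism may differ):
```
{ ... }:
{
  # Pin rslsync's group ID so on-disk ownership stays stable across
  # reinstalls. 983 is a placeholder value, not the real GID.
  users.groups.rslsync.gid = 983;
}
```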
2024-11-09 21:03:56 +00:00
ba9d54ddab chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 2m7s
2024-11-09 15:20:26 +00:00
843802bcb7 backups: include more git repos
All checks were successful
flake / flake (push) Successful in 1m45s
2024-11-08 12:23:54 +00:00
a07c493802 stinger: update firewall for homeassistant
All checks were successful
flake / flake (push) Successful in 1m47s
2024-11-06 20:12:59 +00:00
3a2d6f4e2e stinger: enable bluetooth
All checks were successful
flake / flake (push) Successful in 1m35s
2024-11-06 10:34:33 +00:00
a383e013c6 homeassistant: microserver.home -> stinger.pop
All checks were successful
flake / flake (push) Successful in 2m0s
2024-11-06 01:36:14 +00:00
ed3b9019f2 homeassistant: backup database
All checks were successful
flake / flake (push) Successful in 1m36s
2024-11-06 01:05:52 +00:00
a3fd10be31 stinger: init host
All checks were successful
flake / flake (push) Successful in 1m36s
2024-11-05 22:10:12 +00:00
79a3c62924 defaults: enable all firmware
All checks were successful
flake / flake (push) Successful in 1m37s
2024-11-05 22:10:01 +00:00
0761162e34 chore(deps): update determinatesystems/nix-installer-action action to v15
All checks were successful
flake / flake (push) Successful in 1m31s
2024-11-04 23:01:03 +00:00
2999a5f744 merlin: init host
All checks were successful
flake / flake (push) Successful in 1m29s
2024-11-04 22:35:55 +00:00
5146d6cd6f lavd: finish cleanup
All checks were successful
flake / flake (push) Successful in 1m27s
2024-11-03 18:36:01 +00:00
3ebba9d7a5 home: enable zoxide
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-31 23:00:45 +00:00
1b5f342aab home-manager: enable ssh-agent
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-31 22:32:57 +00:00
87d311dabe sched_ext: switch to unstable for packages
All checks were successful
flake / flake (push) Successful in 2m3s
2024-10-31 21:59:09 +00:00
0cf7aa1760 tang: remove tywin ip
All checks were successful
flake / flake (push) Successful in 1m24s
Missed this when cleaning up. We should probably get these static IPs from
authoritative DNS, like the Tailscale IPs; then they wouldn't have been missed. We
could then construct the static IP mappings from that, moving some stuff out of
router/default.nix.
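A sketch of the suggested direction (hostnames and IPs are placeholders, not the real mappings):
```
let
  # One authoritative attrset, ideally generated from DNS rather than
  # maintained by hand inside router/default.nix.
  staticIps = {
    "example-a.ts.hillion.co.uk" = "10.64.50.10";
    "example-b.ts.hillion.co.uk" = "10.64.50.11";
  };
in
# Construct whatever shape the router module needs from the one source.
lib.mapAttrsToList (host: ip: { hostname = host; address = ip; }) staticIps
```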
2024-10-29 23:35:20 +00:00
363b8fe3c0 tywin.storage: delete
All checks were successful
flake / flake (push) Successful in 1m26s
2024-10-29 23:24:19 +00:00
ca57201ad5 tmux: increase history-limit
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-29 22:54:30 +00:00
32de6b05be tmux: add kernel rev and extended hostname to status-right
All checks were successful
flake / flake (push) Successful in 1m31s
2024-10-29 22:48:12 +00:00
0149d53da2 restic: backup to backblaze
All checks were successful
flake / flake (push) Successful in 1m33s
2024-10-27 21:24:20 +00:00
c7efa1fad4 restic: backup to wasabi
Some checks failed
flake / flake (push) Has been cancelled
2024-10-27 20:09:45 +00:00
dbc2931052 restic: split out common behaviour
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-27 15:57:07 +00:00
817cc3f356 phoenix: temporarily add a password to debug boot issues
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-27 15:37:52 +00:00
c33d5c2edd secrets: re-encrypt with boron user key
All checks were successful
flake / flake (push) Successful in 1m29s
2024-10-27 00:43:47 +01:00
187c15b5ab backups: update scripts for new host/path
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-26 23:47:34 +01:00
fc1fb7b528 sapling: set default merge style to :merge3
All checks were successful
flake / flake (push) Successful in 1m31s
2024-10-26 19:26:48 +01:00
caa3128310 home-manager: pass through nixos stateVersion if >24.05
All checks were successful
flake / flake (push) Successful in 1m27s
home-manager currently has a pinned stateVersion on all hosts, even though many
of the hosts were initialised after that point. Create a condition such that
any host initialised on a release after 24.05 (the latest current release) will
use its own version in home-manager instead of pinning to 22.11.

Any future users can pass the stateVersion through without the `if`.
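
A minimal sketch of that condition (module context assumed; the real implementation may differ):
```
home-manager.users.root.home.stateVersion =
  if lib.versionOlder "24.05" config.system.stateVersion
  then config.system.stateVersion # pass through anything newer than 24.05
  else "22.11"; # keep the historical pin otherwise
```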

Test plan:
```
# system.stateVersion = "23.11";
$ nix eval '.#nixosConfigurations."boron.cx.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
"22.11"
```
```
# system.stateVersion = "24.05";
$ nix eval '.#nixosConfigurations."phoenix.st.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
"22.11"
```
```
# system.stateVersion = "24.11"; // no-commit change
$ nix eval '.#nixosConfigurations."phoenix.st.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
error:
       ...
       (stack trace truncated; use '--show-trace' to show the full trace)

       error: A definition for option `home-manager.users.root.home.stateVersion' is not of type `one of "18.09", "19.03", "19.09", "20.03", "20.09", "21.03", "21.05", "21.11", "22.05", "22.11", "23.05", "23.11", "24.05"'. Definition values:
       - In `/nix/store/8dhsknmlnv571bg100j9v9yqq1nnh346-source/modules/home/default.nix': "24.11"
```
2024-10-26 19:13:08 +01:00
72e7aead94 sapling: set ui.username
All checks were successful
flake / flake (push) Successful in 1m29s
2024-10-26 18:49:14 +01:00
b5489abf98 ssh: allow on all ports for sodium/phoenix
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-26 18:15:31 +01:00
9970dc413d boron: persist ssh key
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-26 15:04:14 +01:00
b3fb80811c chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-26 14:00:16 +00:00
4c7a99bfb7 home-manager: enable neovim
All checks were successful
flake / flake (push) Successful in 1m29s
2024-10-26 00:39:35 +01:00
edad7248a5 chore(deps): update actions/checkout action to v4.2.2
All checks were successful
flake / flake (push) Successful in 1m25s
2024-10-24 23:00:54 +00:00
ce6c9a25c5 flake: fix unstable path pointing at github instead of gitea
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-24 09:39:20 +01:00
3a0c8effbb flake: use generic branch from forked nixpkgs
All checks were successful
flake / flake (push) Successful in 1m29s
Rather than pulling a specific feature branch into `nixpkgs-unstable`, pull a branch for `nixos-unstable`. Then we can cherry-pick and rebase multiple features in the `nixpkgs` fork and keep up with upstream more easily.

The preferred method here would be a list of PRs or URLs to patch into the flake input, but that doesn't seem to be an option.

This change adds in the resilio gid PR, but is otherwise unchanged.
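
The corresponding input, as it appears in the flake.nix diff below:
```
nixpkgs-unstable.url = "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz";
```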
2024-10-23 23:27:53 +01:00
172e6c7415 router: enable ssh on eth0 and add work mbp key
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-23 21:06:24 +01:00
9a18124847 phoenix: enable zswap
All checks were successful
flake / flake (push) Successful in 1m25s
2024-10-21 23:00:04 +01:00
efbf9575f2 phoenix: enable plex
All checks were successful
flake / flake (push) Successful in 1m25s
2024-10-21 22:27:12 +01:00
e03ce4e26c phoenix: enable resilio sync and backups
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-21 20:49:13 +01:00
b18ae44ccb resilio: place storagePath in directoryPath by default
All checks were successful
flake / flake (push) Successful in 1m25s
2024-10-21 08:54:20 +01:00
e80ef10eb7 resilio: calculate default deviceName automatically
Some checks failed
flake / flake (push) Has been cancelled
2024-10-21 08:54:20 +01:00
26beb4116a phoenix: serve restic
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-21 00:39:36 +01:00
1822d07cfe phoenix: enable downloads
All checks were successful
flake / flake (push) Successful in 1m26s
2024-10-21 00:20:42 +01:00
a6efbb1b68 phoenix: import practical-defiant-coffee zpool
All checks were successful
flake / flake (push) Successful in 1m24s
2024-10-20 20:07:59 +01:00
6fe4ca5b61 phoenix: mount disk btrfs partitions and add chia
All checks were successful
flake / flake (push) Successful in 1m23s
2024-10-20 20:07:59 +01:00
3e8dcd359e secrets: clean up tywin secrets
All checks were successful
flake / flake (push) Successful in 1m26s
2024-10-20 20:07:16 +01:00
86bca8ce1c tywin: prepare for zpool export
All checks were successful
flake / flake (push) Successful in 1m23s
2024-10-20 19:37:26 +01:00
ee3b420220 backups/git: move tywin->phoenix
All checks were successful
flake / flake (push) Successful in 1m24s
2024-10-20 17:40:20 +01:00
58ce44df6b phoenix: add chia
All checks were successful
flake / flake (push) Successful in 1m24s
2024-10-20 16:29:55 +01:00
f34592926e phoenix: init host
All checks were successful
flake / flake (push) Successful in 1m24s
2024-10-20 16:07:21 +01:00
7dd820685f backup-git: fix systemd timer
All checks were successful
flake / flake (push) Successful in 1m28s
2024-10-19 18:30:57 +01:00
4047b0d8b2 router: reserve ips for nanokvms
All checks were successful
flake / flake (push) Successful in 1m27s
2024-10-19 16:53:35 +01:00
d7a8562c7d restic: modularise server component
All checks were successful
flake / flake (push) Successful in 1m25s
2024-10-19 15:24:32 +01:00
ea163448df homeassistant: enable waze
All checks were successful
flake / flake (push) Successful in 1m23s
2024-10-19 00:39:33 +01:00
a8288ec678 scx_layered: get from forked nixpkgs
All checks were successful
flake / flake (push) Successful in 1m24s
2024-10-18 13:56:40 +01:00
50a8411ac8 nixos: add nixpkgs-unstable to flake registry
All checks were successful
flake / flake (push) Successful in 1m15s
2024-10-13 00:33:57 +01:00
6f5b9430c9 prometheus: add alert for resilio sync going down
All checks were successful
flake / flake (push) Successful in 1m17s
2024-10-12 21:39:00 +01:00
33cdcdca0a prometheus: enable systemd collector
All checks were successful
flake / flake (push) Successful in 1m15s
2024-10-12 15:27:13 +01:00
c42a4e5297 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m43s
2024-10-12 13:37:53 +00:00
2656c0dba9 scx_lavd: package and ship
All checks were successful
flake / flake (push) Successful in 1m18s
2024-10-12 00:54:02 +01:00
961acd80d7 scx_layered: package and ship
All checks were successful
flake / flake (push) Successful in 1m14s
2024-10-11 20:15:55 +01:00
eb07e4c4fd chore(deps): update actions/checkout action to v4.2.1
All checks were successful
flake / flake (push) Successful in 1m15s
2024-10-07 23:00:25 +00:00
4eaae0fa75 isponsorblocktv: deploy docker container
All checks were successful
flake / flake (push) Successful in 1m18s
2024-10-06 21:38:06 +01:00
72955e2377 homeassistant: announce locally and deploy to hallway tablet
All checks were successful
flake / flake (push) Successful in 1m17s
2024-10-06 20:43:48 +01:00
0a2330cb90 www: fix cloning script
All checks were successful
flake / flake (push) Successful in 1m15s
2024-10-06 16:35:59 +01:00
3d8a60da5b sched_ext: bump kernel to 6.12-rc1
All checks were successful
flake / flake (push) Successful in 1m13s
Removes the custom kernel features and requires any host running
sched_ext to pull a kernel of at least 6.12. It looks at
pkgs.unstable.linuxPackages first; if that's too old it falls back to
pkgs.linuxPackages_latest, and if that's still too old it goes for
pkgs.unstable.linuxPackages_testing.

The plan is to leave `boot.kernelPackages` alone if new enough, but
we'll keep the assertion. Some schedulers might require more specific
kernel constraints in the future.
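
A sketch of that selection order (assumed shape, not the exact code):
```
{ config, pkgs, lib, ... }:
{
  boot.kernelPackages =
    let
      newEnough = lp: lib.versionAtLeast lp.kernel.version "6.12";
      candidates = [
        pkgs.unstable.linuxPackages
        pkgs.linuxPackages_latest
        pkgs.unstable.linuxPackages_testing
      ];
    in
    # First candidate that is new enough, falling back to the last.
    lib.findFirst newEnough (lib.last candidates) candidates;
}
```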
2024-10-03 00:17:59 +01:00
c0e331bf80 boron: enable resilio sync
All checks were successful
flake / flake (push) Successful in 1m16s
2024-09-28 15:01:30 +01:00
9c419376c5 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m19s
2024-09-28 12:31:15 +00:00
4332fee3ce chore(deps): update actions/checkout action to v4.2.0
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-26 23:00:18 +00:00
ceb8591705 step-ca: pin uid and gid
All checks were successful
flake / flake (push) Successful in 1m14s
2024-09-23 20:30:35 +01:00
415a061842 prometheus: move id pinning to correct module
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-23 20:26:34 +01:00
31a9828430 prometheus: add service and enable reporting globally (#330)
All checks were successful
flake / flake (push) Successful in 1m15s
## Test plan:

- https://prometheus.ts.hillion.co.uk/graph?g0.expr=1%20-%20(node_filesystem_avail_bytes%7Bmountpoint%20%3D%20%22%2F%22%2C%20device%3D%22tmpfs%22%7D%20%2F%20node_filesystem_size_bytes%7Bmountpoint%20%3D%20%22%2F%22%2C%20device%3D%22tmpfs%22%7D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=1h - reports the percentage used on all tmpfs roots. This is exactly what I wanted; in the future I might add alerts for it, as high tmpfs usage is a sign of something being wrong and is likely to lead to OOMing. The decoded expression is shown below.
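
Decoded, the graph expression is:
```
1 - (node_filesystem_avail_bytes{mountpoint="/", device="tmpfs"}
     / node_filesystem_size_bytes{mountpoint="/", device="tmpfs"})
```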

Aside: NixOS is awesome. I just deployed full monitoring to every host I have and all future hosts in minutes.
Reviewed-on: #330
Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Co-committed-by: Jake Hillion <jake@hillion.co.uk>
2024-09-23 20:24:31 +01:00
7afa21e537 chia: update to 2.4.3
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-22 21:09:31 +01:00
739e1f6ab3 home: move tailscale exit node from microserver to router (#328)
All checks were successful
flake / flake (push) Successful in 1m15s
## Test plan:

- Connected MacBook to iPhone hotspot (off network).
- With Tailscale connected can ping/ssh to microserver.home on both LANs (main and IoT).
- With exit node enabled traceroute shows router's tailscale IP as a hop.
- With exit node enabled ipinfo.io shows my home IP.
- With exit node disabled ipinfo.io shows an EE IP.

iPhone exit node is still playing up: it shows no Internet connection. This behaviour was identical with the Pi setup that this replaces; maybe an iOS 18 bug for Tailscale? Treating this as not a regression.
Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Co-committed-by: Jake Hillion <jake@hillion.co.uk>
2024-09-22 21:04:53 +01:00
8933d38d36 sched_ext: ship pre-release 6.12 kernel
All checks were successful
flake / flake (push) Successful in 1m14s
2024-09-22 16:18:04 +01:00
0ad31dddae gendry: decrypt encrypted disk with clevis/tang
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-22 11:06:03 +01:00
d5c2f8d543 router: setup cameras vlan
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-17 09:20:27 +01:00
1189a41df9 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m43s
2024-09-15 16:01:08 +00:00
39730d2ec3 macbook: add shell utilities
All checks were successful
flake / flake (push) Successful in 1m16s
2024-09-14 02:39:26 +01:00
ac6f285400 resilio: require mounts be available
All checks were successful
flake / flake (push) Successful in 1m15s
Without this, resilio fails on boot on tywin.storage, where the paths are
on a ZFS array that reliably gets mounted later than the resilio
service attempts to start.
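
One plausible way to express the ordering in NixOS (unit name and path are illustrative):
```
# Order resilio after the mount that backs its sync directory.
systemd.services.resilio.unitConfig.RequiresMountsFor = [ "/data/sync" ];
```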
2024-09-14 02:30:20 +01:00
e4b8fd7438 chore(deps): update determinatesystems/nix-installer-action action to v14
All checks were successful
flake / flake (push) Successful in 1m27s
2024-09-10 00:00:51 +00:00
24be3394bc chore(deps): update determinatesystems/magic-nix-cache-action action to v8
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-09 23:00:50 +00:00
ba053c539c boron: enable podman
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-06 19:04:25 +01:00
3aeeb69c2b nix-darwin: add macbook
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-05 00:50:02 +01:00
85246af424 caddy: update to unstable
All checks were successful
flake / flake (push) Successful in 1m13s
The default config for automatic ACME no longer works in Caddy <2.8.0
due to changes in ZeroSSL's auth. Update to unstable Caddy,
which is new enough to renew certs again.

Context: https://github.com/caddyserver/caddy/releases/tag/v2.8.0

Add `pkgs.unstable` as an overlay as recommended on the NixOS wiki. This
is needed here as Caddy must be runnable on all architectures.
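
The overlay itself, as it appears in the flake.nix diff below:
```
(final: prev: {
  unstable = nixpkgs-unstable.legacyPackages.${prev.system};
})
```
With this in place a module can select something like `pkgs.unstable.caddy` on any architecture.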
2024-09-05 00:04:08 +01:00
ba7a39b66e chore(deps): pin dependencies
All checks were successful
flake / flake (push) Successful in 1m16s
2024-09-02 23:00:12 +00:00
df31ebebf8 boron: bump tmpfs to 100% of RAM
All checks were successful
flake / flake (push) Successful in 1m18s
2024-08-31 22:04:38 +01:00
2f3a33ad8e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-30 19:01:46 +00:00
343b34b4dc boron: support sched_ext in kernel
All checks were successful
flake / flake (push) Successful in 1m45s
2024-08-30 18:52:31 +01:00
264799952e bathroom_light: trust switchbot if more recently updated
All checks were successful
flake / flake (push) Successful in 1m13s
2024-08-30 18:46:38 +01:00
5cef32cf1e gitea actions: use cache for nix
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-30 18:39:02 +01:00
6cc70e117d tywin: mount d7
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-22 15:17:11 +01:00
a52aed5778 gendry: use zram swap
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-18 13:51:28 +01:00
70b53b5c01 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-18 10:35:15 +00:00
3d642e2320 boron: move postgresqlBackup to disk to reduce ram pressure
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-09 23:37:16 +01:00
41d5f0cc53 homeassistant: add sonos
All checks were successful
flake / flake (push) Successful in 1m17s
2024-08-08 18:31:10 +01:00
974c947130 homeassistant: add smartthings
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-04 18:15:34 +01:00
8a9498f8d7 homeassistant: expose sleep_mode to google
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-04 17:56:32 +01:00
2ecdafe1cf chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m32s
2024-08-03 12:15:23 +01:00
db5dc5aee6 step-ca: enable server on sodium and load root certs
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-01 23:28:22 +01:00
f96f03ba0c boron: update to Linux 6.10
All checks were successful
flake / flake (push) Successful in 1m13s
2024-07-27 15:16:59 +01:00
e81cad1670 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m25s
2024-07-26 13:40:49 +00:00
67c8e3dcaf homeassistant: migrate to basnijholt/adaptive-lighting
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-22 11:16:34 +01:00
1052379119 unifi: switch to nixos module
All checks were successful
flake / flake (push) Successful in 1m24s
2024-07-19 16:43:53 +01:00
0edb8394c8 tywin: mount d6
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 22:19:41 +01:00
bbab551b0f be.lt: connect to Hillion WPA3 Network
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 17:10:08 +01:00
13c937b196 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-07-17 14:14:27 +00:00
6bdaca40e0 tmux: index from 0 and always allow attach
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 15:02:19 +01:00
462f0eecf4 gendry: allow luks discards
All checks were successful
flake / flake (push) Successful in 1m15s
2024-07-17 09:33:33 +01:00
5dcf3b8e3f chia: update to 2.4.1
All checks were successful
flake / flake (push) Successful in 1m13s
2024-07-10 10:01:16 +01:00
b0618cd3dc chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m29s
2024-07-06 22:31:08 +00:00
a9829eea9e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m30s
2024-06-23 10:18:56 +00:00
cfd64e9a73 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m29s
2024-06-16 12:13:03 +00:00
b3af1739a8 chore(deps): update actions/checkout action to v4.1.7
All checks were successful
flake / flake (push) Successful in 1m14s
2024-06-13 23:01:06 +00:00
cde6bdd498 tywin: enable clevis/tang for boot
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-10 22:34:28 +01:00
bd5efa3648 tywin: encrypt root disk
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-09 23:14:44 +01:00
30679f9f4b sodium: add cache directory on the sd card
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-02 22:41:49 +01:00
67644162e1 sodium: rekey
All checks were successful
flake / flake (push) Successful in 1m12s
accidentally ran `rm -r /data`...
2024-06-02 21:45:03 +01:00
81c77de5ad chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-02 13:15:18 +00:00
a0f93c73d0 sodium.pop: add rpi5 host
All checks were successful
flake / flake (push) Successful in 1m22s
2024-05-25 22:56:27 +01:00
78705d440a homeassistant: only switch bathroom light when it is already on
All checks were successful
flake / flake (push) Successful in 1m18s
Although the system now knows whether the bathroom light is on, it switches the switch every time the light should be turned off, regardless of whether it's already off. Because this is a battery-powered device that performs a physical movement, this drains the battery very fast. Adjust the system to only switch the light off if it thinks it's on, even though this has the potential for desyncs.
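
A hedged sketch of the adjusted automation via `services.home-assistant.config` (entity names are invented):
```
services.home-assistant.config.automation = [{
  alias = "Bathroom light: only switch off when believed on";
  trigger = [{ platform = "state"; entity_id = "binary_sensor.bathroom_motion"; to = "off"; }];
  # Skip actuating the battery-powered physical switch unless the light
  # is thought to be on.
  condition = [{ condition = "state"; entity_id = "light.bathroom"; state = "on"; }];
  action = [{ service = "switch.turn_off"; target.entity_id = "switch.bathroom_light"; }];
}];
```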
2024-05-25 22:03:11 +01:00
3f829236a2 homeassistant: read bathroom light status from motion sensor
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-25 17:03:57 +01:00
7b221eda07 theon: stop scripting networking
All checks were successful
flake / flake (push) Successful in 1m19s
Unsure why this host is using systemd-networkd, but leave that unchanged
and make NixOS aware of it to prevent a warning about loss of
connectivity on build.
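
A minimal sketch of what that likely looks like in config (assumed, not taken from the host):
```
# Declare that systemd-networkd manages this host instead of scripted
# networking, so NixOS accounts for it when rebuilding.
systemd.network.enable = true;
networking.useDHCP = false;
```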
2024-05-25 16:40:19 +01:00
22305815c6 matrix: fix warning about renamed sliding sync
All checks were successful
flake / flake (push) Successful in 1m17s
2024-05-25 16:33:05 +01:00
fa493123fc router: dhcp: add APC vendor specific cookie
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-24 22:14:16 +01:00
62e61bec8a matrix: add sliding sync
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-24 10:18:30 +01:00
50d70ed8bc boron: update kernel to 6.9
All checks were successful
flake / flake (push) Successful in 1m19s
2024-05-23 22:41:18 +01:00
796bbc7a68 chore(deps): update nixpkgs to nixos-24.05 (#271)
All checks were successful
flake / flake (push) Successful in 1m20s
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [nixpkgs](https://github.com/NixOS/nixpkgs) | major | `nixos-23.11` -> `nixos-24.05` |

---

### Release Notes

<details>
<summary>NixOS/nixpkgs (nixpkgs)</summary>

### [`nixos-24.05`](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

[Compare Source](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Reviewed-on: #271
Co-authored-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
Co-committed-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
2024-05-23 22:40:58 +01:00
8123653a92 jorah.cx: delete
All checks were successful
flake / flake (push) Successful in 1m17s
2024-05-21 22:43:56 +01:00
55ade830a8 router: add more dhcp reservations
All checks were successful
flake / flake (push) Successful in 1m12s
2024-05-21 20:09:26 +01:00
a9c9600b14 matrix: move jorah->boron
All checks were successful
flake / flake (push) Successful in 2m20s
2024-05-18 19:14:39 +01:00
eae5e105ff unifi: move jorah->boron
All checks were successful
flake / flake (push) Successful in 1m21s
2024-05-18 16:52:22 +01:00
f1fd6ee270 gitea: fix ips in iptables rules
All checks were successful
flake / flake (push) Successful in 1m10s
2024-05-18 15:34:43 +01:00
1dc370709a chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m23s
2024-05-18 13:03:00 +00:00
de905e23a8 chore(deps): update actions/checkout action to v4.1.6
All checks were successful
flake / flake (push) Successful in 2m39s
2024-05-17 00:01:14 +00:00
9247ae5d91 chore(deps): update cachix/install-nix-action action to v27
All checks were successful
flake / flake (push) Successful in 2m33s
2024-05-16 23:01:13 +00:00
7298955391 tywin: enable automatic btrfs scrubbing
All checks were successful
flake / flake (push) Successful in 2m16s
2024-05-15 20:53:57 +01:00
f59824ad62 gitea: move jorah->boron
All checks were successful
flake / flake (push) Successful in 2m16s
2024-05-12 13:11:54 +01:00
bff93529aa www.global: move jorah->boron
All checks were successful
flake / flake (push) Successful in 1m56s
2024-05-12 12:11:15 +01:00
13bfe6f787 boron: enable authoritative dns
All checks were successful
flake / flake (push) Successful in 2m4s
2024-05-10 22:44:48 +01:00
ad8c8b9b19 boron: enable version_tracker
All checks were successful
flake / flake (push) Successful in 2m5s
2024-05-10 22:12:49 +01:00
b7c07d0107 boron: enable gitea actions
All checks were successful
flake / flake (push) Successful in 2m28s
2024-05-10 21:52:48 +01:00
9cc389f865 boron: remove folding@home
All checks were successful
flake / flake (push) Successful in 1m58s
The update from 23.11 to 24.05 brought a new folding@home version that
doesn't work. It fails by default, there is no documentation on
writing XML configs manually, and even the web UI now redirects to a
nonsense public website. Unfortunately this means I'm going to let
this box idle rather than do useful work, for now at least.
2024-05-10 21:02:16 +01:00
2153c22d7f chore(deps): update actions/checkout action to v4.1.5
All checks were successful
flake / flake (push) Successful in 2m1s
2024-05-09 23:00:45 +00:00
a4235b2581 boron: move to kernel 6.8 and re-image
All checks were successful
flake / flake (push) Successful in 1m58s
The extremely modern hardware in this server appears to experience
kernel crashes with the default NixOS 23.11 kernel (6.1) and the default
NixOS 24.05 kernel (6.6). Empirical testing shows the server staying up
on Ubuntu 22's 6.2 and an explicit NixOS kernel 6.8.

The server was wiped during this testing, so it now needs reimaging.
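
A plausible pin for the working kernel (attribute assumed available in that nixpkgs revision):
```
boot.kernelPackages = pkgs.linuxPackages_6_8;
```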
2024-05-08 21:11:09 +01:00
36ce6ca185 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 2m23s
2024-05-08 16:39:43 +00:00
e3887e320e tywin: add zram swap
All checks were successful
flake / flake (push) Successful in 1m56s
2024-05-06 23:25:08 +01:00
a272cd0661 downloads: add explicit nameservers
All checks were successful
flake / flake (push) Successful in 1m48s
2024-05-06 00:07:25 +01:00
1ca4daab9c locations: move attrset into config block
All checks were successful
flake / flake (push) Successful in 1m42s
2024-04-28 10:39:40 +01:00
745ea58dec homeassistant: update trusted proxies
All checks were successful
flake / flake (push) Successful in 1m46s
2024-04-27 19:14:12 +01:00
348bca745b jorah: add authoritative dns server
All checks were successful
flake / flake (push) Successful in 1m44s
2024-04-27 18:54:46 +01:00
0ef24c14e7 tailscale: update to included nixos module
All checks were successful
flake / flake (push) Successful in 1m43s
2024-04-27 15:36:45 +01:00
d9233021c7 add enable options for modules/common/default
All checks were successful
flake / flake (push) Successful in 2m9s
2024-04-27 13:46:06 +01:00
b39549e1a9 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 2m11s
2024-04-27 11:00:24 +00:00
8fdd915e76 router.home: enable unbound dns server
All checks were successful
flake / flake (push) Successful in 2m0s
2024-04-26 21:40:17 +01:00
62d62500ae renovate: fix gitea actions schedule
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m59s
Also pin the patch version of Gitea actions. This should treat each future
update as an update rather than a relock, which should show the changelog
in the PR.
2024-04-25 19:40:00 +01:00
b012d48e1d chore(deps): update actions/checkout digest to 0ad4b8f
All checks were successful
flake / flake (push) Successful in 1m55s
2024-04-25 15:01:12 +00:00
eba1dae06b chia: update to 2.2.1
All checks were successful
flake / flake (push) Successful in 1m59s
2024-04-24 23:36:51 +01:00
b6ef41cae0 renovate: auto-merge github actions
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m49s
2024-04-23 22:24:28 +01:00
700ca88feb chore(deps): pin cachix/install-nix-action action to 8887e59
All checks were successful
flake / flake (push) Successful in 1m52s
2024-04-23 19:58:42 +00:00
1c75fa88a7 boron.cx: add new dedicated server
All checks were successful
flake / flake (push) Successful in 1m49s
2024-04-23 20:45:44 +01:00
c3447b3ec9 renovate: extend recommended config and pin github actions
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m37s
2024-04-23 20:27:40 +01:00
5350581676 gitea.actions: actually check formatting
All checks were successful
flake / flake (push) Successful in 1m38s
2024-04-23 20:23:54 +01:00
4d1521e4b4 be.lt: add beryllium laptop
All checks were successful
flake / flake (push) Successful in 1m43s
2024-04-21 17:15:14 +01:00
88b33598d7 microserver.parents -> li.pop
All checks were successful
flake / flake (push) Successful in 1m30s
2024-04-20 13:45:00 +01:00
4a09f50889 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m35s
2024-04-19 17:02:52 +00:00
cf76a055e7 renovate: always rebase and update schedule
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m30s
2024-04-17 23:22:29 +01:00
f4f6c66098 jorah: enable zramSwap
All checks were successful
flake / flake (push) Successful in 1m29s
2024-04-17 23:04:04 +01:00
2c432ce986 jorah: start folding@home
All checks were successful
flake / flake (push) Successful in 1m9s
2024-04-17 22:55:52 +01:00
d6b15a1f25 known_hosts: add jorah and theon
All checks were successful
flake / flake (push) Successful in 1m10s
2024-04-14 14:35:38 +01:00
bd34d0e3ad chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m34s
2024-04-14 11:02:06 +00:00
52caf6edf9 gitea.actions: nixify basic docker runner
All checks were successful
flake / flake (push) Successful in 1m37s
2024-04-14 00:09:28 +01:00
016d0e61b5 www: proxy some domains via cloudflare
All checks were successful
flake / flake (push) Successful in 3m38s
2024-04-13 23:02:22 +01:00
b4a33bb6b2 jorah: fix dual networking setup
All checks were successful
flake / flake (push) Successful in 3m35s
2024-04-13 16:45:20 +01:00
8cee990f54 cleanup references to server stranger
All checks were successful
flake / flake (push) Successful in 3m34s
2024-04-10 21:38:08 +01:00
59e5717e00 vm.strangervm: delete
All checks were successful
flake / flake (push) Successful in 3m39s
2024-04-07 21:07:02 +01:00
f2fe064f72 mastodon: stop running
All checks were successful
flake / flake (push) Successful in 4m57s
2024-04-07 19:08:35 +01:00
dd76435ec3 drone: remove server
All checks were successful
flake / flake (push) Successful in 5m9s
2024-04-07 19:00:12 +01:00
85e5c9d00e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 5m8s
2024-04-07 16:12:32 +00:00
ca1751533c update_script: escape quote
All checks were successful
flake / flake (push) Successful in 5m8s
2024-04-07 17:01:56 +01:00
8900423527 flake: deduplicate home-manager
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 16:37:43 +01:00
1ae3cf0c41 ci: ignore tags
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 16:28:34 +01:00
72898701da update_script: remove accidental subshell from echo
All checks were successful
flake / flake (push) Successful in 5m9s
2024-04-07 16:13:13 +01:00
3c693ee42f convert drone workflow to gitea actions
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 15:07:40 +01:00
9752e63f09 all: add easy update script
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-07 14:11:58 +01:00
d4fb381fcf microserver.parents: enable zram
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-05 11:09:48 +01:00
e812e96afc chore(deps): lock file maintenance
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-04 21:19:23 +00:00
d3fb88a328 gitea: update settings
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 20:28:46 +01:00
804aa10048 renovate configuration
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 20:22:10 +01:00
8f493d1335 move renovate.json to default place
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-01 20:03:52 +01:00
9674e6651d renovate: initial config
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 19:34:53 +01:00
88378c3179 deluge: update config options
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-28 22:30:26 +00:00
d682d5434b add links. pointing to matrix
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-27 22:37:31 +00:00
790d0a8a6b homeassistant: add switchbot component
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-18 21:26:34 +00:00
78a024a924 add homeassistant
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 19:46:22 +00:00
5e725b14bb impermanence: fix zsh history file
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 15:00:34 +00:00
7f25cab5f8 www: cleanup emby
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 13:44:07 +00:00
b0e4e2cca1 flake: update 16th March 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 13:43:22 +00:00
80b4305e60 scripts: add update_nixpkgs
All checks were successful
continuous-integration/drone/push Build is passing
Add a simple script to update the `nix shell`/`nix run` nixpkgs to match the
flake's nixpkgs-unstable input. This avoids having it set to the default
nixpkgs-unstable and downloading 30MB tarballs all the time.
2024-03-01 14:44:25 +00:00
90cbec88db flake: update 28th February 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-02-28 13:40:55 +00:00
176 changed files with 4610 additions and 1585 deletions


@@ -1,27 +0,0 @@
---
kind: pipeline
type: docker
name: check
steps:
- name: lint
image: nixos/nix:2.20.1
commands:
- nix --extra-experimental-features 'nix-command flakes' fmt
- git diff --exit-code
- name: check
image: nixos/nix:2.20.1
commands:
- nix --extra-experimental-features 'nix-command flakes' flake check
trigger:
event:
exclude:
- tag
- pull_request
---
kind: signature
hmac: 5af72ec77460d7d914f9177c78febed763ea1a33dc0f0e39e7599bbf8f4ad987
...


@@ -0,0 +1,23 @@
name: flake
on:
push:
branches:
- '**'
tags-ignore:
- '**'
jobs:
flake:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: DeterminateSystems/nix-installer-action@b92f66560d6f97d6576405a7bae901ab57e72b6a # v15
- uses: DeterminateSystems/magic-nix-cache-action@87b14cf437d03d37989d87f0fa5ce4f5dc1a330b # v8
- name: lint
run: |
nix fmt
git diff --exit-code
- name: flake check
run: nix flake check --all-systems
timeout-minutes: 10


@@ -10,3 +10,4 @@ Raspberry Pi images that support Tailscale and headless SSH can be built using a
nixos-generate -f sd-aarch64-installer --system aarch64-linux -c hosts/microserver.home.ts.hillion.co.uk/default.nix
cp SOME_OUTPUT out.img.zst
Alternatively, a Raspberry Pi image with headless SSH can be easily built using the logic in [this repo](https://github.com/Robertof/nixos-docker-sd-image-builder/tree/master).


@@ -0,0 +1,27 @@
{ config, pkgs, ... }:
{
config = {
system.stateVersion = 4;
networking.hostName = "jakehillion-mba-m2-15";
nix = {
useDaemon = true;
};
programs.zsh.enable = true;
security.pam.enableSudoTouchIdAuth = true;
environment.systemPackages = with pkgs; [
fd
htop
mosh
neovim
nix
ripgrep
sapling
];
};
}


@@ -2,19 +2,23 @@
"nodes": {
"agenix": {
"inputs": {
"darwin": "darwin",
"home-manager": "home-manager",
"darwin": [
"darwin"
],
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
],
"systems": "systems"
},
"locked": {
"lastModified": 1703433843,
"narHash": "sha256-nmtA4KqFboWxxoOAA6Y1okHbZh+HsXaMPFkYHsoDRDw=",
"lastModified": 1723293904,
"narHash": "sha256-b+uqzj+Wa6xgMS9aNbX4I+sXeb5biPDi39VgvSFqFvU=",
"owner": "ryantm",
"repo": "agenix",
"rev": "417caa847f9383e111d1397039c9d4337d024bf0",
"rev": "f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41",
"type": "github"
},
"original": {
@@ -26,35 +30,53 @@
"darwin": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1700795494,
"narHash": "sha256-gzGLZSiOhf155FW7262kdHo2YDeugp3VuIFb4/GGng0=",
"lastModified": 1731153869,
"narHash": "sha256-3Ftf9oqOypcEyyrWJ0baVkRpvQqroK/SVBFLvU3nPuc=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d",
"rev": "5c74ab862c8070cbf6400128a1b56abb213656da",
"type": "github"
},
"original": {
"owner": "lnl7",
"ref": "master",
"repo": "nix-darwin",
"type": "github"
}
},
"disko": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1731060864,
"narHash": "sha256-aYE7oAYZ+gPU1mPNhM0JwLAQNgjf0/JK1BF1ln2KBgk=",
"owner": "nix-community",
"repo": "disko",
"rev": "5e40e02978e3bd63c2a6a9fa6fa8ba0e310e747f",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "disko",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1705309234,
"narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"type": "github"
},
"original": {
@@ -66,52 +88,51 @@
"home-manager": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703113217,
"narHash": "sha256-7ulcXOk63TIT2lVDSExj7XzFx09LpdSAPtvgtM7yQPE=",
"lastModified": 1726989464,
"narHash": "sha256-Vl+WVTJwutXkimwGprnEtXc/s/s8sMuXzqXaspIGlwM=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "3bfaacf46133c037bb356193bd2f1765d9dc82c1",
"rev": "2f23fa308a7c067e52dfcc30a0758f47043ec176",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-24.05",
"repo": "home-manager",
"type": "github"
}
},
"home-manager_2": {
"home-manager-unstable": {
"inputs": {
"nixpkgs": [
"nixpkgs"
"nixpkgs-unstable"
]
},
"locked": {
"lastModified": 1706981411,
"narHash": "sha256-cLbLPTL1CDmETVh4p0nQtvoF+FSEjsnJTFpTxhXywhQ=",
"lastModified": 1730837930,
"narHash": "sha256-0kZL4m+bKBJUBQse0HanewWO0g8hDdCvBhudzxgehqc=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "652fda4ca6dafeb090943422c34ae9145787af37",
"rev": "2f607e07f3ac7e53541120536708e824acccfaa8",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-23.11",
"repo": "home-manager",
"type": "github"
}
},
"impermanence": {
"locked": {
"lastModified": 1706639736,
"narHash": "sha256-CaG4j9+UwBDfinxxvJMo6yOonSmSo0ZgnbD7aj2Put0=",
"lastModified": 1730403150,
"narHash": "sha256-W1FH5aJ/GpRCOA7DXT/sJHFpa5r8sq2qAUncWwRZ3Gg=",
"owner": "nix-community",
"repo": "impermanence",
"rev": "cd13c2917eaa68e4c49fea0ff9cada45440d7045",
"rev": "0d09341beeaa2367bac5d718df1404bf2ce45e6f",
"type": "github"
},
"original": {
@@ -121,44 +142,60 @@
"type": "github"
}
},
"nixpkgs": {
"nixos-hardware": {
"locked": {
"lastModified": 1707347730,
"narHash": "sha256-0etC/exQIaqC9vliKhc3eZE2Mm2wgLa0tj93ZF/egvM=",
"lastModified": 1730919458,
"narHash": "sha256-yMO0T0QJlmT/x4HEyvrCyigGrdYfIXX3e5gWqB64wLg=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "6832d0d99649db3d65a0e15fa51471537b2c56a6",
"repo": "nixos-hardware",
"rev": "e1cc1f6483393634aee94514186d21a4871e78d7",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-23.11",
"repo": "nixos-hardware",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1730963269,
"narHash": "sha256-rz30HrFYCHiWEBCKHMffHbMdWJ35hEkcRVU0h7ms3x0=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "83fb6c028368e465cd19bb127b86f971a5e41ebc",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1707268954,
"narHash": "sha256-2en1kvde3cJVc3ZnTy8QeD2oKcseLFjYPLKhIGDanQ0=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "f8e2ebd66d097614d51a56a755450d4ae1632df1",
"type": "github"
"lastModified": 1730867498,
"narHash": "sha256-Ce3a1w7Qf+UEPjVJcXxeSiWyPMngqf1M2EIsmqiluQw=",
"rev": "9240e11a83307a6e8cf2254340782cba4aa782fd",
"type": "tarball",
"url": "https://gitea.hillion.co.uk/api/v1/repos/JakeHillion/nixpkgs/archive/9240e11a83307a6e8cf2254340782cba4aa782fd.tar.gz"
},
"original": {
"owner": "nixos",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
"type": "tarball",
"url": "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz"
}
},
"root": {
"inputs": {
"agenix": "agenix",
"darwin": "darwin",
"disko": "disko",
"flake-utils": "flake-utils",
"home-manager": "home-manager_2",
"home-manager": "home-manager",
"home-manager-unstable": "home-manager-unstable",
"impermanence": "impermanence",
"nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable"
}

flake.nix

@@ -1,60 +1,109 @@
{
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-23.11";
nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable";
nixpkgs.url = "github:nixos/nixpkgs/nixos-24.05";
nixpkgs-unstable.url = "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz";
nixos-hardware.url = "github:nixos/nixos-hardware";
flake-utils.url = "github:numtide/flake-utils";
darwin.url = "github:lnl7/nix-darwin";
darwin.inputs.nixpkgs.follows = "nixpkgs";
agenix.url = "github:ryantm/agenix";
agenix.inputs.nixpkgs.follows = "nixpkgs";
agenix.inputs.darwin.follows = "darwin";
agenix.inputs.home-manager.follows = "home-manager";
home-manager.url = "github:nix-community/home-manager/release-23.11";
home-manager.url = "github:nix-community/home-manager/release-24.05";
home-manager.inputs.nixpkgs.follows = "nixpkgs";
home-manager-unstable.url = "github:nix-community/home-manager";
home-manager-unstable.inputs.nixpkgs.follows = "nixpkgs-unstable";
impermanence.url = "github:nix-community/impermanence/master";
disko.url = "github:nix-community/disko";
disko.inputs.nixpkgs.follows = "nixpkgs";
};
description = "Hillion Nix flake";
outputs = { self, nixpkgs, nixpkgs-unstable, flake-utils, agenix, home-manager, impermanence, ... }@inputs: {
nixosConfigurations =
let
fqdns = builtins.attrNames (builtins.readDir ./hosts);
getSystemOverlays = system: nixpkgsConfig: [
(final: prev: {
"storj" = final.callPackage ./pkgs/storj.nix { };
})
];
mkHost = fqdn:
let system = builtins.readFile ./hosts/${fqdn}/system;
in
nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = inputs;
modules = [
./hosts/${fqdn}/default.nix
./modules/default.nix
outputs =
{ self
, agenix
, darwin
, disko
, flake-utils
, home-manager
, home-manager-unstable
, impermanence
, nixos-hardware
, nixpkgs
, nixpkgs-unstable
, ...
}@inputs:
let
getSystemOverlays = system: nixpkgsConfig: [
(final: prev: {
unstable = nixpkgs-unstable.legacyPackages.${prev.system};
agenix.nixosModules.default
impermanence.nixosModules.impermanence
"storj" = final.callPackage ./pkgs/storj.nix { };
})
];
in
{
nixosConfigurations =
let
fqdns = builtins.attrNames (builtins.readDir ./hosts);
mkHost = fqdn:
let
system = builtins.readFile ./hosts/${fqdn}/system;
func = if builtins.pathExists ./hosts/${fqdn}/unstable then nixpkgs-unstable.lib.nixosSystem else nixpkgs.lib.nixosSystem;
home-manager-pick = if builtins.pathExists ./hosts/${fqdn}/unstable then home-manager-unstable else home-manager;
in
func {
inherit system;
specialArgs = inputs;
modules = [
./hosts/${fqdn}/default.nix
./modules/default.nix
home-manager.nixosModules.default
{
home-manager.sharedModules = [
impermanence.nixosModules.home-manager.impermanence
];
}
agenix.nixosModules.default
impermanence.nixosModules.impermanence
disko.nixosModules.disko
({ config, ... }: {
nix.registry.nixpkgs.flake = nixpkgs; # pin `nix shell` nixpkgs
system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
})
];
};
in
nixpkgs.lib.genAttrs fqdns mkHost;
} // flake-utils.lib.eachDefaultSystem (system: {
formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
});
home-manager-pick.nixosModules.default
{
home-manager.sharedModules = [
impermanence.nixosModules.home-manager.impermanence
];
}
({ config, ... }: {
system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
})
];
};
in
nixpkgs.lib.genAttrs fqdns mkHost;
darwinConfigurations = {
jakehillion-mba-m2-15 = darwin.lib.darwinSystem {
system = "aarch64-darwin";
specialArgs = inputs;
modules = [
./darwin/jakehillion-mba-m2-15/configuration.nix
({ config, ... }: {
nixpkgs.overlays = getSystemOverlays "aarch64-darwin" config.nixpkgs.config;
})
];
};
};
} // flake-utils.lib.eachDefaultSystem (system: {
formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
});
}


@@ -0,0 +1,55 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
];
config = {
system.stateVersion = "23.11";
networking.hostName = "be";
networking.domain = "lt.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.defaults = true;
## Impermanence
custom.impermanence = {
enable = true;
userExtraFiles.jake = [
".ssh/id_ecdsa_sk_keys"
];
};
## WiFi
age.secrets."wifi/be.lt.ts.hillion.co.uk".file = ../../secrets/wifi/be.lt.ts.hillion.co.uk.age;
networking.wireless = {
enable = true;
environmentFile = config.age.secrets."wifi/be.lt.ts.hillion.co.uk".path;
networks = {
"Hillion WPA3 Network".psk = "@HILLION_WPA3_NETWORK_PSK@";
};
};
## Desktop
custom.users.jake.password = true;
custom.desktop.awesome.enable = true;
## Tailscale
age.secrets."tailscale/be.lt.ts.hillion.co.uk".file = ../../secrets/tailscale/be.lt.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/be.lt.ts.hillion.co.uk".path;
};
security.sudo.wheelNeedsPassword = lib.mkForce true;
## Enable btrfs compression
fileSystems."/data".options = [ "compress=zstd" ];
fileSystems."/nix".options = [ "compress=zstd" ];
};
}


@@ -0,0 +1,59 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "xhci_pci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "tmpfs";
fsType = "tmpfs";
options = [ "mode=0755" ];
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/D184-A79B";
fsType = "vfat";
};
fileSystems."/nix" =
{
device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/c8ffa91a-5152-4d84-8995-01232fd5acd6";
fileSystems."/data" =
{
device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
fsType = "btrfs";
options = [ "subvol=data" ];
};
swapDevices = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp0s20f0u1u4.useDHCP = lib.mkDefault true;
# networking.interfaces.wlp1s0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
powerManagement.cpuFreqGovernor = lib.mkDefault "powersave";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}


@@ -0,0 +1,7 @@
# boron.cx.ts.hillion.co.uk
Additional installation step for Clevis/Tang:
$ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/boron.cx.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
$ sudo chown root:root /mnt/data/disk_encryption.jwe
$ sudo chmod 0400 /mnt/data/disk_encryption.jwe


@@ -0,0 +1,13 @@
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://80.229.251.26:7654"
},
{
"url": "http://185.240.111.53:7654"
}
]
}
}


@@ -0,0 +1,181 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
];
config = {
system.stateVersion = "23.11";
networking.hostName = "boron";
networking.domain = "cx.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [ "ip=dhcp" ];
boot.initrd = {
availableKernelModules = [ "igb" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
"disk1-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
custom.defaults = true;
## Kernel
### Explicitly use the latest kernel at time of writing because the LTS
### kernels available in NixOS do not seem to support this server's very
### modern hardware.
### custom.sched_ext.enable implies >=6.12, if this is removed the kernel may need to be pinned again. >=6.10 seems good.
custom.sched_ext.enable = true;
## Enable btrfs compression
fileSystems."/data".options = [ "compress=zstd" ];
fileSystems."/nix".options = [ "compress=zstd" ];
## Impermanence
custom.impermanence = {
enable = true;
cache.enable = true;
userExtraFiles.jake = [
".ssh/id_ecdsa"
".ssh/id_rsa"
];
};
boot.initrd.postDeviceCommands = lib.mkAfter ''
btrfs subvolume delete /cache/system
btrfs subvolume snapshot /cache/empty_snapshot /cache/system
'';
## Custom Services
custom = {
locations.autoServe = true;
www.global.enable = true;
services = {
gitea.actions = {
enable = true;
tokenSecret = ../../secrets/gitea/actions/boron.age;
};
};
};
services.nsd.interfaces = [
"138.201.252.214"
"2a01:4f8:173:23d2::2"
];
## Enable ZRAM to help with root on tmpfs
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Filesystems
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# By default both /data and /nix would be scrubbed. They are the same filesystem so this is wasteful.
fileSystems = [ "/data" ];
};
## Resilio
custom.resilio = {
enable = true;
folders =
let
folderNames = [
"dad"
"joseph"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/plain/${name}";
file = ../../secrets/resilio/plain/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
};
services.resilio.directoryRoot = "/data/sync";
## General usability
### Make podman available for dev tools such as act
virtualisation = {
containers.enable = true;
podman = {
enable = true;
dockerCompat = true;
dockerSocket.enable = true;
};
};
users.users.jake.extraGroups = [ "podman" ];
## Networking
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = true;
"net.ipv6.conf.all.forwarding" = true;
};
networking = {
useDHCP = false;
interfaces = {
enp6s0 = {
name = "eth0";
useDHCP = true;
ipv6.addresses = [{
address = "2a01:4f8:173:23d2::2";
prefixLength = 64;
}];
};
};
defaultGateway6 = {
address = "fe80::1";
interface = "eth0";
};
};
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [ ];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
22 # SSH
3022 # SSH (Gitea) - redirected to 22
53 # DNS
80 # HTTP 1-2
443 # HTTPS 1-2
8080 # Unifi (inform)
];
allowedUDPPorts = lib.mkForce [
53 # DNS
443 # HTTP 3
3478 # Unifi STUN
];
};
};
};
## Tailscale
age.secrets."tailscale/boron.cx.ts.hillion.co.uk".file = ../../secrets/tailscale/boron.cx.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/boron.cx.ts.hillion.co.uk".path;
};
};
}


@@ -0,0 +1,72 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "tmpfs";
fsType = "tmpfs";
options = [ "mode=0755" "size=100%" ];
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/ED9C-4ABC";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
fileSystems."/data" =
{
device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
fsType = "btrfs";
options = [ "subvol=data" ];
};
fileSystems."/cache" =
{
device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
fsType = "btrfs";
options = [ "subvol=cache" ];
};
fileSystems."/nix" =
{
device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
boot.initrd.luks.devices."disk0-crypt" = {
device = "/dev/disk/by-uuid/a68ead16-1bdc-4d26-9e55-62c2be11ceee";
allowDiscards = true;
};
boot.initrd.luks.devices."disk1-crypt" = {
device = "/dev/disk/by-uuid/19bde205-bee4-430d-a4c1-52d635a23963";
allowDiscards = true;
};
swapDevices = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp6s0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}


@@ -0,0 +1,7 @@
# gendry.jakehillion-terminals.ts.hillion.co.uk
Additional installation step for Clevis/Tang:
$ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/gendry.jakehillion-terminals.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
$ sudo chown root:root /mnt/data/disk_encryption.jwe
$ sudo chmod 0400 /mnt/data/disk_encryption.jwe
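
A sanity check before relying on it (an assumption, not part of the original steps; it requires the Tang servers from clevis_config.json to be reachable):
$ clevis decrypt < /mnt/data/disk_encryption.jwe | wc -c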

View File

@ -0,0 +1,14 @@
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://10.64.50.21:7654"
},
{
"url": "http://10.64.50.25:7654"
}
]
}
}
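
For reference: in the clevis sss pin above, `t` is the unlock threshold, so with `t = 1` either Tang server alone is enough to decrypt the JWE.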

View File

@ -2,8 +2,6 @@
{
imports = [
../../modules/common/default.nix
../../modules/spotify/default.nix
./bluetooth.nix
./hardware-configuration.nix
];
@ -17,6 +15,24 @@
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
];
boot.initrd = {
availableKernelModules = [ "r8169" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices."root".secretFile = "/data/disk_encryption.jwe";
};
};
custom.defaults = true;
## Custom scheduler
custom.sched_ext.enable = true;
## Impermanence
custom.impermanence = {
enable = true;
@ -29,6 +45,13 @@
];
};
## Enable ZRAM swap to help with root on tmpfs
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Desktop
custom.users.jake.password = true;
custom.desktop.awesome.enable = true;
@ -36,9 +59,7 @@
## Resilio
custom.resilio.enable = true;
services.resilio.deviceName = "gendry.jakehillion-terminals";
services.resilio.directoryRoot = "/data/sync";
services.resilio.storagePath = "/data/sync/.sync";
custom.resilio.folders =
let
@ -61,9 +82,9 @@
## Tailscale
age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".file = ../../secrets/tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
authKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
};
security.sudo.wheelNeedsPassword = lib.mkForce true;
@ -76,19 +97,13 @@
boot.initrd.kernelModules = [ "amdgpu" ];
services.xserver.videoDrivers = [ "amdgpu" ];
## Spotify
home-manager.users.jake.services.spotifyd.settings = {
global = {
device_name = "Gendry";
device_type = "computer";
bitrate = 320;
};
};
users.users."${config.custom.user}" = {
packages = with pkgs; [
prismlauncher
];
};
## Networking
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
};
}

View File

@ -28,7 +28,10 @@
options = [ "subvol=nix" ];
};
boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
boot.initrd.luks.devices."root" = {
device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
allowDiscards = true;
};
fileSystems."/data" =
{

View File

@ -1,79 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "23.05";
networking.hostName = "jorah";
networking.domain = "cx.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
## Impermanence
custom.impermanence.enable = true;
## Custom Services
custom = {
locations.autoServe = true;
services.version_tracker.enable = true;
www.global.enable = true;
};
## Filesystems
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# By default both /data and /nix would be scrubbed. They are on the same filesystem, so scrubbing both is wasteful.
fileSystems = [ "/data" ];
};
## Networking
systemd.network = {
enable = true;
networks."enp5s0".extraConfig = ''
[Match]
Name = enp5s0
[Network]
Address = 2a01:4f9:4b:3953::2/64
Gateway = fe80::1
'';
};
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
3022 # Gitea SSH (accessed via public 22)
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
enp5s0 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
8080 # Unifi (inform)
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
3478 # Unifi STUN
];
};
};
};
## Tailscale
age.secrets."tailscale/jorah.cx.ts.hillion.co.uk".file = ../../secrets/tailscale/jorah.cx.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/jorah.cx.ts.hillion.co.uk".path;
ipv4Addr = "100.96.143.138";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6260:8f8a";
};
};
}

View File

@ -0,0 +1,50 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
../../modules/rpi/rpi4.nix
];
config = {
system.stateVersion = "23.11";
networking.hostName = "li";
networking.domain = "pop.ts.hillion.co.uk";
custom.defaults = true;
## Custom Services
custom.locations.autoServe = true;
# Networking
## Tailscale
age.secrets."tailscale/li.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/li.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/li.pop.ts.hillion.co.uk".path;
useRoutingFeatures = "server";
extraUpFlags = [ "--advertise-routes" "192.168.1.0/24" ];
};
## Enable ZRAM to make up for 2GB of RAM
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Run a persistent iperf3 server
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
networking.firewall.interfaces = {
"end0" = {
allowedTCPPorts = [
7654 # Tang
];
};
};
};
}

View File

@ -0,0 +1,75 @@
{ config, pkgs, lib, ... }:
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "merlin";
networking.domain = "rig.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "igc" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
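# Taken together (for reference): "ip=dhcp", the igc module and initrd
# networking let clevis reach the Tang servers during early boot, unlocking
# disk0-crypt without a manually entered passphrase.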
boot.kernelPackages = pkgs.linuxPackages_latest;
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
custom.users.jake.password = true;
security.sudo.wheelNeedsPassword = lib.mkForce true;
# Networking
networking = {
interfaces.enp171s0.name = "eth0";
interfaces.enp172s0.name = "eth1";
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [ ];
allowedUDPPorts = lib.mkForce [ ];
};
};
};
## Tailscale
age.secrets."tailscale/merlin.rig.ts.hillion.co.uk".file = ../../secrets/tailscale/merlin.rig.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/merlin.rig.ts.hillion.co.uk".path;
};
};
}

View File

@ -0,0 +1,70 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}
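
As a sketch (not taken from this repo; flags and paths vary by disko version), a layout like this is typically applied at install time with the disko CLI:
$ sudo nix run github:nix-community/disko -- --mode disko ./hosts/merlin.rig.ts.hillion.co.uk/disko.nix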

View File

@ -6,35 +6,23 @@
{
imports =
[
(modulesPath + "/profiles/qemu-guest.nix")
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "ata_piix" "uhci_hcd" "virtio_pci" "virtio_scsi" "sd_mod" "sr_mod" ];
boot.initrd.availableKernelModules = [ "xhci_pci" "thunderbolt" "nvme" "usbhid" "usb_storage" "sd_mod" "rtsx_pci_sdmmc" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "/dev/disk/by-uuid/6d59bd4b-439d-4480-897c-4480ea6fbe56";
fsType = "ext4";
};
fileSystems."/data" =
{
device = "/dev/disk/by-uuid/01a351b8-cf66-4a31-9804-0b4145e69153";
fsType = "btrfs";
};
swapDevices = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.ens18.useDHCP = lib.mkDefault true;
# networking.interfaces.tailscale0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp171s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp172s0.useDHCP = lib.mkDefault true;
# networking.interfaces.wlp173s0f0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

View File

@ -3,7 +3,6 @@
{
imports = [
./hardware-configuration.nix
../../modules/common/default.nix
../../modules/rpi/rpi4.nix
];
@ -13,14 +12,17 @@
networking.hostName = "microserver";
networking.domain = "home.ts.hillion.co.uk";
custom.defaults = true;
## Custom Services
custom.locations.autoServe = true;
# Networking
## Tailscale
age.secrets."tailscale/microserver.home.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.home.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
advertiseRoutes = [ "10.64.50.0/24" "10.239.19.0/24" ];
advertiseExitNode = true;
authKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
};
## Enable IoT VLAN
@ -31,18 +33,24 @@
};
};
## Enable IP forwarding for Tailscale
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = true;
hardware = {
bluetooth.enable = true;
};
## Run a persistent iperf3 server
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
networking.firewall.interfaces."tailscale0".allowedTCPPorts = [
1883 # MQTT server
];
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall.interfaces = {
"eth0" = {
allowedUDPPorts = [
];
allowedTCPPorts = [
7654 # Tang
];
};
};
};
}

View File

@ -1,35 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
../../modules/common/default.nix
../../modules/rpi/rpi4.nix
];
config = {
system.stateVersion = "22.05";
networking.hostName = "microserver";
networking.domain = "parents.ts.hillion.co.uk";
# Networking
## Tailscale
age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.parents.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".path;
advertiseRoutes = [ "192.168.1.0/24" ];
};
## Enable IP forwarding for Tailscale
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = true;
};
## Run a persistent iperf3 server
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
};
}

View File

@ -0,0 +1,7 @@
# phoenix.st.ts.hillion.co.uk
Additional installation step for Clevis/Tang:
$ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/phoenix.st.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
$ sudo chown root:root /mnt/data/disk_encryption.jwe
$ sudo chmod 0400 /mnt/data/disk_encryption.jwe

View File

@ -0,0 +1,14 @@
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://10.64.50.21:7654"
},
{
"url": "http://10.64.50.25:7654"
}
]
}
}

View File

@ -0,0 +1,161 @@
{ config, pkgs, lib, ... }:
let
zpool_name = "practical-defiant-coffee";
in
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "phoenix";
networking.domain = "st.ts.hillion.co.uk";
networking.hostId = "4d7241e9";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
"zfs.zfs_arc_max=34359738368"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "igc" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
"disk1-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
custom.users.jake.password = true; # TODO: remove me once booting has stabilised
## Filesystems
boot.supportedFilesystems = [ "zfs" ];
boot.zfs = {
forceImportRoot = false;
extraPools = [ zpool_name ];
};
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# Scrubbing all filesystems would include the btrfs portions of every hard
# drive. That would take forever and is redundant, as those drives are already
# read in full regularly.
fileSystems = [ "/data" ];
};
services.zfs.autoScrub = {
enable = true;
interval = "Wed, 02:00";
};
## Resilio
custom.resilio = {
enable = true;
backups.enable = true;
folders =
let
folderNames = [
"dad"
"joseph"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/plain/${name}";
file = ../../secrets/resilio/plain/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
};
services.resilio.directoryRoot = "/${zpool_name}/sync";
## Chia
age.secrets."chia/farmer.key" = {
file = ../../secrets/chia/farmer.key.age;
owner = "chia";
group = "chia";
};
custom.chia = {
enable = true;
keyFile = config.age.secrets."chia/farmer.key".path;
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 8;
};
## Restic
custom.services.restic.path = "/${zpool_name}/backups/restic";
## Backups
### Git
custom.backups.git = {
enable = true;
extraRepos = [ "https://gitea.hillion.co.uk/JakeHillion/nixos.git" ];
};
## Downloads
custom.services.downloads = {
metadataPath = "/${zpool_name}/downloads/metadata";
downloadCachePath = "/${zpool_name}/downloads/torrents";
filmsPath = "/${zpool_name}/media/films";
tvPath = "/${zpool_name}/media/tv";
};
## Plex
users.users.plex.extraGroups = [ "mediaaccess" ];
services.plex.enable = true;
## Networking
networking = {
interfaces.enp4s0.name = "eth0";
interfaces.enp5s0.name = "eth1";
interfaces.enp6s0.name = "eth2";
interfaces.enp8s0.name = "eth3";
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
32400 # Plex
];
allowedUDPPorts = lib.mkForce [ ];
};
};
};
## Tailscale
age.secrets."tailscale/phoenix.st.ts.hillion.co.uk".file = ../../secrets/tailscale/phoenix.st.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/phoenix.st.ts.hillion.co.uk".path;
};
};
}

View File

@ -0,0 +1,103 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
disk1 = {
type = "disk";
device = "/dev/nvme1n1";
content = {
type = "gpt";
partitions = {
disk1-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk1-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
extraArgs = [
"-d raid1"
"/dev/mapper/disk0-crypt"
];
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}

View File

@ -9,23 +9,11 @@
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.availableKernelModules = [ "nvme" "ahci" "xhci_pci" "thunderbolt" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "/dev/disk/by-uuid/cb48d4ed-d268-490c-9977-2b5d31ce2c1b";
fsType = "btrfs";
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/BC57-0AF6";
fsType = "vfat";
};
fileSystems."/mnt/d0" =
{
device = "/dev/disk/by-uuid/9136434d-d883-4118-bd01-903f720e5ce1";
@ -62,14 +50,28 @@
fsType = "btrfs";
};
swapDevices = [ ];
fileSystems."/mnt/d6" =
{
device = "/dev/disk/by-uuid/b461e07d-39ab-46b4-b1d1-14c2e0791915";
fsType = "btrfs";
};
fileSystems."/mnt/d7" =
{
device = "/dev/disk/by-uuid/eb8d32d0-e506-449b-8dbc-585ba05c4252";
fsType = "btrfs";
};
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp7s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp4s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp5s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp6s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp8s0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

View File

@ -0,0 +1 @@
x86_64-linux

View File

@ -2,7 +2,6 @@
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
@ -19,6 +18,8 @@
"net.ipv4.conf.all.forwarding" = true;
};
custom.defaults = true;
## Interactive password
custom.users.jake.password = true;
@ -31,6 +32,14 @@
nat.enable = lib.mkForce false;
useDHCP = false;
vlans = {
cameras = {
id = 3;
interface = "eth2";
};
};
interfaces = {
enp1s0 = {
name = "eth0";
@ -55,6 +64,14 @@
}
];
};
cameras /* cameras@eth2 */ = {
ipv4.addresses = [
{
address = "10.133.145.1";
prefixLength = 24;
}
];
};
enp4s0 = { name = "eth3"; };
enp5s0 = { name = "eth4"; };
enp6s0 = { name = "eth5"; };
@ -81,8 +98,10 @@
ip protocol icmp counter accept comment "accept all ICMP types"
iifname "eth0" ct state { established, related } counter accept
iifname "eth0" drop
iifname "eth0" tcp dport 22 counter accept comment "SSH"
iifname { "eth0", "cameras" } ct state { established, related } counter accept
iifname { "eth0", "cameras" } drop
}
chain forward {
@ -91,6 +110,7 @@
iifname {
"eth1",
"eth2",
"tailscale0",
} oifname {
"eth0",
} counter accept comment "Allow trusted LAN to WAN"
@ -100,19 +120,14 @@
} oifname {
"eth1",
"eth2",
} ct state established,related counter accept comment "Allow established back to LANs"
"tailscale0",
} ct state { established,related } counter accept comment "Allow established back to LANs"
ip daddr 10.64.50.20 tcp dport 32400 counter accept comment "Plex"
iifname "tailscale0" oifname { "eth1", "eth2" } counter accept comment "Allow LAN access from Tailscale"
iifname { "eth1", "eth2" } oifname "tailscale0" ct state { established,related } counter accept comment "Allow established back to Tailscale"
ip daddr 10.64.50.20 tcp dport 8444 counter accept comment "Chia"
ip daddr 10.64.50.20 tcp dport 28967 counter accept comment "zfs.tywin.storj"
ip daddr 10.64.50.20 udp dport 28967 counter accept comment "zfs.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28968 counter accept comment "d0.tywin.storj"
ip daddr 10.64.50.20 udp dport 28968 counter accept comment "d0.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28969 counter accept comment "d1.tywin.storj"
ip daddr 10.64.50.20 udp dport 28969 counter accept comment "d1.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28970 counter accept comment "d2.tywin.storj"
ip daddr 10.64.50.20 udp dport 28970 counter accept comment "d2.tywin.storj"
ip daddr 10.64.50.27 tcp dport 32400 counter accept comment "Plex"
ip daddr 10.64.50.21 tcp dport 7654 counter accept comment "Tang"
}
}
@ -120,22 +135,17 @@
chain prerouting {
type nat hook prerouting priority filter; policy accept;
iifname eth0 tcp dport 32400 counter dnat to 10.64.50.20
iifname eth0 tcp dport 8444 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28967 counter dnat to 10.64.50.20
iifname eth0 udp dport 28967 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28968 counter dnat to 10.64.50.20
iifname eth0 udp dport 28968 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28969 counter dnat to 10.64.50.20
iifname eth0 udp dport 28969 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28970 counter dnat to 10.64.50.20
iifname eth0 udp dport 28970 counter dnat to 10.64.50.20
iifname eth0 tcp dport 32400 counter dnat to 10.64.50.27
iifname eth0 tcp dport 7654 counter dnat to 10.64.50.21
}
chain postrouting {
type nat hook postrouting priority filter; policy accept;
oifname "eth0" masquerade
iifname tailscale0 oifname eth1 snat to 10.64.50.1
iifname tailscale0 oifname eth2 snat to 10.239.19.1
}
}
'';
@ -149,12 +159,42 @@
settings = {
interfaces-config = {
interfaces = [ "eth1" "eth2" ];
interfaces = [ "eth1" "eth2" "cameras" ];
};
lease-database = {
type = "memfile";
persist = false;
persist = true;
name = "/var/lib/kea/dhcp4.leases";
};
option-def = [
{
name = "cookie";
space = "vendor-encapsulated-options-space";
code = 1;
type = "string";
array = false;
}
];
client-classes = [
{
name = "APC";
test = "option[vendor-class-identifier].text == 'APC'";
option-data = [
{
always-send = true;
name = "vendor-encapsulated-options";
}
{
name = "cookie";
space = "vendor-encapsulated-options-space";
code = 1;
data = "1APC";
}
];
}
];
subnet4 = [
{
subnet = "10.64.50.0/24";
@ -173,23 +213,25 @@
}
{
name = "domain-name-servers";
data = "1.1.1.1, 8.8.8.8";
}
];
reservations = [
{
# tywin.storage.ts.hillion.co.uk
hw-address = "c8:7f:54:6d:e1:03";
ip-address = "10.64.50.20";
hostname = "tywin";
}
{
# syncbox
hw-address = "00:1e:06:49:06:1e";
ip-address = "10.64.50.22";
hostname = "syncbox";
data = "10.64.50.1, 1.1.1.1, 8.8.8.8";
}
];
reservations = lib.lists.remove null (lib.lists.imap0
(i: el: if el == null then null else {
ip-address = "10.64.50.${toString (20 + i)}";
inherit (el) hw-address hostname;
}) [
null
{ hostname = "microserver"; hw-address = "e4:5f:01:b4:58:95"; }
{ hostname = "theon"; hw-address = "00:1e:06:49:06:1e"; }
{ hostname = "server-switch"; hw-address = "84:d8:1b:9d:0d:85"; }
{ hostname = "apc-ap7921"; hw-address = "00:c0:b7:6b:f4:34"; }
{ hostname = "sodium"; hw-address = "d8:3a:dd:c3:d6:2b"; }
{ hostname = "gendry"; hw-address = "18:c0:4d:35:60:1e"; }
{ hostname = "phoenix"; hw-address = "a8:b8:e0:04:17:a5"; }
{ hostname = "merlin"; hw-address = "b0:41:6f:13:20:14"; }
{ hostname = "stinger"; hw-address = "7c:83:34:be:30:dd"; }
]);
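# Derived mapping (for reference, not in the original): the leading null keeps
# 10.64.50.20 unassigned, so microserver = 10.64.50.21, theon = .22,
# server-switch = .23, apc-ap7921 = .24, sodium = .25, gendry = .26,
# phoenix = .27, merlin = .28 and stinger = .29. This matches the Tang pins
# (.21 and .25) and the Plex DNAT target (.27) elsewhere in this diff.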
}
{
subnet = "10.239.19.0/24";
@ -208,37 +250,113 @@
}
{
name = "domain-name-servers";
data = "1.1.1.1, 8.8.8.8";
data = "10.239.19.1, 1.1.1.1, 8.8.8.8";
}
];
reservations = [
{
# bedroom-everything-presence-one
hw-address = "40:22:d8:e0:1d:50";
ip-address = "10.239.19.2";
hostname = "bedroom-everything-presence-one";
}
{
# living-room-everything-presence-one
hw-address = "40:22:d8:e0:0f:78";
ip-address = "10.239.19.3";
hostname = "living-room-everything-presence-one";
}
{
hw-address = "a0:7d:9c:b0:f0:14";
ip-address = "10.239.19.4";
hostname = "hallway-wall-tablet";
}
{
hw-address = "d8:3a:dd:c3:d6:2b";
ip-address = "10.239.19.5";
hostname = "sodium";
}
{
hw-address = "48:da:35:6f:f2:4b";
ip-address = "10.239.19.6";
hostname = "hammer";
}
{
hw-address = "48:da:35:6f:83:b8";
ip-address = "10.239.19.7";
hostname = "charlie";
}
];
}
{
subnet = "10.133.145.0/24";
interface = "cameras";
pools = [{
pool = "10.133.145.64 - 10.133.145.254";
}];
option-data = [
{
name = "routers";
data = "10.133.145.1";
}
{
name = "broadcast-address";
data = "10.133.145.255";
}
{
name = "domain-name-servers";
data = "1.1.1.1, 8.8.8.8";
}
];
reservations = [
];
}
];
};
};
};
unbound = {
enable = true;
settings = {
server = {
interface = [
"127.0.0.1"
"10.64.50.1"
"10.239.19.1"
];
access-control = [
"10.64.50.0/24 allow"
"10.239.19.0/24 allow"
];
};
forward-zone = [
{
name = ".";
forward-tls-upstream = "yes";
forward-addr = [
"1.1.1.1#cloudflare-dns.com"
"1.0.0.1#cloudflare-dns.com"
"8.8.8.8#dns.google"
"8.8.4.4#dns.google"
];
}
];
};
};
};
## Tailscale
age.secrets."tailscale/router.home.ts.hillion.co.uk".file = ../../secrets/tailscale/router.home.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
ipv4Addr = "100.105.71.48";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
authKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
useRoutingFeatures = "server";
extraSetFlags = [
"--advertise-routes"
"10.64.50.0/24,10.239.19.0/24,10.133.145.0/24"
"--advertise-exit-node"
"--netfilter-mode=off"
];
};
## Enable btrfs compression
@ -262,9 +380,34 @@
};
services.caddy = {
enable = true;
virtualHosts."http://graphs.router.home.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
extraConfig = "reverse_proxy unix///run/netdata/netdata.sock";
virtualHosts = {
"graphs.router.home.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy unix///run/netdata/netdata.sock
'';
};
"hammer.kvm.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy http://10.239.19.6
'';
};
"charlie.kvm.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy http://10.239.19.7
'';
};
};
};
users.users.caddy.extraGroups = [ "netdata" ];

View File

@ -0,0 +1,103 @@
{ config, pkgs, lib, nixos-hardware, ... }:
{
imports = [
"${nixos-hardware}/raspberry-pi/5/default.nix"
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "sodium";
networking.domain = "pop.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.defaults = true;
## Enable btrfs compression
fileSystems."/data".options = [ "compress=zstd" ];
fileSystems."/nix".options = [ "compress=zstd" ];
## Impermanence
custom.impermanence = {
enable = true;
cache.enable = true;
};
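# Reset /cache/tmp to an empty btrfs snapshot on every boot so the
# bind-mounted /tmp below starts clean.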
boot.initrd.postDeviceCommands = lib.mkAfter ''
btrfs subvolume delete /cache/tmp
btrfs subvolume snapshot /cache/empty_snapshot /cache/tmp
chmod 1777 /cache/tmp
'';
## CA server
custom.ca.service.enable = true;
### Nix only supports build-dir from 2.22. Bind mount /tmp to something persistent instead.
fileSystems."/tmp" = {
device = "/cache/tmp";
options = [ "bind" ];
};
# nix = {
# settings = {
# build-dir = "/cache/tmp/";
# };
# };
## Custom Services
custom.locations.autoServe = true;
custom.www.home.enable = true;
custom.www.iot.enable = true;
custom.services.isponsorblocktv.enable = true;
# Networking
networking = {
interfaces.end0.name = "eth0";
vlans = {
iot = {
id = 2;
interface = "eth0";
};
};
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
7654 # Tang
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
iot = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
};
};
## Tailscale
age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/sodium.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".path;
};
};
}

View File

@ -9,9 +9,9 @@
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" "usbhid" "usb_storage" "sr_mod" ];
boot.initrd.availableKernelModules = [ "usbhid" "usb_storage" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-amd" ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
fileSystems."/" =
@ -21,24 +21,32 @@
options = [ "mode=0755" ];
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/417B-1063";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
fileSystems."/nix" =
{
device = "/dev/disk/by-id/nvme-KXG60ZNV512G_TOSHIBA_106S10VHT9LM_1-part2";
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
fileSystems."/data" =
{
device = "/dev/disk/by-id/nvme-KXG60ZNV512G_TOSHIBA_106S10VHT9LM_1-part2";
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=data" ];
};
fileSystems."/boot" =
fileSystems."/cache" =
{
device = "/dev/disk/by-uuid/4D7E-8DE8";
fsType = "vfat";
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=cache" ];
};
swapDevices = [ ];
@ -48,8 +56,8 @@
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp5s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enu1u4.useDHCP = lib.mkDefault true;
# networking.interfaces.wlan0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}

View File

@ -0,0 +1 @@
aarch64-linux

View File

@ -0,0 +1,84 @@
{ config, pkgs, lib, ... }:
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "stinger";
networking.domain = "pop.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "r8169" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
hardware = {
bluetooth.enable = true;
};
# Networking
networking = {
interfaces.enp1s0.name = "eth0";
vlans = {
iot = {
id = 2;
interface = "eth0";
};
};
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
1400 # HA Sonos
21063 # HomeKit
];
allowedUDPPorts = lib.mkForce [
5353 # HomeKit
];
};
};
};
## Tailscale
age.secrets."tailscale/stinger.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/stinger.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/stinger.pop.ts.hillion.co.uk".path;
};
};
}

View File

@ -0,0 +1,70 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}

View File

@ -0,0 +1,28 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp0s20f0u2.useDHCP = lib.mkDefault true;
# networking.interfaces.enp1s0.useDHCP = lib.mkDefault true;
# networking.interfaces.wlo1.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

View File

@ -0,0 +1 @@
x86_64-linux

View File

@ -2,7 +2,6 @@
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
@ -15,14 +14,18 @@
boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true;
custom.defaults = true;
## Custom Services
custom = {
locations.autoServe = true;
};
## Networking
networking.useNetworkd = true;
systemd.network.enable = true;
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
@ -39,11 +42,9 @@
## Tailscale
age.secrets."tailscale/theon.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/theon.storage.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/theon.storage.ts.hillion.co.uk".path;
ipv4Addr = "100.104.142.22";
ipv6Addr = "fd7a:115c:a1e0::4aa8:8e16";
authKeyFile = config.age.secrets."tailscale/theon.storage.ts.hillion.co.uk".path;
};
## Packages

View File

@ -1,223 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "22.11";
networking.hostName = "tywin";
networking.domain = "storage.ts.hillion.co.uk";
networking.hostId = "2a9b6df5";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.locations.autoServe = true;
## Tailscale
age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/tywin.storage.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".path;
ipv4Addr = "100.115.31.91";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6273:1f5b";
};
## Filesystems
fileSystems."/".options = [ "compress=zstd" ];
boot.supportedFilesystems = [ "zfs" ];
boot.zfs = {
forceImportRoot = false;
extraPools = [ "data" ];
};
boot.kernelParams = [ "zfs.zfs_arc_max=25769803776" ];
services.zfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
};
## Backups
### Git
age.secrets."git/git_backups_ecdsa".file = ../../secrets/git/git_backups_ecdsa.age;
age.secrets."git/git_backups_remotes".file = ../../secrets/git/git_backups_remotes.age;
custom.backups.git = {
enable = true;
sshKey = config.age.secrets."git/git_backups_ecdsa".path;
reposFile = config.age.secrets."git/git_backups_remotes".path;
repos = [ "https://gitea.hillion.co.uk/JakeHillion/nixos.git" ];
};
## Resilio
custom.resilio.enable = true;
services.resilio.deviceName = "tywin.storage";
services.resilio.directoryRoot = "/data/users/jake/sync";
services.resilio.storagePath = "/data/users/jake/sync/.sync";
custom.resilio.folders =
let
folderNames = [
"dad"
"joseph"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/plain/${name}";
file = ../../secrets/resilio/plain/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
age.secrets."resilio/restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "rslsync";
group = "rslsync";
};
services.restic.backups."sync" = {
repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
user = "rslsync";
passwordFile = config.age.secrets."resilio/restic/128G.key".path;
timerConfig = {
Persistent = true;
OnUnitInactiveSec = "15m";
RandomizedDelaySec = "5m";
};
paths = [ "/data/users/jake/sync" ];
exclude = [
"/data/users/jake/sync/.sync"
"/data/users/jake/sync/*/.sync"
"/data/users/jake/sync/resources/media/films"
"/data/users/jake/sync/resources/media/iso"
"/data/users/jake/sync/resources/media/tv"
"/data/users/jake/sync/dad/media"
];
};
## Restic
age.secrets."restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "restic";
group = "restic";
};
age.secrets."restic/1.6T.key" = {
file = ../../secrets/restic/1.6T.age;
owner = "restic";
group = "restic";
};
services.restic.server = {
enable = true;
appendOnly = true;
extraFlags = [ "--no-auth" ];
dataDir = "/data/backups/restic";
listenAddress = "127.0.0.1:8000"; # TODO: can this be a Unix socket?
};
services.caddy = {
enable = true;
virtualHosts."http://restic.tywin.storage.ts.hillion.co.uk".extraConfig = ''
bind ${config.custom.tailscale.ipv4Addr} ${config.custom.tailscale.ipv6Addr}
reverse_proxy http://localhost:8000
'';
};
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### starts too late. The upstream NixOS Caddy unit does restart on failure,
### but restarting is prevented on exit code 1. Setting
### RestartPreventExitStatus to 0 (a non-failure code) overrides this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
services.restic.backups."prune-128G" = {
repository = "/data/backups/restic/128G";
user = "restic";
passwordFile = config.age.secrets."restic/128G.key".path;
timerConfig = {
Persistent = true;
OnCalendar = "02:30";
RandomizedDelaySec = "1h";
};
pruneOpts = [
"--keep-last 48"
"--keep-within-hourly 7d"
"--keep-within-daily 1m"
"--keep-within-weekly 6m"
"--keep-within-monthly 24m"
];
};
services.restic.backups."prune-1.6T" = {
repository = "/data/backups/restic/1.6T";
user = "restic";
passwordFile = config.age.secrets."restic/1.6T.key".path;
timerConfig = {
Persistent = true;
OnCalendar = "Wed, 02:30";
RandomizedDelaySec = "4h";
};
pruneOpts = [
"--keep-within-daily 14d"
"--keep-within-weekly 2m"
"--keep-within-monthly 18m"
];
};
## Chia
age.secrets."chia/farmer.key" = {
file = ../../secrets/chia/farmer.key.age;
owner = "chia";
group = "chia";
};
custom.chia = {
enable = true;
openFirewall = true;
keyFile = config.age.secrets."chia/farmer.key".path;
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 7;
};
## Downloads
custom.services.downloads = {
metadataPath = "/data/downloads/metadata";
downloadCachePath = "/data/downloads/torrents";
filmsPath = "/data/media/films";
tvPath = "/data/media/tv";
};
## Plex
users.users.plex.extraGroups = [ "mediaaccess" ];
services.plex = {
enable = true;
openFirewall = true;
};
## Firewall
networking.firewall.interfaces."tailscale0".allowedTCPPorts = [
80 # Caddy (restic.tywin.storage.ts.)
14002 # Storj Dashboard (d0.)
14003 # Storj Dashboard (d1.)
14004 # Storj Dashboard (d2.)
14005 # Storj Dashboard (d3.)
];
};
}

View File

@ -1,67 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "22.05";
networking.hostName = "vm";
networking.domain = "strangervm.ts.hillion.co.uk";
boot.loader.grub = {
enable = true;
device = "/dev/sda";
};
## Custom Services
custom = {
locations.autoServe = true;
drone.server.path = "/data/drone";
};
## Networking
networking.interfaces.ens18.ipv4.addresses = [{
address = "10.72.164.3";
prefixLength = 24;
}];
networking.defaultGateway = "10.72.164.1";
networking.firewall = {
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
trustedInterfaces = lib.mkForce [
"lo"
"tailscale0"
];
interfaces = {
ens18 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
};
};
## Tailscale
age.secrets."tailscale/vm.strangervm.ts.hillion.co.uk".file = ../../secrets/tailscale/vm.strangervm.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/vm.strangervm.ts.hillion.co.uk".path;
ipv4Addr = "100.110.89.111";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:626e:596f";
};
## Backups
services.postgresqlBackup.location = "/data/backup/postgres";
};
}

View File

@ -2,7 +2,8 @@
{
imports = [
./git.nix
./git/default.nix
./homeassistant.nix
./matrix.nix
];
}

View File

@ -7,25 +7,17 @@ in
options.custom.backups.git = {
enable = lib.mkEnableOption "git";
repos = lib.mkOption {
extraRepos = lib.mkOption {
description = "A list of remotes to clone.";
type = with lib.types; listOf str;
default = [ ];
};
reposFile = lib.mkOption {
description = "A file containing the remotes to clone, one per line.";
type = with lib.types; nullOr str;
default = null;
};
sshKey = lib.mkOption {
description = "SSH private key to use when cloning repositories over SSH.";
type = with lib.types; nullOr str;
default = null;
};
};
config = lib.mkIf cfg.enable {
age.secrets."git-backups/restic/128G".file = ../../secrets/restic/128G.age;
age.secrets."git/git_backups_ecdsa".file = ../../../secrets/git/git_backups_ecdsa.age;
age.secrets."git/git_backups_remotes".file = ../../../secrets/git/git_backups_remotes.age;
age.secrets."git-backups/restic/128G".file = ../../../secrets/restic/128G.age;
systemd.services.backup-git = {
description = "Git repo backup service.";
@ -37,9 +29,10 @@ in
WorkingDirectory = "%C/backup-git";
LoadCredential = [
"id_ecdsa:${config.age.secrets."git/git_backups_ecdsa".path}"
"repos_file:${config.age.secrets."git/git_backups_remotes".path}"
"restic_password:${config.age.secrets."git-backups/restic/128G".path}"
] ++ (if cfg.sshKey == null then [ ] else [ "id_ecdsa:${cfg.sshKey}" ])
++ (if cfg.reposFile == null then [ ] else [ "repos_file:${cfg.reposFile}" ]);
];
};
environment = {
@ -48,11 +41,12 @@ in
};
script = ''
set -x
shopt -s nullglob
# Read and deduplicate repos
${if cfg.reposFile == null then "" else "readarray -t raw_repos < $CREDENTIALS_DIRECTORY/repos_file"}
declare -A repos=(${builtins.concatStringsSep " " (builtins.map (x : "[${x}]=1") cfg.repos)})
readarray -t raw_repos < $CREDENTIALS_DIRECTORY/repos_file
declare -A repos=(${builtins.concatStringsSep " " (builtins.map (x : "[${x}]=1") cfg.extraRepos)})
for repo in ''${raw_repos[@]}; do repos[$repo]=1; done
# Clean up existing repos
@ -79,7 +73,7 @@ in
# Backup to Restic
${pkgs.restic}/bin/restic \
-r rest:http://restic.tywin.storage.ts.hillion.co.uk/128G \
-r rest:https://restic.ts.hillion.co.uk/128G \
--cache-dir .restic --exclude .restic \
backup .
@ -93,9 +87,9 @@ in
wantedBy = [ "timers.target" ];
timerConfig = {
Persistent = true;
OnBootSec = "10m";
OnUnitInactiveSec = "15m";
RandomizedDelaySec = "5m";
Unit = "backup-git.service";
};
};
};

View File

@ -0,0 +1 @@
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIc3WVROMCifYtqHRWf5gZAOQFdpbcSYOC0JckKzUVM5sGdXtw3VXNiVqY3npdMizS4e1V8Hh77UecD3q9CLkMA= backups-git@nixos

View File

@ -0,0 +1,59 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.backups.homeassistant;
in
{
options.custom.backups.homeassistant = {
enable = lib.mkEnableOption "homeassistant";
};
config = lib.mkIf cfg.enable {
age.secrets."backups/homeassistant/restic/128G" = {
file = ../../secrets/restic/128G.age;
owner = "hass";
group = "hass";
};
age.secrets."backups/homeassistant/restic/1.6T" = {
file = ../../secrets/restic/1.6T.age;
owner = "postgres";
group = "postgres";
};
services = {
postgresqlBackup = {
enable = true;
compression = "none"; # for better diffing
databases = [ "homeassistant" ];
};
restic.backups = {
"homeassistant-config" = {
user = "hass";
timerConfig = {
OnCalendar = "03:00";
RandomizedDelaySec = "60m";
};
repository = "rest:https://restic.ts.hillion.co.uk/128G";
passwordFile = config.age.secrets."backups/homeassistant/restic/128G".path;
paths = [
config.services.home-assistant.configDir
];
};
"homeassistant-database" = {
user = "postgres";
timerConfig = {
OnCalendar = "03:00";
RandomizedDelaySec = "60m";
};
repository = "rest:https://restic.ts.hillion.co.uk/1.6T";
passwordFile = config.age.secrets."backups/homeassistant/restic/1.6T".path;
paths = [
"${config.services.postgresqlBackup.location}/homeassistant.sql"
];
};
};
};
};
}
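
A possible way to list what these jobs have stored (a sketch; the key path is illustrative and must match the age secret above):
$ restic -r rest:https://restic.ts.hillion.co.uk/128G --password-file /path/to/128G.key snapshots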

View File

@ -24,7 +24,7 @@ in
OnCalendar = "03:00";
RandomizedDelaySec = "60m";
};
repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
repository = "rest:https://restic.ts.hillion.co.uk/128G";
passwordFile = config.age.secrets."backups/matrix/restic/128G".path;
paths = [
"${config.services.postgresqlBackup.location}/matrix-synapse.sql"

modules/ca/README.md Normal file
View File

@ -0,0 +1,11 @@
# ca
Getting the certificates in the right place is a manual process (for now, at least). This keeps the tightest control over the root certificate's key and allows it to be cycled manually. The manual commands should be run on a trusted machine.
Creating a 10 year root certificate:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME' cert.pem key.pem --kty=EC --curve=P-521 --profile=root-ca --not-after=87600h
Creating the intermediate key:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME (sodium.pop.ts.hillion.co.uk)' intermediate_cert.pem intermediate_key.pem --kty=EC --curve=P-521 --profile=intermediate-ca --not-after=8760h --ca=$NIXOS_ROOT/modules/ca/cert.pem --ca-key=DOWNLOADED_KEY.pem
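
Either certificate can then be verified with step's inspector (not part of the original steps):
nix run nixpkgs#step-cli -- certificate inspect cert.pem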

modules/ca/cert.pem Normal file
View File

@ -0,0 +1,13 @@
-----BEGIN CERTIFICATE-----
MIIB+TCCAVqgAwIBAgIQIZdaIUsuJdjnu7DQP1N8oTAKBggqhkjOPQQDBDAXMRUw
EwYDVQQDEwxIaWxsaW9uIEFDTUUwHhcNMjQwODAxMjIyMjEwWhcNMzQwNzMwMjIy
MjEwWjAXMRUwEwYDVQQDEwxIaWxsaW9uIEFDTUUwgZswEAYHKoZIzj0CAQYFK4EE
ACMDgYYABAAJI3z1PrV97EFc1xaENcr6ML1z6xdXTy+ReHtf42nWsw+c3WDKzJ45
+xHJ/p2BTOR5+NQ7RGQQ68zmFJnEYTYDogAw6U9YzxxDGlG1HlgnZ9PPmXoF+PFl
Zy2WZCiDPx5KDJcjTPzLV3ITt4fl3PMA12BREVeonvrvRLcpVrMfS2b7wKNFMEMw
DgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwHQYDVR0OBBYEFFBT
fMT0uUbS+lVUbGKK8/SZHPISMAoGCCqGSM49BAMEA4GMADCBiAJCAPNIwrQztPrN
MaHB3J0lNVODIGwQWblt99vnjqIWOKJhgckBxaElyInsyt8dlnmTCpOCJdY4BA+K
Nr87AfwIWdAaAkIBV5i4zXPXVKblGKnmM0FomFSbq2cYE3pmi5BO1StakH1kEHlf
vbkdwFgkw2MlARp0Ka3zbWivBG9zjPoZtsL/8tk=
-----END CERTIFICATE-----

modules/ca/consumer.nix Normal file
View File

@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.consumer;
in
{
options.custom.ca.consumer = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
security.pki.certificates = [ (builtins.readFile ./cert.pem) ];
};
}

modules/ca/default.nix Normal file
View File

@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./consumer.nix
./service.nix
];
}

modules/ca/service.nix Normal file
View File

@ -0,0 +1,48 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.service;
in
{
options.custom.ca.service = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
users.users.step-ca.uid = config.ids.uids.step-ca;
users.groups.step-ca.gid = config.ids.gids.step-ca;
services.step-ca = {
enable = true;
address = config.custom.dns.tailscale.ipv4;
port = 8443;
intermediatePasswordFile = "/data/system/ca/intermediate.psk";
settings = {
root = ./cert.pem;
crt = "/data/system/ca/intermediate.crt";
key = "/data/system/ca/intermediate.pem";
dnsNames = [ "ca.ts.hillion.co.uk" ];
logger = { format = "text"; };
db = {
type = "badgerv2";
dataSource = "/var/lib/step-ca/db";
};
authority = {
provisioners = [
{
type = "ACME";
name = "acme";
}
];
};
};
};
};
}

View File

@ -22,8 +22,8 @@ in
default = null;
};
plotDirectories = lib.mkOption {
type = with lib.types; nullOr (listOf str);
default = null;
type = with lib.types; listOf str;
default = [ ];
};
openFirewall = lib.mkOption {
type = lib.types.bool;
@ -46,7 +46,7 @@ in
};
virtualisation.oci-containers.containers.chia = {
image = "ghcr.io/chia-network/chia:2.1.4";
image = "ghcr.io/chia-network/chia:2.4.3";
ports = [ "8444" ];
extraOptions = [
"--uidmap=0:${toString config.users.users.chia.uid}:1"
@ -62,6 +62,11 @@ in
};
};
systemd.tmpfiles.rules = [
"d ${cfg.path} 0700 chia chia - -"
"d ${cfg.path}/.chia 0700 chia chia - -"
];
networking.firewall = lib.mkIf cfg.openFirewall {
allowedTCPPorts = [ 8444 ];
};

View File

@ -1,59 +0,0 @@
{ pkgs, lib, config, agenix, ... }:
{
imports = [
../home/default.nix
./shell.nix
./ssh.nix
];
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users."jake" = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
gh
git
htop
nix
sapling
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
custom.hostinfo.enable = true;
}

View File

@ -1,43 +0,0 @@
{ pkgs, lib, config, ... }:
{
users.users."jake".openssh.authorizedKeys.keys = [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo"
];
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHosts = {
# Global Internet hosts
"ssh.gitea.hillion.co.uk".publicKey = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==";
"server.stranger.proxmox.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9d5u/VaeRTQUQfu5JzCRa+zij/DtrPNWOfr+jM4iDp";
# Tailscale hosts
"dancefloor.dancefloor.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY";
"gendry.jakehillion.terminals.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c";
"homeassistant.homeassistant.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux";
"microserver.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw";
"microserver.parents.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0cjjNQPnJwpu4wcYmvfjB1jlIfZwMxT+3nBusoYQFr";
"pbs.proxmox.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGY6ky2sQjg/bLRUWOUERmAOqboAjy+9PkE8sU+angx";
"router.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu";
"router.stranger.proxmox.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHq9tITN59FJfGoyOPNgP1QyJ0ohbVQS8OZtRO960Uxk";
"stranger.proxmox.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9d5u/VaeRTQUQfu5JzCRa+zij/DtrPNWOfr+jM4iDp";
"tywin.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGATsjWO0qZNFp2BhfgDuWi+e/ScMkFxp79N2OZoed1k";
"vm.strangervm.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb9mgyD/G3Rt6lvO4c0hoaVOlLE8e3+DUfAoB1RI5cy";
};
programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
}

View File

@ -3,26 +3,25 @@
{
imports = [
./backups/default.nix
./ca/default.nix
./chia.nix
./common/hostinfo.nix
./defaults.nix
./desktop/awesome/default.nix
./drone/default.nix
./dns.nix
./home/default.nix
./hostinfo.nix
./ids.nix
./impermanence.nix
./locations.nix
./prometheus/default.nix
./resilio.nix
./services/downloads.nix
./services/gitea.nix
./services/mastodon/default.nix
./services/matrix.nix
./services/unifi.nix
./services/version_tracker.nix
./services/zigbee2mqtt.nix
./sched_ext.nix
./services/default.nix
./shell/default.nix
./ssh/default.nix
./storj.nix
./tailscale.nix
./users.nix
./www/global.nix
./www/www-repo.nix
./www/default.nix
];
options.custom = {

modules/defaults.nix Normal file
View File

@ -0,0 +1,70 @@
{ pkgs, nixpkgs-unstable, lib, config, agenix, ... }:
{
options.custom.defaults = lib.mkEnableOption "defaults";
config = lib.mkIf config.custom.defaults {
hardware.enableAllFirmware = true;
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users.${config.custom.user} = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
uid = config.ids.uids.${config.custom.user};
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
gh
git
htop
nix
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
nix.registry.nixpkgs-unstable.to = {
type = "path";
path = nixpkgs-unstable;
};
# Delegation
custom.ca.consumer.enable = true;
custom.dns.enable = true;
custom.home.defaults = true;
custom.hostinfo.enable = true;
custom.prometheus.client.enable = true;
custom.shell.enable = true;
custom.ssh.enable = true;
};
}

modules/dns.nix Normal file
View File

@ -0,0 +1,124 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.dns;
in
{
options.custom.dns = {
enable = lib.mkEnableOption "dns";
authoritative = {
ipv4 = lib.mkOption {
description = "authoritative ipv4 mappings";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "authoritative ipv6 mappings";
readOnly = true;
};
};
tailscale =
{
ipv4 = lib.mkOption {
description = "tailscale ipv4 address";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "tailscale ipv6 address";
readOnly = true;
};
};
};
config = lib.mkIf cfg.enable {
custom.dns.authoritative = {
ipv4 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "100.113.188.46";
};
home = {
microserver = "100.105.131.47";
router = "100.105.71.48";
};
jakehillion-terminals = { gendry = "100.70.100.77"; };
lt = { be = "100.105.166.79"; };
pop = {
li = "100.106.87.35";
sodium = "100.87.188.4";
stinger = "100.117.89.126";
};
rig = {
merlin = "100.69.181.56";
};
st = {
phoenix = "100.92.37.106";
};
storage = {
theon = "100.104.142.22";
};
};
};
};
};
};
ipv6 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "fd7a:115c:a1e0::2a01:bc2f";
};
home = {
microserver = "fd7a:115c:a1e0:ab12:4843:cd96:6269:832f";
router = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
};
jakehillion-terminals = { gendry = "fd7a:115c:a1e0:ab12:4843:cd96:6246:644d"; };
lt = { be = "fd7a:115c:a1e0::9001:a64f"; };
pop = {
li = "fd7a:115c:a1e0::e701:5723";
sodium = "fd7a:115c:a1e0::3701:bc04";
stinger = "fd7a:115c:a1e0::8401:597e";
};
rig = {
merlin = "fd7a:115c:a1e0::8d01:b538";
};
st = {
phoenix = "fd7a:115c:a1e0::6901:256a";
};
storage = {
theon = "fd7a:115c:a1e0::4aa8:8e16";
};
};
};
};
};
};
};
custom.dns.tailscale =
let
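# Walk this host's reversed FQDN down the authoritative tree (e.g.
# "sodium.pop.ts.hillion.co.uk" -> [ "uk" "co" "hillion" "ts" "pop" "sodium" ]),
# returning null if the host has no entry.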
lookupFqdn = lib.attrsets.attrByPath (lib.reverseList (lib.splitString "." config.networking.fqdn)) null;
in
{
ipv4 = lookupFqdn cfg.authoritative.ipv4;
ipv6 = lookupFqdn cfg.authoritative.ipv6;
};
networking.hosts =
let
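# Flatten the nested address tree into address/name pairs, e.g.
# uk.co.hillion.ts.pop.sodium = "100.87.188.4" becomes
# "100.87.188.4" = [ "sodium.pop.ts.hillion.co.uk" ].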
mkHosts = hosts:
(lib.collect (x: (builtins.hasAttr "name" x && builtins.hasAttr "value" x))
(lib.mapAttrsRecursive
(path: value:
lib.nameValuePair value [ (lib.concatStringsSep "." (lib.reverseList path)) ])
hosts));
in
builtins.listToAttrs (mkHosts cfg.authoritative.ipv4 ++ mkHosts cfg.authoritative.ipv6);
};
}


@ -1,7 +0,0 @@
{ ... }:
{
imports = [
./server.nix
];
}


@ -1,43 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.drone.server;
in
{
options.custom.drone.server = {
enable = lib.mkEnableOption "drone server";
path = lib.mkOption {
type = lib.types.str;
default = "/var/lib/drone";
};
port = lib.mkOption {
type = lib.types.port;
default = 18733;
};
};
config = lib.mkIf cfg.enable {
age.secrets."drone/gitea_client_secret".file = ../../secrets/drone/gitea_client_secret.age;
age.secrets."drone/rpc_secret".file = ../../secrets/drone/rpc_secret.age;
virtualisation.oci-containers.containers."drone" = {
image = "drone/drone:2.21.0";
volumes = [ "${cfg.path}:/data" ];
ports = [ "${toString cfg.port}:80" ];
environment = {
DRONE_AGENTS_ENABLED = "true";
DRONE_GITEA_SERVER = "https://gitea.hillion.co.uk";
DRONE_GITEA_CLIENT_ID = "687ee331-ad9e-44fd-9e02-7f1c652754bb";
DRONE_SERVER_HOST = "drone.hillion.co.uk";
DRONE_SERVER_PROTO = "https";
DRONE_LOGS_DEBUG = "true";
DRONE_USER_CREATE = "username:JakeHillion,admin:true";
};
environmentFiles = [
config.age.secrets."drone/gitea_client_secret".path
config.age.secrets."drone/rpc_secret".path
];
};
};
}


@ -3,24 +3,47 @@
{
imports = [
./git.nix
./neovim.nix
./tmux/default.nix
];
config = {
home-manager = {
users.root.home = {
stateVersion = "22.11";
options.custom.home.defaults = lib.mkEnableOption "home";
## Set an empty ZSH config and defer to the global one
file.".zshrc".text = "";
config = lib.mkIf config.custom.home.defaults {
home-manager =
let
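# Hosts initialised after NixOS 24.05 track the system stateVersion;
# older hosts keep the original home-manager state of "22.11".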
stateVersion = if (builtins.compareVersions config.system.stateVersion "24.05") > 0 then config.system.stateVersion else "22.11";
in
{
users.root.home = {
inherit stateVersion;
## Set an empty ZSH config and defer to the global one
file.".zshrc".text = "";
};
users."${config.custom.user}" = {
home = {
inherit stateVersion;
};
services = {
ssh-agent.enable = true;
};
programs = {
zoxide = {
enable = true;
options = [ "--cmd cd" ];
};
zsh.enable = true;
};
};
};
users."${config.custom.user}".home = {
stateVersion = "22.11";
## Set an empty ZSH config and defer to the global one
file.".zshrc".text = "";
};
};
# Delegation
custom.home.git.enable = true;
custom.home.neovim.enable = true;
custom.home.tmux.enable = true;
};
}


@ -1,21 +1,43 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.git;
in
{
home-manager.users.jake.programs.git = {
enable = true;
extraConfig = {
user = {
email = "jake@hillion.co.uk";
name = "Jake Hillion";
options.custom.home.git = {
enable = lib.mkEnableOption "git";
};
config = lib.mkIf cfg.enable {
home-manager.users.jake.programs = {
sapling = lib.mkIf (config.custom.user == "jake") {
enable = true;
userName = "Jake Hillion";
userEmail = "jake@hillion.co.uk";
extraConfig = {
ui = {
"merge:interactive" = ":merge3";
};
};
};
pull = {
rebase = true;
};
merge = {
conflictstyle = "diff3";
};
init = {
defaultBranch = "main";
git = lib.mkIf (config.custom.user == "jake") {
enable = true;
userName = "Jake Hillion";
userEmail = "jake@hillion.co.uk";
extraConfig = {
pull = {
rebase = true;
};
merge = {
conflictstyle = "diff3";
};
init = {
defaultBranch = "main";
};
};
};
};
};

modules/home/neovim.nix (new file, 82 lines)

@ -0,0 +1,82 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.neovim;
in
{
options.custom.home.neovim = {
enable = lib.mkEnableOption "neovim";
};
config = lib.mkIf config.custom.home.neovim.enable {
home-manager.users."${config.custom.user}".programs.neovim = {
enable = true;
viAlias = true;
vimAlias = true;
plugins = with pkgs.vimPlugins; [
a-vim
dracula-nvim
telescope-nvim
];
extraLuaConfig = ''
-- Logical options
vim.opt.splitright = true
vim.opt.splitbelow = true
vim.opt.ignorecase = true
vim.opt.smartcase = true
vim.opt.expandtab = true
vim.opt.tabstop = 2
vim.opt.shiftwidth = 2
-- Appearance
vim.cmd[[colorscheme dracula-soft]]
vim.opt.number = true
vim.opt.relativenumber = true
-- Telescope
require('telescope').setup({
pickers = {
find_files = {
find_command = {
"${pkgs.fd}/bin/fd",
"--type=f",
"--strip-cwd-prefix",
"--no-require-git",
"--hidden",
"--exclude=.sl",
},
},
},
defaults = {
vimgrep_arguments = {
"${pkgs.ripgrep}/bin/rg",
"--color=never",
"--no-heading",
"--with-filename",
"--line-number",
"--column",
"--smart-case",
"--no-require-git",
"--hidden",
"--glob=!.sl",
},
},
})
-- Key bindings
vim.g.mapleader = ","
--- Key bindings: Telescope
local telescope_builtin = require('telescope.builtin')
vim.keymap.set('n', '<leader>ff', telescope_builtin.find_files, {})
vim.keymap.set('n', '<leader>fg', telescope_builtin.live_grep, {})
vim.keymap.set('n', '<leader>fb', telescope_builtin.buffers, {})
vim.keymap.set('n', '<leader>fh', telescope_builtin.help_tags, {})
'';
};
};
}


@ -1,10 +1,25 @@
setw -g mouse on
# Large history
set -g history-limit 500000
# Bindings
bind C-Y set-window-option synchronize-panes
bind -n C-k clear-history
# Status pane
set -g status-right-length 100
set -g status-right "#(uname -r) • #(hostname -f | sed 's/\.ts\.hillion\.co\.uk//g') • %d-%b-%y %H:%M"
# New panes in the same directory
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
bind c new-window -c "#{pane_current_path}"
# Start indices at 1 to match keyboard
set -g base-index 1
setw -g pane-base-index 1
# Open a new session on attach if one isn't already open
# Must come after base-index settings
new-session


@ -1,8 +1,17 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.tmux;
in
{
home-manager.users.jake.programs.tmux = {
enable = true;
extraConfig = lib.readFile ./.tmux.conf;
options.custom.home.tmux = {
enable = lib.mkEnableOption "tmux";
};
config = lib.mkIf cfg.enable {
home-manager.users.jake.programs.tmux = {
enable = true;
extraConfig = lib.readFile ./.tmux.conf;
};
};
}


@ -6,6 +6,10 @@
## Defined System Users (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
unifi = 183;
chia = 185;
gitea = 186;
node-exporter = 188;
step-ca = 198;
isponsorblocktv = 199;
## Consistent People
jake = 1000;
@ -15,6 +19,10 @@
## Defined System Groups (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
unifi = 183;
chia = 185;
gitea = 186;
node-exporter = 188;
step-ca = 198;
isponsorblocktv = 199;
## Consistent Groups
mediaaccess = 1200;


@ -2,7 +2,6 @@
let
cfg = config.custom.impermanence;
listIf = (enable: x: if enable then x else [ ]);
in
{
options.custom.impermanence = {
@ -12,6 +11,13 @@ in
type = lib.types.str;
default = "/data";
};
cache = {
enable = lib.mkEnableOption "impermanence.cache";
path = lib.mkOption {
type = lib.types.str;
default = "/cache";
};
};
users = lib.mkOption {
type = with lib.types; listOf str;
@ -40,41 +46,85 @@ in
gitea.stateDir = "${cfg.base}/system/var/lib/gitea";
};
environment.persistence."${cfg.base}/system" = {
hideMounts = true;
directories = [
"/etc/nixos"
] ++ (listIf config.custom.tailscale.enable [ "/var/lib/tailscale" ]) ++
(listIf config.services.zigbee2mqtt.enable [ config.services.zigbee2mqtt.dataDir ]) ++
(listIf config.services.postgresql.enable [ config.services.postgresql.dataDir ]) ++
(listIf config.hardware.bluetooth.enable [ "/var/lib/bluetooth" ]) ++
(listIf config.custom.services.unifi.enable [ "/var/lib/unifi" ]) ++
(listIf (config.virtualisation.oci-containers.containers != { }) [ "/var/lib/containers" ]);
custom.chia = lib.mkIf config.custom.chia.enable {
path = lib.mkOverride 999 "/data/chia";
};
services.resilio = lib.mkIf config.services.resilio.enable {
directoryRoot = lib.mkOverride 999 "${cfg.base}/sync";
};
services.plex = lib.mkIf config.services.plex.enable {
dataDir = lib.mkOverride 999 "/data/plex";
};
services.home-assistant = lib.mkIf config.services.home-assistant.enable {
configDir = lib.mkOverride 999 "/data/home-assistant";
};
environment.persistence = lib.mkMerge [
{
"${cfg.base}/system" = {
hideMounts = true;
directories = [
"/etc/nixos"
] ++ (lib.lists.optional config.services.tailscale.enable "/var/lib/tailscale") ++
(lib.lists.optional config.services.zigbee2mqtt.enable config.services.zigbee2mqtt.dataDir) ++
(lib.lists.optional config.services.postgresql.enable config.services.postgresql.dataDir) ++
(lib.lists.optional config.hardware.bluetooth.enable "/var/lib/bluetooth") ++
(lib.lists.optional config.custom.services.unifi.enable "/var/lib/unifi") ++
(lib.lists.optional (config.virtualisation.oci-containers.containers != { }) "/var/lib/containers") ++
(lib.lists.optional config.services.tang.enable "/var/lib/private/tang") ++
(lib.lists.optional config.services.caddy.enable "/var/lib/caddy") ++
(lib.lists.optional config.services.prometheus.enable "/var/lib/${config.services.prometheus.stateDir}") ++
(lib.lists.optional config.custom.services.isponsorblocktv.enable "${config.custom.services.isponsorblocktv.dataDir}") ++
(lib.lists.optional config.services.step-ca.enable "/var/lib/step-ca/db");
};
}
(lib.mkIf cfg.cache.enable {
"${cfg.cache.path}/system" = {
hideMounts = true;
directories = (lib.lists.optional config.services.postgresqlBackup.enable config.services.postgresqlBackup.location);
};
})
];
home-manager.users =
let
mkUser = (x: {
name = x;
value = {
home.persistence."/data/users/${x}" = {
allowOther = false;
mkUser = (x:
let
homeCfg = config.home-manager.users."${x}";
in
{
name = x;
value = {
home = {
persistence."/data/users/${x}" = {
allowOther = false;
files = [
".zsh_history"
] ++ cfg.userExtraFiles.${x} or [ ];
files = cfg.userExtraFiles.${x} or [ ];
directories = cfg.userExtraDirs.${x} or [ ];
};
directories = cfg.userExtraDirs.${x} or [ ];
sessionVariables = lib.attrsets.optionalAttrs homeCfg.programs.zoxide.enable { _ZO_DATA_DIR = "/data/users/${x}/.local/share/zoxide"; };
};
programs = {
zsh.history.path = lib.mkOverride 999 "/data/users/${x}/.zsh_history";
};
};
};
});
});
in
builtins.listToAttrs (builtins.map mkUser cfg.users);
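# Create each user's persistent directory and symlink ~/local into it.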
systemd.tmpfiles.rules = builtins.map
systemd.tmpfiles.rules = lib.lists.flatten (builtins.map
(user:
let details = config.users.users.${user}; in "L ${details.home}/local - ${user} ${details.group} - /data/users/${user}")
cfg.users;
let details = config.users.users.${user}; in [
"d /data/users/${user} 0700 ${user} ${details.group} - -"
"L ${details.home}/local - ${user} ${details.group} - /data/users/${user}"
])
cfg.users);
};
}


@ -11,28 +11,43 @@ in
};
locations = lib.mkOption {
default = {
services = {
downloads = "tywin.storage.ts.hillion.co.uk";
gitea = "jorah.cx.ts.hillion.co.uk";
mastodon = "vm.strangervm.ts.hillion.co.uk";
matrix = "jorah.cx.ts.hillion.co.uk";
unifi = "jorah.cx.ts.hillion.co.uk";
};
drone = {
server = "vm.strangervm.ts.hillion.co.uk";
};
};
readOnly = true;
};
};
config = lib.mkIf cfg.autoServe {
custom.services.downloads.enable = cfg.locations.services.downloads == config.networking.fqdn;
custom.services.gitea.enable = cfg.locations.services.gitea == config.networking.fqdn;
custom.services.mastodon.enable = cfg.locations.services.mastodon == config.networking.fqdn;
custom.services.matrix.enable = cfg.locations.services.matrix == config.networking.fqdn;
custom.services.unifi.enable = cfg.locations.services.unifi == config.networking.fqdn;
config = lib.mkMerge [
{
custom.locations.locations = {
services = {
authoritative_dns = [ "boron.cx.ts.hillion.co.uk" ];
downloads = "phoenix.st.ts.hillion.co.uk";
gitea = "boron.cx.ts.hillion.co.uk";
homeassistant = "stinger.pop.ts.hillion.co.uk";
mastodon = "";
matrix = "boron.cx.ts.hillion.co.uk";
prometheus = "boron.cx.ts.hillion.co.uk";
restic = "phoenix.st.ts.hillion.co.uk";
tang = [
"li.pop.ts.hillion.co.uk"
"microserver.home.ts.hillion.co.uk"
"sodium.pop.ts.hillion.co.uk"
];
unifi = "boron.cx.ts.hillion.co.uk";
version_tracker = [ "boron.cx.ts.hillion.co.uk" ];
};
};
}
custom.drone.server.enable = cfg.locations.drone.server == config.networking.fqdn;
};
(lib.mkIf cfg.autoServe
{
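# Enable each service on this host iff its configured location (a single
# fqdn or a list of fqdns) matches config.networking.fqdn.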
custom.services = lib.mapAttrsRecursive
(path: value: {
enable =
if builtins.isList value
then builtins.elem config.networking.fqdn value
else config.networking.fqdn == value;
})
cfg.locations.services;
})
];
}


@ -0,0 +1,24 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.prometheus.client;
in
{
options.custom.prometheus.client = {
enable = lib.mkEnableOption "prometheus-client";
};
config = lib.mkIf cfg.enable {
users.users.node-exporter.uid = config.ids.uids.node-exporter;
users.groups.node-exporter.gid = config.ids.gids.node-exporter;
services.prometheus.exporters.node = {
enable = true;
port = 9000;
enabledCollectors = [
"systemd"
];
};
};
}


@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./client.nix
./service.nix
];
}


@ -0,0 +1,67 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.services.prometheus;
in
{
options.custom.services.prometheus = {
enable = lib.mkEnableOption "prometheus";
};
config = lib.mkIf cfg.enable {
services.prometheus = {
enable = true;
globalConfig = {
scrape_interval = "15s";
};
retentionTime = "1y";
scrapeConfigs = [{
job_name = "node";
static_configs = [{
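# Scrape every host that has a directory under hosts/ in this repository.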
targets = builtins.map (x: "${x}:9000") (builtins.attrNames (builtins.readDir ../../hosts));
}];
}];
rules = [
''
groups:
- name: service alerting
rules:
- alert: ResilioSyncDown
expr: node_systemd_unit_state{ name = 'resilio.service', state != 'active' } > 0
for: 10m
annotations:
summary: "Resilio Sync systemd service is down"
description: "The Resilio Sync systemd service is not active on instance {{ $labels.instance }}."
''
];
};
services.caddy = {
enable = true;
virtualHosts."prometheus.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
reverse_proxy http://localhost:9090
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
'';
};
};
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### starts too late. Upstream NixOS Caddy does restart on failure, but the
### restart is suppressed for exit code 1. Set the exit code to 0
### (non-failure) to override this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
};
}


@ -19,50 +19,93 @@ in
type = with lib.types; uniq (listOf attrs);
default = [ ];
};
};
config = lib.mkIf cfg.enable {
users.users =
let
mkUser =
(user: {
name = user;
value = {
extraGroups = [ "rslsync" ];
};
});
in
builtins.listToAttrs (builtins.map mkUser cfg.extraUsers);
age.secrets =
let
mkSecret = (secret: {
name = secret.name;
value = {
file = secret.file;
owner = "rslsync";
group = "rslsync";
};
});
in
builtins.listToAttrs (builtins.map (folder: mkSecret folder.secret) cfg.folders);
services.resilio = {
enable = true;
sharedFolders =
let
mkFolder = name: secret: {
directory = "${config.services.resilio.directoryRoot}/${name}";
secretFile = "${config.age.secrets."${secret.name}".path}";
knownHosts = [ ];
searchLAN = true;
useDHT = true;
useRelayServer = true;
useSyncTrash = false;
useTracker = true;
};
in
builtins.map (folder: mkFolder folder.name folder.secret) cfg.folders;
backups = {
enable = lib.mkEnableOption "resilio.backups";
};
};
config = lib.mkIf cfg.enable (lib.mkMerge [
{
users.users =
let
mkUser =
(user: {
name = user;
value = {
extraGroups = [ "rslsync" ];
};
});
in
builtins.listToAttrs (builtins.map mkUser cfg.extraUsers);
age.secrets =
let
mkSecret = (secret: {
name = secret.name;
value = {
file = secret.file;
owner = "rslsync";
group = "rslsync";
};
});
in
builtins.listToAttrs (builtins.map (folder: mkSecret folder.secret) cfg.folders);
services.resilio = {
enable = true;
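# Default the device name to the first two labels of the FQDN, e.g.
# "phoenix.st.ts.hillion.co.uk" becomes "phoenix.st".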
deviceName = lib.mkOverride 999 (lib.strings.concatStringsSep "." (lib.lists.take 2 (lib.strings.splitString "." config.networking.fqdnOrHostName)));
storagePath = lib.mkOverride 999 "${config.services.resilio.directoryRoot}/.sync";
sharedFolders =
let
mkFolder = name: secret: {
directory = "${config.services.resilio.directoryRoot}/${name}";
secretFile = "${config.age.secrets."${secret.name}".path}";
knownHosts = [ ];
searchLAN = true;
useDHT = true;
useRelayServer = true;
useSyncTrash = false;
useTracker = true;
};
in
builtins.map (folder: mkFolder folder.name folder.secret) cfg.folders;
};
systemd.services.resilio.unitConfig.RequiresMountsFor = builtins.map (folder: "${config.services.resilio.directoryRoot}/${folder.name}") cfg.folders;
}
(lib.mkIf cfg.backups.enable {
age.secrets."resilio/restic/128G.key" = {
file = ../secrets/restic/128G.age;
owner = "rslsync";
group = "rslsync";
};
services.restic.backups."resilio" = {
repository = "rest:https://restic.ts.hillion.co.uk/128G";
user = "rslsync";
passwordFile = config.age.secrets."resilio/restic/128G.key".path;
timerConfig = {
OnBootSec = "10m";
OnUnitInactiveSec = "15m";
RandomizedDelaySec = "5m";
};
paths = [ config.services.resilio.directoryRoot ];
exclude = [
"${config.services.resilio.directoryRoot}/.sync"
"${config.services.resilio.directoryRoot}/*/.sync"
"${config.services.resilio.directoryRoot}/resources/media/films"
"${config.services.resilio.directoryRoot}/resources/media/iso"
"${config.services.resilio.directoryRoot}/resources/media/tv"
"${config.services.resilio.directoryRoot}/dad/media"
];
};
})
]);
}

modules/sched_ext.nix (new file, 22 lines)

@ -0,0 +1,22 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.sched_ext;
in
{
options.custom.sched_ext = {
enable = lib.mkEnableOption "sched_ext";
};
config = lib.mkIf cfg.enable {
assertions = [{
assertion = config.boot.kernelPackages.kernelAtLeast "6.12";
message = "sched_ext requires a kernel >=6.12";
}];
# Prefer the default kernel if it is at least 6.12, then the latest stable
# kernel, falling back to the unstable testing kernel.
boot.kernelPackages =
if pkgs.linuxPackages.kernelAtLeast "6.12" then pkgs.linuxPackages
else if pkgs.linuxPackages_latest.kernelAtLeast "6.12" then pkgs.linuxPackages_latest
else pkgs.unstable.linuxPackages_testing;
environment.systemPackages = with pkgs; [ unstable.scx.layered unstable.scx.lavd ];
};
}


@ -0,0 +1,56 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.services.authoritative_dns;
in
{
options.custom.services.authoritative_dns = {
enable = lib.mkEnableOption "authoritative_dns";
};
config = lib.mkIf cfg.enable {
services.nsd = {
enable = true;
zones = {
"ts.hillion.co.uk" = {
data =
let
makeRecords = type: s: (lib.concatStringsSep "\n" (lib.collect builtins.isString (lib.mapAttrsRecursive (path: value: "${lib.concatStringsSep "." (lib.reverseList path)} 86400 ${type} ${value}") s)));
in
''
$ORIGIN ts.hillion.co.uk.
$TTL 86400
ts.hillion.co.uk. IN SOA ns1.hillion.co.uk. hostmaster.hillion.co.uk. (
1 ;Serial
7200 ;Refresh
3600 ;Retry
1209600 ;Expire
3600 ;Negative response caching TTL
)
86400 NS ns1.hillion.co.uk.
ca 21600 CNAME sodium.pop.ts.hillion.co.uk.
restic 21600 CNAME ${config.custom.locations.locations.services.restic}.
prometheus 21600 CNAME ${config.custom.locations.locations.services.prometheus}.
deluge.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
prowlarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
radarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
sonarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
graphs.router.home 21600 CNAME router.home.ts.hillion.co.uk.
zigbee2mqtt.home 21600 CNAME router.home.ts.hillion.co.uk.
charlie.kvm 21600 CNAME router.home.ts.hillion.co.uk.
hammer.kvm 21600 CNAME router.home.ts.hillion.co.uk.
'' + (makeRecords "A" config.custom.dns.authoritative.ipv4.uk.co.hillion.ts) + "\n\n" + (makeRecords "AAAA" config.custom.dns.authoritative.ipv6.uk.co.hillion.ts);
};
};
};
};
}
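
For illustration, the makeRecords helper above expands each leaf of the authoritative tree into one zone line per address. With the addresses defined in modules/dns.nix it would emit records like (a sketch, not the full zone):

boron.cx 86400 A 100.113.188.46
stinger.pop 86400 A 100.117.89.126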


@ -0,0 +1,18 @@
{ config, lib, ... }:
{
imports = [
./authoritative_dns.nix
./downloads.nix
./gitea/default.nix
./homeassistant.nix
./isponsorblocktv.nix
./mastodon/default.nix
./matrix.nix
./restic.nix
./tang.nix
./unifi.nix
./version_tracker.nix
./zigbee2mqtt.nix
];
}


@ -24,27 +24,33 @@ in
};
config = lib.mkIf cfg.enable {
services.caddy = {
enable = true;
virtualHosts = builtins.listToAttrs (builtins.map
(x: {
name = "http://${x}.downloads.ts.hillion.co.uk";
value = {
listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
extraConfig = "reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock";
};
}) [ "prowlarr" "sonarr" "radarr" "deluge" ]);
};
## Wireguard
age.secrets."wireguard/downloads".file = ../../secrets/wireguard/downloads.age;
age.secrets."deluge/auth" = {
file = ../../secrets/deluge/auth.age;
owner = "deluge";
};
services.caddy = {
enable = true;
virtualHosts = builtins.listToAttrs (builtins.map
(x: {
name = "${x}.downloads.ts.hillion.co.uk";
value = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
'';
};
}) [ "prowlarr" "sonarr" "radarr" "deluge" ]);
};
## Wireguard
networking.wireguard.interfaces."downloads" = {
privateKeyFile = config.age.secrets."wireguard/downloads".path;
ips = [ "10.2.0.2/32" ];
@ -132,7 +138,10 @@ in
script = with pkgs; "${iproute2}/bin/ip link set up lo";
};
networking.hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
networking = {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
};
services = {
prowlarr.enable = true;
@ -149,6 +158,7 @@ in
deluge = {
enable = true;
web.enable = true;
group = "mediaaccess";
dataDir = "/var/lib/deluge";
authFile = "/run/agenix/deluge/auth";
@ -157,11 +167,18 @@ in
config = {
download_location = "/media/downloads";
max_connections_global = 1024;
max_upload_speed = 12500;
max_download_speed = 25000;
max_active_seeding = 192;
max_active_downloading = 64;
max_active_limit = 256;
dont_count_slow_torrents = true;
stop_seed_at_ratio = true;
stop_seed_ratio = 2;
share_ratio_limit = 2;
enabled_plugins = [ "Label" ];
};
};


@ -0,0 +1,105 @@
{ config, lib, pkgs, ... }:
let
cfg = config.custom.services.gitea.actions;
in
{
options.custom.services.gitea.actions = {
enable = lib.mkEnableOption "gitea-actions";
labels = lib.mkOption {
type = with lib.types; listOf str;
default = [
"ubuntu-latest:docker://node:16-bullseye"
"ubuntu-20.04:docker://node:16-bullseye"
];
};
tokenSecret = lib.mkOption {
type = lib.types.path;
};
};
config = lib.mkIf cfg.enable {
age.secrets."gitea/actions/token".file = cfg.tokenSecret;
# Run gitea-actions in a container and firewall it such that it can only
# access the Internet (not private networks).
containers."gitea-actions" = {
autoStart = true;
ephemeral = true;
privateNetwork = true; # all traffic goes through ve-gitea-actions on the host
hostAddress = "10.108.27.1";
localAddress = "10.108.27.2";
extraFlags = [
# Extra system calls required to nest Docker, taken from https://wiki.archlinux.org/title/systemd-nspawn
"--system-call-filter=add_key"
"--system-call-filter=keyctl"
"--system-call-filter=bpf"
];
bindMounts = let tokenPath = config.age.secrets."gitea/actions/token".path; in {
"${tokenPath}".hostPath = tokenPath;
};
timeoutStartSec = "5min";
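# Apply the outer function to the host's config, binding it as hostConfig;
# inside the container module, config refers to the container itself.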
config = (hostConfig: ({ config, pkgs, ... }: {
config = let cfg = hostConfig.custom.services.gitea.actions; in {
system.stateVersion = "23.11";
virtualisation.docker.enable = true;
services.gitea-actions-runner.instances.container = {
enable = true;
url = "https://gitea.hillion.co.uk";
tokenFile = hostConfig.age.secrets."gitea/actions/token".path;
name = "${hostConfig.networking.hostName}";
labels = cfg.labels;
settings = {
runner = {
capacity = 3;
};
cache = {
enabled = true;
host = "10.108.27.2";
port = 41919;
};
};
};
# Drop any packets to private networks
networking = {
firewall.enable = lib.mkForce false;
nftables = {
enable = true;
ruleset = ''
table inet filter {
chain output {
type filter hook output priority 100; policy accept;
ct state { established, related } counter accept
ip daddr 10.0.0.0/8 drop
ip daddr 100.64.0.0/10 drop
ip daddr 172.16.0.0/12 drop
ip daddr 192.168.0.0/16 drop
}
}
'';
};
};
};
})) config;
};
networking.nat = {
enable = true;
externalInterface = "eth0";
internalIPs = [ "10.108.27.2" ];
};
};
}


@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./actions.nix
./gitea.nix
];
}


@ -1,4 +1,4 @@
{ config, pkgs, lib, nixpkgs-unstable, ... }:
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.gitea;
@ -20,39 +20,42 @@ in
config = lib.mkIf cfg.enable {
age.secrets = {
"gitea/mailer_password" = {
file = ../../secrets/gitea/mailer_password.age;
file = ../../../secrets/gitea/mailer_password.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
};
"gitea/oauth_jwt_secret" = {
file = ../../secrets/gitea/oauth_jwt_secret.age;
file = ../../../secrets/gitea/oauth_jwt_secret.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/oauth2_jwt_secret";
};
"gitea/lfs_jwt_secret" = {
file = ../../secrets/gitea/lfs_jwt_secret.age;
file = ../../../secrets/gitea/lfs_jwt_secret.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/lfs_jwt_secret";
};
"gitea/security_secret_key" = {
file = ../../secrets/gitea/security_secret_key.age;
file = ../../../secrets/gitea/security_secret_key.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/secret_key";
};
"gitea/security_internal_token" = {
file = ../../secrets/gitea/security_internal_token.age;
file = ../../../secrets/gitea/security_internal_token.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/internal_token";
};
};
users.users.gitea.uid = config.ids.uids.gitea;
users.groups.gitea.gid = config.ids.gids.gitea;
services.gitea = {
enable = true;
package = nixpkgs-unstable.legacyPackages.x86_64-linux.gitea;
package = pkgs.unstable.gitea;
mailerPasswordFile = config.age.secrets."gitea/mailer_password".path;
appName = "Hillion Gitea";
@ -79,7 +82,7 @@ in
mailer = {
ENABLED = true;
HOST = "smtp.mailgun.org:587";
SMTP_ADDR = "smtp.mailgun.org:587";
FROM = "gitea@mg.hillion.co.uk";
USER = "gitea@mg.hillion.co.uk";
};
@ -89,7 +92,7 @@ in
service = {
REGISTER_EMAIL_CONFIRM = true;
ENABLE_NOTIFY_MAIL = true;
EMAIL_DOMAIN_WHITELIST = "hillion.co.uk,cam.ac.uk,cl.cam.ac.uk";
EMAIL_DOMAIN_ALLOWLIST = "hillion.co.uk,cam.ac.uk,cl.cam.ac.uk";
};
session = {
PROVIDER = "file";
@ -97,18 +100,14 @@ in
};
};
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = 1;
"net.ipv6.conf.all.forwarding" = 1;
};
networking.firewall.extraCommands = ''
# proxy all traffic on public interface to the gitea SSH server
iptables -A PREROUTING -t nat -i enp5s0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A PREROUTING -t nat -i enp5s0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
# proxy locally originating outgoing packets
iptables -A OUTPUT -d 95.217.229.104 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A OUTPUT -d 2a01:4f9:4b:3953::2 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
iptables -A OUTPUT -d 138.201.252.214 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A OUTPUT -d 2a01:4f8:173:23d2::2 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
'';
};
}


@ -0,0 +1,182 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.homeassistant;
in
{
options.custom.services.homeassistant = {
enable = lib.mkEnableOption "homeassistant";
backup = lib.mkOption {
default = true;
type = lib.types.bool;
};
};
config = lib.mkIf cfg.enable {
custom = {
backups.homeassistant.enable = cfg.backup;
};
age.secrets."homeassistant/secrets.yaml" = {
file = ../../secrets/homeassistant/secrets.yaml.age;
path = "${config.services.home-assistant.configDir}/secrets.yaml";
owner = "hass";
group = "hass";
};
services = {
postgresql = {
enable = true;
initialScript = pkgs.writeText "homeassistant-init.sql" ''
CREATE ROLE "hass" WITH LOGIN;
CREATE DATABASE "homeassistant" WITH OWNER "hass" ENCODING "utf8";
'';
};
home-assistant = {
enable = true;
extraPackages = python3Packages: with python3Packages; [
psycopg2 # postgresql support
];
extraComponents = [
"bluetooth"
"default_config"
"esphome"
"fully_kiosk"
"google_assistant"
"homekit"
"met"
"mobile_app"
"mqtt"
"otp"
"smartthings"
"sonos"
"sun"
"switchbot"
"waze_travel_time"
];
customComponents = with pkgs.home-assistant-custom-components; [
adaptive_lighting
];
config = {
default_config = { };
homeassistant = {
auth_providers = [
{ type = "homeassistant"; }
{
type = "trusted_networks";
trusted_networks = [ "10.239.19.4/32" ];
trusted_users = {
"10.239.19.4" = "fb4979873ecb480d9e3bb336250fa344";
};
allow_bypass_login = true;
}
];
};
recorder = {
db_url = "postgresql://@/homeassistant";
};
http = {
use_x_forwarded_for = true;
trusted_proxies = with config.custom.dns.authoritative; [
ipv4.uk.co.hillion.ts.cx.boron
ipv6.uk.co.hillion.ts.cx.boron
ipv4.uk.co.hillion.ts.pop.sodium
ipv6.uk.co.hillion.ts.pop.sodium
];
};
google_assistant = {
project_id = "homeassistant-8de41";
service_account = {
client_email = "!secret google_assistant_service_account_client_email";
private_key = "!secret google_assistant_service_account_private_key";
};
report_state = true;
expose_by_default = true;
exposed_domains = [ "light" ];
entity_config = {
"input_boolean.sleep_mode" = { };
};
};
homekit = [{
filter = {
include_domains = [ "light" ];
};
}];
bluetooth = { };
adaptive_lighting = {
lights = [
"light.bedroom_lamp"
"light.bedroom_light"
"light.cubby_light"
"light.desk_lamp"
"light.hallway_light"
"light.living_room_lamp"
"light.living_room_light"
"light.wardrobe_light"
];
min_sunset_time = "21:00";
};
light = [
{
platform = "template";
lights = {
bathroom_light = {
unique_id = "87a4cbb5-e5a7-44fd-9f28-fec2d6a62538";
value_template = "{{ false if state_attr('script.bathroom_light_switch_if_on', 'last_triggered') > states.sensor.bathroom_motion_sensor_illuminance_lux.last_reported else states('sensor.bathroom_motion_sensor_illuminance_lux') | int > 500 }}";
turn_on = { service = "script.noop"; };
turn_off = { service = "script.bathroom_light_switch_if_on"; };
};
};
}
];
sensor = [
{
# Time/Date (for automations)
platform = "time_date";
display_options = [
"date"
"date_time_iso"
];
}
{
# Living Room Temperature
platform = "statistics";
name = "Living Room temperature (rolling average)";
entity_id = "sensor.living_room_environment_sensor_temperature";
state_characteristic = "average_linear";
unique_id = "e86198a8-88f4-4822-95cb-3ec7b2662395";
max_age = {
minutes = 5;
};
}
];
input_boolean = {
sleep_mode = {
name = "Set house to sleep mode";
icon = "mdi:sleep";
};
};
# UI managed expansions
automation = "!include automations.yaml";
script = "!include scripts.yaml";
scene = "!include scenes.yaml";
};
};
};
};
}


@ -0,0 +1,62 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.isponsorblocktv;
ver = "v2.2.1";
ctl = pkgs.writeScriptBin "isponsorblocktv-config" ''
#! ${pkgs.runtimeShell}
set -e
sudo systemctl stop podman-isponsorblocktv
sudo ${pkgs.podman}/bin/podman run \
--rm -it \
--uidmap=0:${toString config.users.users.isponsorblocktv.uid}:1 \
--gidmap=0:${toString config.users.groups.isponsorblocktv.gid}:1 \
-v ${cfg.dataDir}:/app/data \
ghcr.io/dmunozv04/isponsorblocktv:${ver} \
--setup-cli
sudo systemctl start podman-isponsorblocktv
'';
in
{
options.custom.services.isponsorblocktv = {
enable = lib.mkEnableOption "isponsorblocktv";
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/isponsorblocktv";
};
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ ctl ];
users.groups.isponsorblocktv = {
gid = config.ids.gids.isponsorblocktv;
};
users.users.isponsorblocktv = {
home = cfg.dataDir;
createHome = true;
isSystemUser = true;
group = "isponsorblocktv";
uid = config.ids.uids.isponsorblocktv;
};
virtualisation.oci-containers.containers.isponsorblocktv = {
image = "ghcr.io/dmunozv04/isponsorblocktv:${ver}";
extraOptions = [
"--uidmap=0:${toString config.users.users.isponsorblocktv.uid}:1"
"--gidmap=0:${toString config.users.groups.isponsorblocktv.gid}:1"
];
volumes = [ "${cfg.dataDir}:/app/data" ];
};
systemd.tmpfiles.rules = [
"d ${cfg.dataDir} 0700 isponsorblocktv isponsorblocktv - -"
];
};
}


@ -32,6 +32,8 @@ in
};
};
users.users.caddy.extraGroups = [ "mastodon" ];
services = {
mastodon = {
enable = true;


@ -41,6 +41,10 @@ in
owner = "matrix-synapse";
group = "matrix-synapse";
};
"matrix/matrix.hillion.co.uk/syncv3_secret" = {
file = ../../secrets/matrix/matrix.hillion.co.uk/syncv3_secret.age;
};
};
services = {
@ -76,8 +80,8 @@ in
x_forwarded = true;
bind_addresses = [
"::1"
config.custom.tailscale.ipv4Addr
config.custom.tailscale.ipv6Addr
config.custom.dns.tailscale.ipv4
config.custom.dns.tailscale.ipv6
];
resources = [
{
@ -114,6 +118,15 @@ in
};
};
matrix-sliding-sync = {
enable = true;
environmentFile = config.age.secrets."matrix/matrix.hillion.co.uk/syncv3_secret".path;
settings = {
SYNCV3_SERVER = "https://matrix.hillion.co.uk";
SYNCV3_BINDADDR = "[::]:8009";
};
};
heisenbridge = lib.mkIf cfg.heisenbridge {
enable = true;
owner = "@jake:hillion.co.uk";

modules/services/restic.nix (new file, 306 lines)

@ -0,0 +1,306 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.restic;
in
{
options.custom.services.restic = {
enable = lib.mkEnableOption "restic http server";
path = lib.mkOption {
type = lib.types.path;
default = "/var/lib/restic";
};
repos = lib.mkOption {
readOnly = true;
type = with lib.types; attrsOf (submodule {
options = {
path = lib.mkOption {
default = null;
type = nullOr str;
};
passwordFile = lib.mkOption {
default = null;
type = nullOr str;
};
environmentFile = lib.mkOption {
default = null;
type = nullOr str;
};
forgetConfig = lib.mkOption {
default = null;
type = nullOr (submodule {
options = {
timerConfig = lib.mkOption {
type = attrs;
};
opts = lib.mkOption {
type = listOf str;
};
};
});
};
clones = lib.mkOption {
default = [ ];
type = listOf (submodule {
options = {
timerConfig = lib.mkOption {
type = attrs;
};
repo = lib.mkOption {
type = str;
};
};
});
};
};
});
default = {
"128G" = {
path = "${cfg.path}/128G";
passwordFile = config.age.secrets."restic/128G.key".path;
forgetConfig = {
timerConfig = {
OnCalendar = "02:30";
RandomizedDelaySec = "1h";
};
opts = [
"--keep-last 48"
"--keep-within-hourly 7d"
"--keep-within-daily 1m"
"--keep-within-weekly 6m"
"--keep-within-monthly 24m"
];
};
clones = [
{
repo = "128G-wasabi";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
{
repo = "128G-backblaze";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
];
};
"1.6T" = {
path = "${cfg.path}/1.6T";
passwordFile = config.age.secrets."restic/1.6T.key".path;
forgetConfig = {
timerConfig = {
OnCalendar = "Wed, 02:30";
RandomizedDelaySec = "4h";
};
opts = [
"--keep-within-daily 14d"
"--keep-within-weekly 2m"
"--keep-within-monthly 18m"
];
};
clones = [
{
repo = "1.6T-wasabi";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
{
repo = "1.6T-backblaze";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
];
};
"128G-wasabi" = {
environmentFile = config.age.secrets."restic/128G-wasabi.env".path;
};
"1.6T-wasabi" = {
environmentFile = config.age.secrets."restic/1.6T-wasabi.env".path;
};
"128G-backblaze" = {
environmentFile = config.age.secrets."restic/128G-backblaze.env".path;
};
"1.6T-backblaze" = {
environmentFile = config.age.secrets."restic/1.6T-backblaze.env".path;
};
};
};
};
config = lib.mkIf cfg.enable {
age.secrets = {
"restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "restic";
group = "restic";
};
"restic/128G-wasabi.env".file = ../../secrets/restic/128G-wasabi.env.age;
"restic/128G-backblaze.env".file = ../../secrets/restic/128G-backblaze.env.age;
"restic/1.6T.key" = {
file = ../../secrets/restic/1.6T.age;
owner = "restic";
group = "restic";
};
"restic/1.6T-wasabi.env".file = ../../secrets/restic/1.6T-wasabi.env.age;
"restic/1.6T-backblaze.env".file = ../../secrets/restic/1.6T-backblaze.env.age;
};
services.restic.server = {
enable = true;
appendOnly = true;
extraFlags = [ "--no-auth" ];
dataDir = cfg.path;
listenAddress = "127.0.0.1:8000"; # TODO: can this be a Unix socket?
};
services.caddy = {
enable = true;
virtualHosts."restic.ts.hillion.co.uk".extraConfig = ''
bind ${config.custom.dns.tailscale.ipv4} ${config.custom.dns.tailscale.ipv6}
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy http://localhost:8000
'';
};
systemd =
let
mkRepoInfo = repo_cfg: (if (repo_cfg.passwordFile != null) then {
serviceConfig.LoadCredential = [
"password_file:${repo_cfg.passwordFile}"
];
environment = {
RESTIC_REPOSITORY = repo_cfg.path;
RESTIC_PASSWORD_FILE = "%d/password_file";
};
} else {
serviceConfig.EnvironmentFile = repo_cfg.environmentFile;
});
mkForgetService = name: repo_cfg:
if (repo_cfg.forgetConfig != null) then
({
description = "Restic forget service for ${name}";
serviceConfig = {
User = "restic";
Group = "restic";
};
script = ''
set -xe
${pkgs.restic}/bin/restic forget ${lib.strings.concatStringsSep " " repo_cfg.forgetConfig.opts} \
--prune \
--retry-lock 30m
'';
} // (mkRepoInfo repo_cfg)) else { };
mkForgetTimer = repo_cfg:
if (repo_cfg.forgetConfig != null) then {
wantedBy = [ "timers.target" ];
timerConfig = repo_cfg.forgetConfig.timerConfig;
} else { };
mkCloneService = from_repo: clone_cfg: to_repo: {
name = "restic-clone-${from_repo.name}-${to_repo.name}";
value = lib.mkMerge [
{
description = "Restic copy from ${from_repo.name} to ${to_repo.name}";
serviceConfig = {
User = "restic";
Group = "restic";
LoadCredential = [
"from_password_file:${from_repo.cfg.passwordFile}"
];
};
environment = {
RESTIC_FROM_PASSWORD_FILE = "%d/from_password_file";
};
script = ''
set -xe
${pkgs.restic}/bin/restic copy \
--from-repo ${from_repo.cfg.path} \
--retry-lock 30m
'';
}
(mkRepoInfo to_repo.cfg)
];
};
mkCloneTimer = from_repo: clone_cfg: to_repo: {
name = "restic-clone-${from_repo.name}-${to_repo.name}";
value = {
wantedBy = [ "timers.target" ];
timerConfig = clone_cfg.timerConfig;
};
};
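# Apply fn to every (source repo, clone config, destination repo) triple
# declared under a repo's clones, merging the results into one attrset.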
mapClones = fn: builtins.listToAttrs (lib.lists.flatten (lib.mapAttrsToList
(
from_repo_name: from_repo_cfg: (builtins.map
(
clone_cfg: (fn
{ name = from_repo_name; cfg = from_repo_cfg; }
clone_cfg
{ name = clone_cfg.repo; cfg = cfg.repos."${clone_cfg.repo}"; }
)
)
from_repo_cfg.clones)
)
cfg.repos));
in
{
services = {
caddy = {
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### starts too late. Upstream NixOS Caddy does restart on failure, but the
### restart is suppressed for exit code 1. Set the exit code to 0
### (non-failure) to override this.
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
}
// lib.mapAttrs' (name: value: lib.attrsets.nameValuePair ("restic-forget-" + name) (mkForgetService name value)) cfg.repos
// mapClones mkCloneService;
timers = lib.mapAttrs' (name: value: lib.attrsets.nameValuePair ("restic-forget-" + name) (mkForgetTimer value)) cfg.repos
// mapClones mkCloneTimer;
};
};
}
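
With the default repos above, this machinery should produce forget units restic-forget-128G and restic-forget-1.6T on their OnCalendar timers, plus clone units such as restic-clone-128G-128G-wasabi and restic-clone-1.6T-1.6T-backblaze that copy snapshots offsite on their own timers.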

modules/services/tang.nix (new file, 23 lines)

@ -0,0 +1,23 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.tang;
in
{
options.custom.services.tang = {
enable = lib.mkEnableOption "tang";
};
config = lib.mkIf cfg.enable {
services.tang = {
enable = true;
ipAddressAllow = [
"138.201.252.214/32"
"10.64.50.26/32"
"10.64.50.27/32"
"10.64.50.28/32"
"10.64.50.29/32"
];
};
};
}


@ -10,20 +10,14 @@ in
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/unifi";
readOnly = true; # NixOS module only supports this directory
};
};
config = lib.mkIf cfg.enable {
users.users.unifi = {
uid = config.ids.uids.unifi;
isSystemUser = true;
group = "unifi";
description = "UniFi controller daemon user";
home = "${cfg.dataDir}";
};
users.groups.unifi = {
gid = config.ids.gids.unifi;
};
# Fix dynamically allocated user and group ids
users.users.unifi.uid = config.ids.uids.unifi;
users.groups.unifi.gid = config.ids.gids.unifi;
services.caddy = {
enable = true;
@ -38,21 +32,9 @@ in
};
};
virtualisation.oci-containers.containers = {
"unifi" = {
image = "lscr.io/linuxserver/unifi-controller:8.0.24-ls221";
environment = {
PUID = toString config.ids.uids.unifi;
PGID = toString config.ids.gids.unifi;
TZ = "Etc/UTC";
};
volumes = [ "${cfg.dataDir}:/config" ];
ports = [
"8080:8080"
"8443:8443"
"3478:3478/udp"
];
};
services.unifi = {
enable = true;
unifiPackage = pkgs.unifi8;
};
};
}


@ -23,7 +23,7 @@ in
enable = true;
virtualHosts."http://zigbee2mqtt.home.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = "reverse_proxy http://127.0.0.1:15606";
};
};
@ -75,7 +75,7 @@ in
};
services.restic.backups."zigbee2mqtt" = lib.mkIf cfg.backup {
repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/1.6T";
repository = "rest:https://restic.ts.hillion.co.uk/1.6T";
user = "zigbee2mqtt";
passwordFile = config.age.secrets."resilio/zigbee2mqtt/1.6T.key".path;


@ -1,7 +1,20 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.shell;
in
{
config = {
imports = [
./update_scripts.nix
];
options.custom.shell = {
enable = lib.mkEnableOption "shell";
};
config = lib.mkIf cfg.enable {
custom.shell.update_scripts.enable = true;
users.defaultUserShell = pkgs.zsh;
environment.systemPackages = with pkgs; [ direnv ];


@ -0,0 +1,64 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.shell.update_scripts;
update = pkgs.writeScriptBin "update" ''
#! ${pkgs.runtimeShell}
set -e
if [[ $EUID -ne 0 ]]; then
exec sudo ${pkgs.runtimeShell} "$0" "$@"
fi
if [ -n "$1" ]; then
BRANCH=$1
else
BRANCH=main
fi
cd /etc/nixos
if [ "$BRANCH" = "main" ]; then
${pkgs.git}/bin/git switch $BRANCH
${pkgs.git}/bin/git pull
else
${pkgs.git}/bin/git fetch
${pkgs.git}/bin/git switch --detach origin/$BRANCH
fi
if ! ${pkgs.nixos-rebuild}/bin/nixos-rebuild --flake "/etc/nixos#${config.networking.fqdn}" test; then
echo "WARNING: \`nixos-rebuild test' failed!"
fi
while true; do
read -p "Do you want to boot this configuration? " yn
case $yn in
[Yy]* ) break;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
${pkgs.nixos-rebuild}/bin/nixos-rebuild --flake "/etc/nixos#${config.networking.fqdn}" boot
while true; do
read -p "Would you like to reboot now? " yn
case $yn in
[Yy]* ) reboot;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
'';
in
{
options.custom.shell.update_scripts = {
enable = lib.mkEnableOption "update_scripts";
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [
update
];
};
}


@ -1,25 +0,0 @@
{ config, pkgs, lib, ... }:
{
config.age.secrets."spotify/11132032266" = {
file = ../../secrets/spotify/11132032266.age;
owner = "jake";
};
config.hardware.pulseaudio.enable = true;
config.users.users.jake.extraGroups = [ "audio" ];
config.users.users.jake.packages = with pkgs; [ spotify-tui ];
config.home-manager.users.jake.services.spotifyd = {
enable = true;
settings = {
global = {
username = "11132032266";
password_cmd = "cat ${config.age.secrets."spotify/11132032266".path}";
backend = "pulseaudio";
};
};
};
}

modules/ssh/default.nix (new file, 56 lines)

@ -0,0 +1,56 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.ssh;
in
{
options.custom.ssh = {
enable = lib.mkEnableOption "ssh";
};
config = lib.mkIf cfg.enable {
users.users =
if config.custom.user == "jake" then {
"jake".openssh.authorizedKeys.keys = [
"sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBBwJH4udKNvi9TjOBgkxpBBy7hzWqmP0lT5zE9neusCpQLIiDhr6KXYMPXWXdZDc18wH1OLi2+639dXOvp8V/wgAAAAEc3NoOg== jake@beryllium-keys"
"sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBPPJtW19jOaUsjmxc0+QibaLJ3J3yxPXSXZXwKT0Ean6VeaH5G8zG+zjt1Y6sg2d52lHgrRfeVl1xrG/UGX8qWoAAAAEc3NoOg== jakehillion@jakehillion-mbp"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo"
];
} else { };
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHosts = {
# Global Internet hosts
"ssh.gitea.hillion.co.uk".publicKey = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==";
# Tailscale hosts
"boron.cx.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtcJ7HY/vjtheMV8EN2wlTw1hU53CJebGIeRJcSkzt5";
"be.lt.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILV3OSUT+cqFqrFHZGfn7/xi5FW3n1qjUFy8zBbYs2Sm";
"dancefloor.dancefloor.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY";
"gendry.jakehillion.terminals.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c";
"homeassistant.homeassistant.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux";
"li.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQWgcDFL9UZBDKHPiEGepT1Qsc4gz3Pee0/XVHJ6V6u";
"microserver.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw";
"router.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu";
"sodium.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQmG7v/XrinPmkTU2eIoISuU3+hoV4h60Bmbwd+xDjr";
"theon.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN59psLVu3/sQORA4x3p8H3ei8MCQlcwX5T+k3kBeBMf";
};
programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
};
}


@ -1,65 +0,0 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.tailscale;
in
{
options.custom.tailscale = {
enable = lib.mkEnableOption "tailscale";
preAuthKeyFile = lib.mkOption {
type = lib.types.str;
};
advertiseRoutes = lib.mkOption {
type = with lib.types; listOf str;
default = [ ];
};
advertiseExitNode = lib.mkOption {
type = lib.types.bool;
default = false;
};
ipv4Addr = lib.mkOption { type = lib.types.str; };
ipv6Addr = lib.mkOption { type = lib.types.str; };
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ pkgs.tailscale ];
services.tailscale.enable = true;
networking.firewall.checkReversePath = lib.mkIf cfg.advertiseExitNode "loose";
systemd.services.tailscale-autoconnect = {
description = "Automatic connection to Tailscale";
# make sure tailscale is running before trying to connect to tailscale
after = [ "network-pre.target" "tailscale.service" ];
wants = [ "network-pre.target" "tailscale.service" ];
wantedBy = [ "multi-user.target" ];
# set this service as a oneshot job
serviceConfig.Type = "oneshot";
# have the job run this shell script
script = with pkgs; ''
# wait for tailscaled to settle
sleep 2
# check if we are already authenticated to tailscale
status="$(${tailscale}/bin/tailscale status -json | ${jq}/bin/jq -r .BackendState)"
if [ $status = "Running" ]; then # if so, then do nothing
exit 0
fi
# otherwise authenticate with tailscale
${tailscale}/bin/tailscale up \
--authkey "$(<${cfg.preAuthKeyFile})" \
--advertise-routes "${lib.concatStringsSep "," cfg.advertiseRoutes}" \
--advertise-exit-node=${if cfg.advertiseExitNode then "true" else "false"}
'';
};
};
}


@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDGTCCAsCgAwIBAgIUMOkPfgLpbA08ovrPt+deXQPpA9kwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ0MDBaFw0zOTA0MTAyMTQ0MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABNweW8IgrXj7Q64RxyK8s9XpbxJ8TbYVv7NALbWUahlT
QPlGX/5XoM3Z5AtISBi1irLEy5o6mx7ebNK4NmwzNlCjggEkMIIBIDAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFMy3oz9l3bwpjgtx6IqL9IH90PXcMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAdBgNV
HREEFjAUghJibG9nLmhpbGxpb24uY28udWswPAYDVR0fBDUwMzAxoC+gLYYraHR0
cDovL2NybC5jbG91ZGZsYXJlLmNvbS9vcmlnaW5fZWNjX2NhLmNybDAKBggqhkjO
PQQDAgNHADBEAiAgVRgo5V09uyMbz1Mevmxe6d2K5xvZuBElVYja/Rf99AIgZkm1
wHEq9wqVYP0oWTiEYQZ6dzKoSwxviOEZI+ttQRA=
-----END CERTIFICATE-----


@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDHDCCAsGgAwIBAgIUMHdmb+Ef9YvVmCtliDhg1gDGt8cwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ1MDBaFw0zOTA0MTAyMTQ1MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABGn2vImTE+gpWx/0ELXue7cL0eGb+I2c9VbUYcy3TBJi
G7S+wl79MBM5+5G0wKhTpBgVpXu1/NHunfM97LGZb5ejggElMIIBITAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFI6dxFPItIKnNN7/xczMOtlTytuvMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAeBgNV
HREEFzAVghNnaXRlYS5oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0
dHA6Ly9jcmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZI
zj0EAwIDSQAwRgIhAKfRSEKCGNY5x4zUNzOy6vfxgDYPfkP6iW5Ha4gNmE+QAiEA
nTsGKr2EoqEdPtnB+wVrYMblWF7/or3JpRYGs6zD2FU=
-----END CERTIFICATE-----


@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDFDCCArugAwIBAgIUedwIJx096VH/KGDgpAKK/Q8jGWUwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTIzMDBaFw0zOTA0MTAyMTIzMDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABIdc0hnQQP7tLADaCGXxZ+1BGbZ8aow/TtHl+aXDbN3t
2vVV2iLmsMbiPcJZ5e9Q2M27L8fZ0uPJP19dDvvN97SjggEfMIIBGzAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFJilRKL8wXskL/LmgH8BnIvLIpkEMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAYBgNV
HREEETAPgg1oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0dHA6Ly9j
cmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZIzj0EAwID
RwAwRAIgbexSqkt3pzCpnpqYXwC5Gmt+nG5OEqETQ6690kpIS74CIFQI3zXlx8zk
GB0BlaZdrraAQP7AuI8CcMd5vbQdnldY
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.