Compare commits


174 Commits
vms ... main

390bdaaf51 resilio: update to unstable module
Currently this pins `rslsync`'s group ID using https://github.com/NixOS/nixpkgs/pull/350055
2024-11-09 21:03:56 +00:00
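For reference, pinning a service's group ID in NixOS looks roughly like this (a sketch only; the exact option introduced by the linked PR may differ, and the GID is a placeholder):
```
{
  # Assumption: any stable, otherwise unused GID works here.
  users.groups.rslsync.gid = 983;
}
```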
ba9d54ddab chore(deps): lock file maintenance
2024-11-09 15:20:26 +00:00
843802bcb7 backups: include more git repos
2024-11-08 12:23:54 +00:00
a07c493802 stinger: update firewall for homeassistant
2024-11-06 20:12:59 +00:00
3a2d6f4e2e stinger: enable bluetooth
2024-11-06 10:34:33 +00:00
a383e013c6 homeassistant: microserver.home -> stinger.pop
2024-11-06 01:36:14 +00:00
ed3b9019f2 homeassistant: backup database
2024-11-06 01:05:52 +00:00
a3fd10be31 stinger: init host
2024-11-05 22:10:12 +00:00
79a3c62924 defaults: enable all firmware
2024-11-05 22:10:01 +00:00
0761162e34 chore(deps): update determinatesystems/nix-installer-action action to v15
2024-11-04 23:01:03 +00:00
2999a5f744 merlin: init host
2024-11-04 22:35:55 +00:00
5146d6cd6f lavd: finish cleanup
2024-11-03 18:36:01 +00:00
3ebba9d7a5 home: enable zoxide
2024-10-31 23:00:45 +00:00
1b5f342aab home-manager: enable ssh-agent
2024-10-31 22:32:57 +00:00
87d311dabe sched_ext: switch to unstable for packages
2024-10-31 21:59:09 +00:00
0cf7aa1760 tang: remove tywin ip
Missed this when cleaning up. We should probably get these static IPs from
authoritative DNS, like the Tailscale IPs, so they can't be missed again. We
could then construct the static IP mappings from that single source, moving
some of this out of router/default.nix.
2024-10-29 23:35:20 +00:00
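The idea, sketched with hypothetical names and addresses (not the real config): derive DNS records and the router's static mappings from one authoritative attrset so nothing can be missed:
```
let
  # Single source of truth; every consumer derives from this.
  staticIps = {
    "router.home.ts.hillion.co.uk" = "192.168.1.1";
    "tang1.home.ts.hillion.co.uk" = "192.168.1.20";
  };
in
{
  # e.g. render authoritative DNS A records from the attrset...
  dnsRecords = lib.mapAttrsToList (fqdn: ip: "${fqdn}. IN A ${ip}") staticIps;
  # ...and build the router's DHCP reservations from the same data.
}
```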
363b8fe3c0 tywin.storage: delete
2024-10-29 23:24:19 +00:00
ca57201ad5 tmux: increase history-limit
2024-10-29 22:54:30 +00:00
32de6b05be tmux: add kernel rev and extended hostname to status-right
2024-10-29 22:48:12 +00:00
0149d53da2 restic: backup to backblaze
2024-10-27 21:24:20 +00:00
c7efa1fad4 restic: backup to wasabi
2024-10-27 20:09:45 +00:00
dbc2931052 restic: split out common behaviour
2024-10-27 15:57:07 +00:00
817cc3f356 phoenix: temporarily add a password to debug boot issues
2024-10-27 15:37:52 +00:00
c33d5c2edd secrets: re-encrypt with boron user key
2024-10-27 00:43:47 +01:00
187c15b5ab backups: update scripts for new host/path
2024-10-26 23:47:34 +01:00
fc1fb7b528 sapling: set default merge style to :merge3
2024-10-26 19:26:48 +01:00
caa3128310 home-manager: pass through nixos stateVersion if >24.05
home-manager currently has a pinned stateVersion on all hosts, even though many
of the hosts were initialised after that point. Create a condition such that
any host initialised after 24.05 (the latest current release) will use its own
version in home-manager instead of pinning to 22.11.

Any future users can pass the stateVersion through without the `if`.

Test plan:
```
# system.stateVersion = "23.11";
$ nix eval '.#nixosConfigurations."boron.cx.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
"22.11"
```
```
# system.stateVersion = "24.05";
$ nix eval '.#nixosConfigurations."phoenix.st.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
"22.11"
```
```
# system.stateVersion = "24.11"; // no-commit change
$ nix eval '.#nixosConfigurations."phoenix.st.ts.hillion.co.uk".config.home-manager.users.root.home.stateVersion'
error:
       ...
       (stack trace truncated; use '--show-trace' to show the full trace)

       error: A definition for option `home-manager.users.root.home.stateVersion' is not of type `one of "18.09", "19.03", "19.09", "20.03", "20.09", "21.03", "21.05", "21.11", "22.05", "22.11", "23.05", "23.11", "24.05"'. Definition values:
       - In `/nix/store/8dhsknmlnv571bg100j9v9yqq1nnh346-source/modules/home/default.nix': "24.11"
```
2024-10-26 19:13:08 +01:00
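The condition itself is small; a sketch of the logic, assuming the home-manager module reads the host's stateVersion via `osConfig`:
```
home.stateVersion =
  if lib.versionOlder "24.05" osConfig.system.stateVersion
  then osConfig.system.stateVersion # initialised after 24.05: pass through
  else "22.11"; # older hosts keep the original pin
```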
72e7aead94 sapling: set ui.username
2024-10-26 18:49:14 +01:00
b5489abf98 ssh: allow on all ports for sodium/phoenix
2024-10-26 18:15:31 +01:00
9970dc413d boron: persist ssh key
2024-10-26 15:04:14 +01:00
b3fb80811c chore(deps): lock file maintenance
2024-10-26 14:00:16 +00:00
4c7a99bfb7 home-manager: enable neovim
2024-10-26 00:39:35 +01:00
edad7248a5 chore(deps): update actions/checkout action to v4.2.2
2024-10-24 23:00:54 +00:00
ce6c9a25c5 flake: fix unstable path pointing at github instead of gitea
2024-10-24 09:39:20 +01:00
3a0c8effbb flake: use generic branch from forked nixpkgs
Rather than pulling a specific feature branch into `nixpkgs-unstable`, pull a branch for `nixos-unstable`. Then we can cherry pick and rebase multiple features in the `nixpkgs` fork and keep up more easily with upstream.

The preferred method here would be a list of PRs or URLs to patch into the flake input, but that doesn't seem to be an option.

This change adds in the resilio gid PR, but is otherwise unchanged.
2024-10-23 23:27:53 +01:00
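The resulting input, as it appears in the flake.nix diff further down, pulls the fork's branch as a tarball:
```
nixpkgs-unstable.url = "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz";
```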
172e6c7415 router: enable ssh on eth0 and add work mbp key
2024-10-23 21:06:24 +01:00
9a18124847 phoenix: enable zswap
2024-10-21 23:00:04 +01:00
efbf9575f2 phoenix: enable plex
2024-10-21 22:27:12 +01:00
e03ce4e26c phoenix: enable resilio sync and backups
2024-10-21 20:49:13 +01:00
b18ae44ccb resilio: place storagePath in directoryPath by default
2024-10-21 08:54:20 +01:00
e80ef10eb7 resilio: calculate default deviceName automatically
2024-10-21 08:54:20 +01:00
26beb4116a phoenix: serve restic
2024-10-21 00:39:36 +01:00
1822d07cfe phoenix: enable downloads
2024-10-21 00:20:42 +01:00
a6efbb1b68 phoenix: import practical-defiant-coffee zpool
2024-10-20 20:07:59 +01:00
6fe4ca5b61 phoenix: mount disk btrfs partitions and add chia
2024-10-20 20:07:59 +01:00
3e8dcd359e secrets: clean up tywin secrets
2024-10-20 20:07:16 +01:00
86bca8ce1c tywin: prepare for zpool export
2024-10-20 19:37:26 +01:00
ee3b420220 backups/git: move tywin->phoenix
2024-10-20 17:40:20 +01:00
58ce44df6b phoenix: add chia
2024-10-20 16:29:55 +01:00
f34592926e phoenix: init host
2024-10-20 16:07:21 +01:00
7dd820685f backup-git: fix systemd timer
2024-10-19 18:30:57 +01:00
4047b0d8b2 router: reserve ips for nanokvms
2024-10-19 16:53:35 +01:00
d7a8562c7d restic: modularise server component
2024-10-19 15:24:32 +01:00
ea163448df homeassistant: enable waze
2024-10-19 00:39:33 +01:00
a8288ec678 scx_layered: get from forked nixpkgs
2024-10-18 13:56:40 +01:00
50a8411ac8 nixos: add nixpkgs-unstable to flake registry
2024-10-13 00:33:57 +01:00
6f5b9430c9 prometheus: add alert for resilio sync going down
2024-10-12 21:39:00 +01:00
33cdcdca0a prometheus: enable systemd collector
2024-10-12 15:27:13 +01:00
c42a4e5297 chore(deps): lock file maintenance
2024-10-12 13:37:53 +00:00
2656c0dba9 scx_lavd: package and ship
2024-10-12 00:54:02 +01:00
961acd80d7 scx_layered: package and ship
2024-10-11 20:15:55 +01:00
eb07e4c4fd chore(deps): update actions/checkout action to v4.2.1
2024-10-07 23:00:25 +00:00
4eaae0fa75 isponsorblocktv: deploy docker container
2024-10-06 21:38:06 +01:00
72955e2377 homeassistant: announce locally and deploy to hallway tablet
2024-10-06 20:43:48 +01:00
0a2330cb90 www: fix cloning script
2024-10-06 16:35:59 +01:00
3d8a60da5b sched_ext: bump kernel to 6.12-rc1
Removes the custom kernel features and requires any host running
sched_ext to pull a kernel of at least 6.12. It looks at
pkgs.unstable.linuxPackages first; if that's too old it falls back to
pkgs.linuxPackages_latest, and if that's still too old it goes for
pkgs.unstable.linuxPackages_testing.

The plan is to leave `boot.kernelPackages` alone if new enough, but
we'll keep the assertion. Some schedulers might require more specific
kernel constraints in the future.
2024-10-03 00:17:59 +01:00
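A sketch of that selection order (assuming the `pkgs.unstable` overlay described elsewhere in this log; `lib.findFirst` picks the first candidate that is new enough):
```
let
  goodEnough = kp: lib.versionAtLeast kp.kernel.version "6.12";
  candidates = with pkgs; [
    unstable.linuxPackages
    linuxPackages_latest
    unstable.linuxPackages_testing
  ];
in
{
  boot.kernelPackages = lib.findFirst goodEnough (lib.last candidates) candidates;
}
```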
c0e331bf80 boron: enable resilio sync
2024-09-28 15:01:30 +01:00
9c419376c5 chore(deps): lock file maintenance
2024-09-28 12:31:15 +00:00
4332fee3ce chore(deps): update actions/checkout action to v4.2.0
2024-09-26 23:00:18 +00:00
ceb8591705 step-ca: pin uid and gid
2024-09-23 20:30:35 +01:00
415a061842 prometheus: move id pinning to correct module
2024-09-23 20:26:34 +01:00
31a9828430 prometheus: add service and enable reporting globally (#330)
## Test plan:

- https://prometheus.ts.hillion.co.uk/graph?g0.expr=1%20-%20(node_filesystem_avail_bytes%7Bmountpoint%20%3D%20%22%2F%22%2C%20device%3D%22tmpfs%22%7D%20%2F%20node_filesystem_size_bytes%7Bmountpoint%20%3D%20%22%2F%22%2C%20device%3D%22tmpfs%22%7D)&g0.tab=0&g0.display_mode=lines&g0.show_exemplars=0&g0.range_input=1h - reports percentage used on all tmpfs roots. This is exactly what I wanted, in the future I might add alerts for it as high tmpfs usage is a sign of something being wrong and is likely to lead to OOMing.

Aside: NixOS is awesome. I just deployed full monitoring to every host I have and all future hosts in minutes.
Reviewed-on: #330
Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Co-committed-by: Jake Hillion <jake@hillion.co.uk>
2024-09-23 20:24:31 +01:00
7afa21e537 chia: update to 2.4.3
2024-09-22 21:09:31 +01:00
739e1f6ab3 home: move tailscale exit node from microserver to router (#328)
## Test plan:

- Connected MacBook to iPhone hotspot (off network).
- With Tailscale connected can ping/ssh to microserver.home on both LANs (main and IoT).
- With exit node enabled traceroute shows router's tailscale IP as a hop.
- With exit node enabled ipinfo.io shows my home IP.
- With exit node disabled ipinfo.io shows an EE IP.

iPhone exit node is still playing up: it shows no Internet connection. This behaviour was identical with the Pi setup that this replaces; maybe an iOS 18 bug in Tailscale? Treating this as not a regression.
Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Co-committed-by: Jake Hillion <jake@hillion.co.uk>
2024-09-22 21:04:53 +01:00
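For reference, the NixOS side of advertising an exit node looks roughly like this (a sketch; the real router module may wire it differently):
```
services.tailscale = {
  enable = true;
  useRoutingFeatures = "server"; # enables IP forwarding for advertised routes
  extraUpFlags = [ "--advertise-exit-node" ];
};
```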
8933d38d36 sched_ext: ship pre-release 6.12 kernel
2024-09-22 16:18:04 +01:00
0ad31dddae gendry: decrypt encrypted disk with clevis/tang
2024-09-22 11:06:03 +01:00
d5c2f8d543 router: setup cameras vlan
2024-09-17 09:20:27 +01:00
1189a41df9 chore(deps): lock file maintenance
2024-09-15 16:01:08 +00:00
39730d2ec3 macbook: add shell utilities
2024-09-14 02:39:26 +01:00
ac6f285400 resilio: require mounts be available
Without this resilio fails on boot on tywin.storage where the paths are
on a ZFS array which gets mounted reliably later than the resilio
service attempts to start.
2024-09-14 02:30:20 +01:00
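The usual systemd fix for this is an explicit mount dependency; a sketch, assuming the option is wired through to the unit and with an illustrative path:
```
# Wait for the storage mount before starting resilio, instead of racing it at boot.
systemd.services.resilio.unitConfig.RequiresMountsFor = [ "/data/sync" ];
```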
e4b8fd7438 chore(deps): update determinatesystems/nix-installer-action action to v14
2024-09-10 00:00:51 +00:00
24be3394bc chore(deps): update determinatesystems/magic-nix-cache-action action to v8
2024-09-09 23:00:50 +00:00
ba053c539c boron: enable podman
2024-09-06 19:04:25 +01:00
3aeeb69c2b nix-darwin: add macbook
2024-09-05 00:50:02 +01:00
85246af424 caddy: update to unstable
The default config for automatic ACME no longer works in Caddy <2.8.0.
This is due to changes with ZeroSSL's auth. Update to unstable Caddy
which is new enough to renew certs again.

Context: https://github.com/caddyserver/caddy/releases/tag/v2.8.0

Add `pkgs.unstable` as an overlay as recommended on the NixOS wiki. This
is needed here as Caddy must be runnable on all architectures.
2024-09-05 00:04:08 +01:00
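The overlay referenced here appears later in the flake.nix diff; combined with the Caddy module's package option, the upgrade looks roughly like this (a sketch):
```
nixpkgs.overlays = [
  (final: prev: {
    unstable = nixpkgs-unstable.legacyPackages.${prev.system};
  })
];
services.caddy.package = pkgs.unstable.caddy; # >=2.8.0, new enough for ZeroSSL
```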
ba7a39b66e chore(deps): pin dependencies
2024-09-02 23:00:12 +00:00
df31ebebf8 boron: bump tmpfs to 100% of RAM
2024-08-31 22:04:38 +01:00
2f3a33ad8e chore(deps): lock file maintenance
2024-08-30 19:01:46 +00:00
343b34b4dc boron: support sched_ext in kernel
2024-08-30 18:52:31 +01:00
264799952e bathroom_light: trust switchbot if more recently updated
2024-08-30 18:46:38 +01:00
5cef32cf1e gitea actions: use cache for nix
2024-08-30 18:39:02 +01:00
6cc70e117d tywin: mount d7
2024-08-22 15:17:11 +01:00
a52aed5778 gendry: use zram swap
2024-08-18 13:51:28 +01:00
70b53b5c01 chore(deps): lock file maintenance
2024-08-18 10:35:15 +00:00
3d642e2320 boron: move postgresqlBackup to disk to reduce ram pressure
2024-08-09 23:37:16 +01:00
41d5f0cc53 homeassistant: add sonos
2024-08-08 18:31:10 +01:00
974c947130 homeassistant: add smartthings
2024-08-04 18:15:34 +01:00
8a9498f8d7 homeassistant: expose sleep_mode to google
2024-08-04 17:56:32 +01:00
2ecdafe1cf chore(deps): lock file maintenance
2024-08-03 12:15:23 +01:00
db5dc5aee6 step-ca: enable server on sodium and load root certs
2024-08-01 23:28:22 +01:00
f96f03ba0c boron: update to Linux 6.10
2024-07-27 15:16:59 +01:00
e81cad1670 chore(deps): lock file maintenance
2024-07-26 13:40:49 +00:00
67c8e3dcaf homeassistant: migrate to basnijholt/adaptive-lighting
2024-07-22 11:16:34 +01:00
1052379119 unifi: switch to nixos module
2024-07-19 16:43:53 +01:00
0edb8394c8 tywin: mount d6
2024-07-17 22:19:41 +01:00
bbab551b0f be.lt: connect to Hillion WPA3 Network
2024-07-17 17:10:08 +01:00
13c937b196 chore(deps): lock file maintenance
2024-07-17 14:14:27 +00:00
6bdaca40e0 tmux: index from 0 and always allow attach
2024-07-17 15:02:19 +01:00
462f0eecf4 gendry: allow luks discards
2024-07-17 09:33:33 +01:00
5dcf3b8e3f chia: update to 2.4.1
2024-07-10 10:01:16 +01:00
b0618cd3dc chore(deps): lock file maintenance
2024-07-06 22:31:08 +00:00
a9829eea9e chore(deps): lock file maintenance
2024-06-23 10:18:56 +00:00
cfd64e9a73 chore(deps): lock file maintenance
2024-06-16 12:13:03 +00:00
b3af1739a8 chore(deps): update actions/checkout action to v4.1.7
2024-06-13 23:01:06 +00:00
cde6bdd498 tywin: enable clevis/tang for boot
2024-06-10 22:34:28 +01:00
bd5efa3648 tywin: encrypt root disk
2024-06-09 23:14:44 +01:00
30679f9f4b sodium: add cache directory on the sd card
2024-06-02 22:41:49 +01:00
67644162e1 sodium: rekey
accidentally ran `rm -r /data`...
2024-06-02 21:45:03 +01:00
81c77de5ad chore(deps): lock file maintenance
2024-06-02 13:15:18 +00:00
a0f93c73d0 sodium.pop: add rpi5 host
2024-05-25 22:56:27 +01:00
78705d440a homeassistant: only switch bathroom light when it is already on
Although the system now knows whether the bathroom light is on, it switches the switch every time the light should be turned off, regardless of whether it's already off. Because this is a battery-powered device that performs a physical movement, this runs the battery out very fast. Adjust the system to only switch the light off if it thinks it's on, even though this has the potential for desyncs.
2024-05-25 22:03:11 +01:00
3f829236a2 homeassistant: read bathroom light status from motion sensor
2024-05-25 17:03:57 +01:00
7b221eda07 theon: stop scripting networking
Unsure why this host is using systemd-networkd, but leave that unchanged
and have NixOS know about it to prevent a warning about loss of
connectivity on build.
2024-05-25 16:40:19 +01:00
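A sketch of what "have NixOS know about it" plausibly amounts to (an assumption; the actual change may differ):
```
# Declare that this host runs systemd-networkd so NixOS stops warning
# about connectivity loss when rebuilding. (Assumed option for this commit.)
networking.useNetworkd = true;
```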
22305815c6 matrix: fix warning about renamed sliding sync
2024-05-25 16:33:05 +01:00
fa493123fc router: dhcp: add APC vendor specific cookie
2024-05-24 22:14:16 +01:00
62e61bec8a matrix: add sliding sync
2024-05-24 10:18:30 +01:00
50d70ed8bc boron: update kernel to 6.9
2024-05-23 22:41:18 +01:00
796bbc7a68 chore(deps): update nixpkgs to nixos-24.05 (#271)
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [nixpkgs](https://github.com/NixOS/nixpkgs) | major | `nixos-23.11` -> `nixos-24.05` |

---

### Release Notes

<details>
<summary>NixOS/nixpkgs (nixpkgs)</summary>

### [`vnixos-24.05`](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

[Compare Source](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Reviewed-on: #271
Co-authored-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
Co-committed-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
2024-05-23 22:40:58 +01:00
8123653a92 jorah.cx: delete
2024-05-21 22:43:56 +01:00
55ade830a8 router: add more dhcp reservations
2024-05-21 20:09:26 +01:00
a9c9600b14 matrix: move jorah->boron
2024-05-18 19:14:39 +01:00
eae5e105ff unifi: move jorah->boron
2024-05-18 16:52:22 +01:00
f1fd6ee270 gitea: fix ips in iptables rules
2024-05-18 15:34:43 +01:00
1dc370709a chore(deps): lock file maintenance
2024-05-18 13:03:00 +00:00
de905e23a8 chore(deps): update actions/checkout action to v4.1.6
2024-05-17 00:01:14 +00:00
9247ae5d91 chore(deps): update cachix/install-nix-action action to v27
2024-05-16 23:01:13 +00:00
7298955391 tywin: enable automatic btrfs scrubbing
2024-05-15 20:53:57 +01:00
f59824ad62 gitea: move jorah->boron
2024-05-12 13:11:54 +01:00
bff93529aa www.global: move jorah->boron
2024-05-12 12:11:15 +01:00
13bfe6f787 boron: enable authoritative dns
2024-05-10 22:44:48 +01:00
ad8c8b9b19 boron: enable version_tracker
2024-05-10 22:12:49 +01:00
b7c07d0107 boron: enable gitea actions
2024-05-10 21:52:48 +01:00
9cc389f865 boron: remove folding@home
The update from 23.11 to 24.05 brought a new folding@home version that
doesn't work. It doesn't work by default and there is 0 documentation on
writing xml configs manually, and even the web UI redirects to a
nonsense public website now. Unfortunately this means I'm going to let
this box idle rather than doing useful work, for now at least.
2024-05-10 21:02:16 +01:00
2153c22d7f chore(deps): update actions/checkout action to v4.1.5
2024-05-09 23:00:45 +00:00
a4235b2581 boron: move to kernel 6.8 and re-image
The extremely modern hardware on this server appears to experience
kernel crashes with the default NixOS 23.11 kernel 6.1 and the default
NixOS 24.05 kernel 6.6. Empirical testing shows the server staying up on
Ubuntu 22's 6.2 and explicit NixOS kernel 6.8.

The server was wiped during this testing so now needs reimaging.
2024-05-08 21:11:09 +01:00
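Pinning an explicit kernel in NixOS is a one-liner; a sketch of the 6.8 pin described above (package attribute as it existed in nixpkgs at the time):
```
# Stay off the crashing 6.1/6.6 defaults on this hardware.
boot.kernelPackages = pkgs.linuxPackages_6_8;
```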
36ce6ca185 chore(deps): lock file maintenance
2024-05-08 16:39:43 +00:00
e3887e320e tywin: add zram swap
2024-05-06 23:25:08 +01:00
a272cd0661 downloads: add explicit nameservers
2024-05-06 00:07:25 +01:00
1ca4daab9c locations: move attrset into config block
2024-04-28 10:39:40 +01:00
745ea58dec homeassistant: update trusted proxies
2024-04-27 19:14:12 +01:00
348bca745b jorah: add authoritative dns server
2024-04-27 18:54:46 +01:00
0ef24c14e7 tailscale: update to included nixos module
2024-04-27 15:36:45 +01:00
d9233021c7 add enable options for modules/common/default
2024-04-27 13:46:06 +01:00
b39549e1a9 chore(deps): lock file maintenance
2024-04-27 11:00:24 +00:00
8fdd915e76 router.home: enable unbound dns server
2024-04-26 21:40:17 +01:00
62d62500ae renovate: fix gitea actions schedule
Also pin patch version of Gitea actions. This should treat each future
update as an update rather than a relock which should show the changelog
in the PR.
2024-04-25 19:40:00 +01:00
b012d48e1d chore(deps): update actions/checkout digest to 0ad4b8f
2024-04-25 15:01:12 +00:00
eba1dae06b chia: update to 2.2.1
2024-04-24 23:36:51 +01:00
b6ef41cae0 renovate: auto-merge github actions
2024-04-23 22:24:28 +01:00
700ca88feb chore(deps): pin cachix/install-nix-action action to 8887e59
2024-04-23 19:58:42 +00:00
1c75fa88a7 boron.cx: add new dedicated server
2024-04-23 20:45:44 +01:00
c3447b3ec9 renovate: extend recommended config and pin github actions
2024-04-23 20:27:40 +01:00
5350581676 gitea.actions: actually check formatting
2024-04-23 20:23:54 +01:00
4d1521e4b4 be.lt: add beryllium laptop
2024-04-21 17:15:14 +01:00
88b33598d7 microserver.parents -> li.pop
2024-04-20 13:45:00 +01:00
4a09f50889 chore(deps): lock file maintenance
2024-04-19 17:02:52 +00:00
cf76a055e7 renovate: always rebase and update schedule
2024-04-17 23:22:29 +01:00
f4f6c66098 jorah: enable zramSwap
2024-04-17 23:04:04 +01:00
2c432ce986 jorah: start folding@home
2024-04-17 22:55:52 +01:00
d6b15a1f25 known_hosts: add jorah and theon
2024-04-14 14:35:38 +01:00
bd34d0e3ad chore(deps): lock file maintenance
2024-04-14 11:02:06 +00:00
52caf6edf9 gitea.actions: nixify basic docker runner
2024-04-14 00:09:28 +01:00
016d0e61b5 www: proxy some domains via cloudflare
2024-04-13 23:02:22 +01:00
b4a33bb6b2 jorah: fix dual networking setup
2024-04-13 16:45:20 +01:00
167 changed files with 4343 additions and 1372 deletions


@@ -11,14 +11,13 @@ jobs:
   flake:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
-      - name: Prepare for Nix installation
-        run: |
-          apt-get update
-          apt-get install -y sudo
-      - uses: cachix/install-nix-action@v26
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+      - uses: DeterminateSystems/nix-installer-action@b92f66560d6f97d6576405a7bae901ab57e72b6a # v15
+      - uses: DeterminateSystems/magic-nix-cache-action@87b14cf437d03d37989d87f0fa5ce4f5dc1a330b # v8
       - name: lint
-        run: nix fmt
+        run: |
+          nix fmt
+          git diff --exit-code
       - name: flake check
         run: nix flake check --all-systems
         timeout-minutes: 10

README.md

@@ -10,3 +10,4 @@ Raspberry Pi images that support Tailscale and headless SSH can be built using a
 nixos-generate -f sd-aarch64-installer --system aarch64-linux -c hosts/microserver.home.ts.hillion.co.uk/default.nix
 cp SOME_OUTPUT out.img.zst
+Alternatively, a Raspberry Pi image with headless SSH can be easily built using the logic in [this repo](https://github.com/Robertof/nixos-docker-sd-image-builder/tree/master).

darwin/jakehillion-mba-m2-15/configuration.nix

@@ -0,0 +1,27 @@
{ config, pkgs, ... }:

{
  config = {
    system.stateVersion = 4;

    networking.hostName = "jakehillion-mba-m2-15";

    nix = {
      useDaemon = true;
    };

    programs.zsh.enable = true;

    security.pam.enableSudoTouchIdAuth = true;

    environment.systemPackages = with pkgs; [
      fd
      htop
      mosh
      neovim
      nix
      ripgrep
      sapling
    ];
  };
}

flake.lock

@@ -2,7 +2,9 @@
   "nodes": {
     "agenix": {
       "inputs": {
-        "darwin": "darwin",
+        "darwin": [
+          "darwin"
+        ],
         "home-manager": [
           "home-manager"
         ],
@@ -12,11 +14,11 @@
         "systems": "systems"
       },
       "locked": {
-        "lastModified": 1712079060,
-        "narHash": "sha256-/JdiT9t+zzjChc5qQiF+jhrVhRt8figYH29rZO7pFe4=",
+        "lastModified": 1723293904,
+        "narHash": "sha256-b+uqzj+Wa6xgMS9aNbX4I+sXeb5biPDi39VgvSFqFvU=",
         "owner": "ryantm",
         "repo": "agenix",
-        "rev": "1381a759b205dff7a6818733118d02253340fd5e",
+        "rev": "f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41",
         "type": "github"
       },
       "original": {
@@ -28,35 +30,53 @@
     "darwin": {
       "inputs": {
         "nixpkgs": [
-          "agenix",
           "nixpkgs"
         ]
       },
       "locked": {
-        "lastModified": 1700795494,
-        "narHash": "sha256-gzGLZSiOhf155FW7262kdHo2YDeugp3VuIFb4/GGng0=",
+        "lastModified": 1731153869,
+        "narHash": "sha256-3Ftf9oqOypcEyyrWJ0baVkRpvQqroK/SVBFLvU3nPuc=",
         "owner": "lnl7",
         "repo": "nix-darwin",
-        "rev": "4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d",
+        "rev": "5c74ab862c8070cbf6400128a1b56abb213656da",
         "type": "github"
       },
       "original": {
         "owner": "lnl7",
-        "ref": "master",
         "repo": "nix-darwin",
         "type": "github"
       }
     },
+    "disko": {
+      "inputs": {
+        "nixpkgs": [
+          "nixpkgs"
+        ]
+      },
+      "locked": {
+        "lastModified": 1731060864,
+        "narHash": "sha256-aYE7oAYZ+gPU1mPNhM0JwLAQNgjf0/JK1BF1ln2KBgk=",
+        "owner": "nix-community",
+        "repo": "disko",
+        "rev": "5e40e02978e3bd63c2a6a9fa6fa8ba0e310e747f",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nix-community",
+        "repo": "disko",
+        "type": "github"
+      }
+    },
     "flake-utils": {
       "inputs": {
         "systems": "systems_2"
       },
       "locked": {
-        "lastModified": 1710146030,
-        "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
+        "lastModified": 1726560853,
+        "narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
+        "rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
         "type": "github"
       },
       "original": {
@@ -72,27 +92,47 @@
         ]
       },
       "locked": {
-        "lastModified": 1712386041,
-        "narHash": "sha256-dA82pOMQNnCJMAsPG7AXG35VmCSMZsJHTFlTHizpKWQ=",
+        "lastModified": 1726989464,
+        "narHash": "sha256-Vl+WVTJwutXkimwGprnEtXc/s/s8sMuXzqXaspIGlwM=",
         "owner": "nix-community",
         "repo": "home-manager",
-        "rev": "d6bb9f934f2870e5cbc5b94c79e9db22246141ff",
+        "rev": "2f23fa308a7c067e52dfcc30a0758f47043ec176",
         "type": "github"
       },
       "original": {
         "owner": "nix-community",
-        "ref": "release-23.11",
+        "ref": "release-24.05",
         "repo": "home-manager",
         "type": "github"
       }
     },
+    "home-manager-unstable": {
+      "inputs": {
+        "nixpkgs": [
+          "nixpkgs-unstable"
+        ]
+      },
+      "locked": {
+        "lastModified": 1730837930,
+        "narHash": "sha256-0kZL4m+bKBJUBQse0HanewWO0g8hDdCvBhudzxgehqc=",
+        "owner": "nix-community",
+        "repo": "home-manager",
+        "rev": "2f607e07f3ac7e53541120536708e824acccfaa8",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nix-community",
+        "repo": "home-manager",
+        "type": "github"
+      }
+    },
     "impermanence": {
       "locked": {
-        "lastModified": 1708968331,
-        "narHash": "sha256-VUXLaPusCBvwM3zhGbRIJVeYluh2uWuqtj4WirQ1L9Y=",
+        "lastModified": 1730403150,
+        "narHash": "sha256-W1FH5aJ/GpRCOA7DXT/sJHFpa5r8sq2qAUncWwRZ3Gg=",
         "owner": "nix-community",
         "repo": "impermanence",
-        "rev": "a33ef102a02ce77d3e39c25197664b7a636f9c30",
+        "rev": "0d09341beeaa2367bac5d718df1404bf2ce45e6f",
         "type": "github"
       },
       "original": {
@@ -102,44 +142,60 @@
         "type": "github"
       }
     },
-    "nixpkgs": {
+    "nixos-hardware": {
       "locked": {
-        "lastModified": 1712437997,
-        "narHash": "sha256-g0whLLwRvgO2FsyhY8fNk+TWenS3jg5UdlWL4uqgFeo=",
+        "lastModified": 1730919458,
+        "narHash": "sha256-yMO0T0QJlmT/x4HEyvrCyigGrdYfIXX3e5gWqB64wLg=",
         "owner": "nixos",
-        "repo": "nixpkgs",
-        "rev": "e38d7cb66ea4f7a0eb6681920615dfcc30fc2920",
+        "repo": "nixos-hardware",
+        "rev": "e1cc1f6483393634aee94514186d21a4871e78d7",
         "type": "github"
       },
       "original": {
         "owner": "nixos",
-        "ref": "nixos-23.11",
+        "repo": "nixos-hardware",
+        "type": "github"
+      }
+    },
+    "nixpkgs": {
+      "locked": {
+        "lastModified": 1730963269,
+        "narHash": "sha256-rz30HrFYCHiWEBCKHMffHbMdWJ35hEkcRVU0h7ms3x0=",
+        "owner": "nixos",
+        "repo": "nixpkgs",
+        "rev": "83fb6c028368e465cd19bb127b86f971a5e41ebc",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nixos",
+        "ref": "nixos-24.05",
         "repo": "nixpkgs",
         "type": "github"
       }
     },
     "nixpkgs-unstable": {
       "locked": {
-        "lastModified": 1712439257,
-        "narHash": "sha256-aSpiNepFOMk9932HOax0XwNxbA38GOUVOiXfUVPOrck=",
-        "owner": "nixos",
-        "repo": "nixpkgs",
-        "rev": "ff0dbd94265ac470dda06a657d5fe49de93b4599",
-        "type": "github"
+        "lastModified": 1730867498,
+        "narHash": "sha256-Ce3a1w7Qf+UEPjVJcXxeSiWyPMngqf1M2EIsmqiluQw=",
+        "rev": "9240e11a83307a6e8cf2254340782cba4aa782fd",
+        "type": "tarball",
+        "url": "https://gitea.hillion.co.uk/api/v1/repos/JakeHillion/nixpkgs/archive/9240e11a83307a6e8cf2254340782cba4aa782fd.tar.gz"
       },
       "original": {
-        "owner": "nixos",
-        "ref": "nixos-unstable",
-        "repo": "nixpkgs",
-        "type": "github"
+        "type": "tarball",
+        "url": "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz"
       }
     },
     "root": {
       "inputs": {
         "agenix": "agenix",
+        "darwin": "darwin",
+        "disko": "disko",
         "flake-utils": "flake-utils",
         "home-manager": "home-manager",
+        "home-manager-unstable": "home-manager-unstable",
         "impermanence": "impermanence",
+        "nixos-hardware": "nixos-hardware",
         "nixpkgs": "nixpkgs",
         "nixpkgs-unstable": "nixpkgs-unstable"
       }

flake.nix

@@ -1,61 +1,109 @@
 {
   inputs = {
-    nixpkgs.url = "github:nixos/nixpkgs/nixos-23.11";
-    nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable";
+    nixpkgs.url = "github:nixos/nixpkgs/nixos-24.05";
+    nixpkgs-unstable.url = "https://gitea.hillion.co.uk/JakeHillion/nixpkgs/archive/nixos-unstable.tar.gz";
+    nixos-hardware.url = "github:nixos/nixos-hardware";
 
     flake-utils.url = "github:numtide/flake-utils";
 
+    darwin.url = "github:lnl7/nix-darwin";
+    darwin.inputs.nixpkgs.follows = "nixpkgs";
+
     agenix.url = "github:ryantm/agenix";
     agenix.inputs.nixpkgs.follows = "nixpkgs";
+    agenix.inputs.darwin.follows = "darwin";
     agenix.inputs.home-manager.follows = "home-manager";
 
-    home-manager.url = "github:nix-community/home-manager/release-23.11";
+    home-manager.url = "github:nix-community/home-manager/release-24.05";
     home-manager.inputs.nixpkgs.follows = "nixpkgs";
 
+    home-manager-unstable.url = "github:nix-community/home-manager";
+    home-manager-unstable.inputs.nixpkgs.follows = "nixpkgs-unstable";
+
     impermanence.url = "github:nix-community/impermanence/master";
+
+    disko.url = "github:nix-community/disko";
+    disko.inputs.nixpkgs.follows = "nixpkgs";
   };
 
   description = "Hillion Nix flake";
 
-  outputs = { self, nixpkgs, nixpkgs-unstable, flake-utils, agenix, home-manager, impermanence, ... }@inputs: {
-    nixosConfigurations =
-      let
-        fqdns = builtins.attrNames (builtins.readDir ./hosts);
-        getSystemOverlays = system: nixpkgsConfig: [
-          (final: prev: {
-            "storj" = final.callPackage ./pkgs/storj.nix { };
-          })
-        ];
-        mkHost = fqdn:
-          let system = builtins.readFile ./hosts/${fqdn}/system;
-          in
-          nixpkgs.lib.nixosSystem {
-            inherit system;
-            specialArgs = inputs;
-            modules = [
-              ./hosts/${fqdn}/default.nix
-              ./modules/default.nix
-
-              agenix.nixosModules.default
-              impermanence.nixosModules.impermanence
-
-              home-manager.nixosModules.default
-              {
-                home-manager.sharedModules = [
-                  impermanence.nixosModules.home-manager.impermanence
-                ];
-              }
-
-              ({ config, ... }: {
-                nix.registry.nixpkgs.flake = nixpkgs; # pin `nix shell` nixpkgs
-                system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
-                nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
-              })
-            ];
-          };
-      in
-      nixpkgs.lib.genAttrs fqdns mkHost;
-  } // flake-utils.lib.eachDefaultSystem (system: {
-    formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
-  });
+  outputs =
+    { self
+    , agenix
+    , darwin
+    , disko
+    , flake-utils
+    , home-manager
+    , home-manager-unstable
+    , impermanence
+    , nixos-hardware
+    , nixpkgs
+    , nixpkgs-unstable
+    , ...
+    }@inputs:
+    let
+      getSystemOverlays = system: nixpkgsConfig: [
+        (final: prev: {
+          unstable = nixpkgs-unstable.legacyPackages.${prev.system};
+          "storj" = final.callPackage ./pkgs/storj.nix { };
+        })
+      ];
+    in
+    {
+      nixosConfigurations =
+        let
+          fqdns = builtins.attrNames (builtins.readDir ./hosts);
+          mkHost = fqdn:
+            let
+              system = builtins.readFile ./hosts/${fqdn}/system;
+              func = if builtins.pathExists ./hosts/${fqdn}/unstable then nixpkgs-unstable.lib.nixosSystem else nixpkgs.lib.nixosSystem;
+              home-manager-pick = if builtins.pathExists ./hosts/${fqdn}/unstable then home-manager-unstable else home-manager;
+            in
+            func {
+              inherit system;
+              specialArgs = inputs;
+              modules = [
+                ./hosts/${fqdn}/default.nix
+                ./modules/default.nix
+
+                agenix.nixosModules.default
+                impermanence.nixosModules.impermanence
+                disko.nixosModules.disko
+
+                home-manager-pick.nixosModules.default
+                {
+                  home-manager.sharedModules = [
+                    impermanence.nixosModules.home-manager.impermanence
+                  ];
+                }
+
+                ({ config, ... }: {
+                  system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
+                  nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
+                })
+              ];
+            };
+        in
+        nixpkgs.lib.genAttrs fqdns mkHost;
+
+      darwinConfigurations = {
+        jakehillion-mba-m2-15 = darwin.lib.darwinSystem {
+          system = "aarch64-darwin";
+          specialArgs = inputs;
+          modules = [
+            ./darwin/jakehillion-mba-m2-15/configuration.nix
+            ({ config, ... }: {
+              nixpkgs.overlays = getSystemOverlays "aarch64-darwin" config.nixpkgs.config;
+            })
+          ];
+        };
+      };
+    } // flake-utils.lib.eachDefaultSystem (system: {
+      formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
+    });
 }

hosts/be.lt.ts.hillion.co.uk/default.nix

@@ -0,0 +1,55 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
  ];

  config = {
    system.stateVersion = "23.11";

    networking.hostName = "be";
    networking.domain = "lt.ts.hillion.co.uk";

    boot.loader.systemd-boot.enable = true;
    boot.loader.efi.canTouchEfiVariables = true;

    custom.defaults = true;

    ## Impermanence
    custom.impermanence = {
      enable = true;
      userExtraFiles.jake = [
        ".ssh/id_ecdsa_sk_keys"
      ];
    };

    ## WiFi
    age.secrets."wifi/be.lt.ts.hillion.co.uk".file = ../../secrets/wifi/be.lt.ts.hillion.co.uk.age;
    networking.wireless = {
      enable = true;
      environmentFile = config.age.secrets."wifi/be.lt.ts.hillion.co.uk".path;
      networks = {
        "Hillion WPA3 Network".psk = "@HILLION_WPA3_NETWORK_PSK@";
      };
    };

    ## Desktop
    custom.users.jake.password = true;
    custom.desktop.awesome.enable = true;

    ## Tailscale
    age.secrets."tailscale/be.lt.ts.hillion.co.uk".file = ../../secrets/tailscale/be.lt.ts.hillion.co.uk.age;
    services.tailscale = {
      enable = true;
      authKeyFile = config.age.secrets."tailscale/be.lt.ts.hillion.co.uk".path;
    };

    security.sudo.wheelNeedsPassword = lib.mkForce true;

    ## Enable btrfs compression
    fileSystems."/data".options = [ "compress=zstd" ];
    fileSystems."/nix".options = [ "compress=zstd" ];
  };
}

hosts/be.lt.ts.hillion.co.uk/hardware-configuration.nix

@@ -0,0 +1,59 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [
      (modulesPath + "/installer/scan/not-detected.nix")
    ];

  boot.initrd.availableKernelModules = [ "xhci_pci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    {
      device = "tmpfs";
      fsType = "tmpfs";
      options = [ "mode=0755" ];
    };

  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/D184-A79B";
      fsType = "vfat";
    };

  fileSystems."/nix" =
    {
      device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
      fsType = "btrfs";
      options = [ "subvol=nix" ];
    };

  boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/c8ffa91a-5152-4d84-8995-01232fd5acd6";

  fileSystems."/data" =
    {
      device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
      fsType = "btrfs";
      options = [ "subvol=data" ];
    };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.enp0s20f0u1u4.useDHCP = lib.mkDefault true;
  # networking.interfaces.wlp1s0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  powerManagement.cpuFreqGovernor = lib.mkDefault "powersave";
  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

hosts/boron.cx.ts.hillion.co.uk/README.md

@@ -0,0 +1,7 @@
# boron.cx.ts.hillion.co.uk

Additional installation step for Clevis/Tang:

    $ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/boron.cx.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
    $ sudo chown root:root /mnt/data/disk_encryption.jwe
    $ sudo chmod 0400 /mnt/data/disk_encryption.jwe

hosts/boron.cx.ts.hillion.co.uk/clevis_config.json

@@ -0,0 +1,13 @@
{
  "t": 1,
  "pins": {
    "tang": [
      {
        "url": "http://80.229.251.26:7654"
      },
      {
        "url": "http://185.240.111.53:7654"
      }
    ]
  }
}
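Note: with `"t": 1` (the sss threshold), any single reachable Tang server is enough to unlock the disks; the second pin is purely for redundancy.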

hosts/boron.cx.ts.hillion.co.uk/default.nix

@@ -0,0 +1,181 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
  ];

  config = {
    system.stateVersion = "23.11";

    networking.hostName = "boron";
    networking.domain = "cx.ts.hillion.co.uk";

    boot.loader.systemd-boot.enable = true;
    boot.loader.efi.canTouchEfiVariables = true;

    boot.kernelParams = [ "ip=dhcp" ];
    boot.initrd = {
      availableKernelModules = [ "igb" ];
      network.enable = true;
      clevis = {
        enable = true;
        useTang = true;
        devices = {
          "disk0-crypt".secretFile = "/data/disk_encryption.jwe";
          "disk1-crypt".secretFile = "/data/disk_encryption.jwe";
        };
      };
    };

    custom.defaults = true;

    ## Kernel
    ### Explicitly use the latest kernel at time of writing because the LTS
    ### kernels available in NixOS do not seem to support this server's very
    ### modern hardware.
    ### custom.sched_ext.enable implies >=6.12, if this is removed the kernel may need to be pinned again. >=6.10 seems good.
    custom.sched_ext.enable = true;

    ## Enable btrfs compression
    fileSystems."/data".options = [ "compress=zstd" ];
    fileSystems."/nix".options = [ "compress=zstd" ];

    ## Impermanence
    custom.impermanence = {
      enable = true;
      cache.enable = true;
      userExtraFiles.jake = [
        ".ssh/id_ecdsa"
        ".ssh/id_rsa"
      ];
    };
    boot.initrd.postDeviceCommands = lib.mkAfter ''
      btrfs subvolume delete /cache/system
      btrfs subvolume snapshot /cache/empty_snapshot /cache/system
    '';

    ## Custom Services
    custom = {
      locations.autoServe = true;
      www.global.enable = true;
      services = {
        gitea.actions = {
          enable = true;
          tokenSecret = ../../secrets/gitea/actions/boron.age;
        };
      };
    };

    services.nsd.interfaces = [
      "138.201.252.214"
      "2a01:4f8:173:23d2::2"
    ];

    ## Enable ZRAM to help with root on tmpfs
    zramSwap = {
      enable = true;
      memoryPercent = 200;
      algorithm = "zstd";
    };

    ## Filesystems
    services.btrfs.autoScrub = {
      enable = true;
      interval = "Tue, 02:00";
      # By default both /data and /nix would be scrubbed. They are the same filesystem so this is wasteful.
      fileSystems = [ "/data" ];
    };

    ## Resilio
    custom.resilio = {
      enable = true;
      folders =
        let
          folderNames = [
            "dad"
            "joseph"
            "projects"
            "resources"
            "sync"
          ];
          mkFolder = name: {
            name = name;
            secret = {
              name = "resilio/plain/${name}";
              file = ../../secrets/resilio/plain/${name}.age;
            };
          };
        in
        builtins.map (mkFolder) folderNames;
    };
    services.resilio.directoryRoot = "/data/sync";

    ## General usability
    ### Make podman available for dev tools such as act
    virtualisation = {
      containers.enable = true;
      podman = {
        enable = true;
        dockerCompat = true;
        dockerSocket.enable = true;
      };
    };
    users.users.jake.extraGroups = [ "podman" ];

    ## Networking
    boot.kernel.sysctl = {
      "net.ipv4.ip_forward" = true;
      "net.ipv6.conf.all.forwarding" = true;
    };
    networking = {
      useDHCP = false;
      interfaces = {
        enp6s0 = {
          name = "eth0";
          useDHCP = true;
          ipv6.addresses = [{
            address = "2a01:4f8:173:23d2::2";
            prefixLength = 64;
          }];
        };
      };
      defaultGateway6 = {
        address = "fe80::1";
        interface = "eth0";
      };
    };
    networking.firewall = {
      trustedInterfaces = [ "tailscale0" ];
      allowedTCPPorts = lib.mkForce [ ];
      allowedUDPPorts = lib.mkForce [ ];
      interfaces = {
        eth0 = {
          allowedTCPPorts = lib.mkForce [
            22 # SSH
            3022 # SSH (Gitea) - redirected to 22
            53 # DNS
            80 # HTTP 1-2
            443 # HTTPS 1-2
            8080 # Unifi (inform)
          ];
          allowedUDPPorts = lib.mkForce [
            53 # DNS
            443 # HTTP 3
            3478 # Unifi STUN
          ];
        };
      };
    };

    ## Tailscale
    age.secrets."tailscale/boron.cx.ts.hillion.co.uk".file = ../../secrets/tailscale/boron.cx.ts.hillion.co.uk.age;
    services.tailscale = {
      enable = true;
      authKeyFile = config.age.secrets."tailscale/boron.cx.ts.hillion.co.uk".path;
    };
  };
}

hosts/boron.cx.ts.hillion.co.uk/hardware-configuration.nix

@@ -0,0 +1,72 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [
      (modulesPath + "/installer/scan/not-detected.nix")
    ];

  boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-amd" ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    {
      device = "tmpfs";
      fsType = "tmpfs";
      options = [ "mode=0755" "size=100%" ];
    };

  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/ED9C-4ABC";
      fsType = "vfat";
      options = [ "fmask=0022" "dmask=0022" ];
    };

  fileSystems."/data" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=data" ];
    };

  fileSystems."/cache" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=cache" ];
    };

  fileSystems."/nix" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=nix" ];
    };

  boot.initrd.luks.devices."disk0-crypt" = {
    device = "/dev/disk/by-uuid/a68ead16-1bdc-4d26-9e55-62c2be11ceee";
    allowDiscards = true;
  };
  boot.initrd.luks.devices."disk1-crypt" = {
    device = "/dev/disk/by-uuid/19bde205-bee4-430d-a4c1-52d635a23963";
    allowDiscards = true;
  };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.enp6s0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

hosts/gendry.jakehillion-terminals.ts.hillion.co.uk/README.md

@@ -0,0 +1,7 @@
# gendry.jakehillion-terminals.ts.hillion.co.uk

Additional installation step for Clevis/Tang:

    $ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/gendry.jakehillion-terminals.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
    $ sudo chown root:root /mnt/data/disk_encryption.jwe
    $ sudo chmod 0400 /mnt/data/disk_encryption.jwe

hosts/gendry.jakehillion-terminals.ts.hillion.co.uk/clevis_config.json

@@ -0,0 +1,14 @@
{
  "t": 1,
  "pins": {
    "tang": [
      {
        "url": "http://10.64.50.21:7654"
      },
      {
        "url": "http://10.64.50.25:7654"
      }
    ]
  }
}

hosts/gendry.jakehillion-terminals.ts.hillion.co.uk/default.nix

@@ -2,8 +2,6 @@
 {
   imports = [
-    ../../modules/common/default.nix
-    ../../modules/spotify/default.nix
     ./bluetooth.nix
     ./hardware-configuration.nix
   ];
@@ -17,6 +15,24 @@
     boot.loader.systemd-boot.enable = true;
     boot.loader.efi.canTouchEfiVariables = true;
 
+    boot.kernelParams = [
+      "ip=dhcp"
+    ];
+    boot.initrd = {
+      availableKernelModules = [ "r8169" ];
+      network.enable = true;
+      clevis = {
+        enable = true;
+        useTang = true;
+        devices."root".secretFile = "/data/disk_encryption.jwe";
+      };
+    };
+
+    custom.defaults = true;
+
+    ## Custom scheduler
+    custom.sched_ext.enable = true;
+
     ## Impermanence
     custom.impermanence = {
       enable = true;
@@ -29,6 +45,13 @@
       ];
     };
 
+    ## Enable ZRAM swap to help with root on tmpfs
+    zramSwap = {
+      enable = true;
+      memoryPercent = 200;
+      algorithm = "zstd";
+    };
+
     ## Desktop
     custom.users.jake.password = true;
     custom.desktop.awesome.enable = true;
@@ -36,9 +59,7 @@
     ## Resilio
     custom.resilio.enable = true;
 
-    services.resilio.deviceName = "gendry.jakehillion-terminals";
     services.resilio.directoryRoot = "/data/sync";
-    services.resilio.storagePath = "/data/sync/.sync";
 
     custom.resilio.folders =
       let
@@ -61,9 +82,9 @@
     ## Tailscale
     age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".file = ../../secrets/tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk.age;
-    custom.tailscale = {
+    services.tailscale = {
       enable = true;
-      preAuthKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
+      authKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
     };
 
     security.sudo.wheelNeedsPassword = lib.mkForce true;
@@ -76,19 +97,13 @@
     boot.initrd.kernelModules = [ "amdgpu" ];
     services.xserver.videoDrivers = [ "amdgpu" ];
 
-    ## Spotify
-    home-manager.users.jake.services.spotifyd.settings = {
-      global = {
-        device_name = "Gendry";
-        device_type = "computer";
-        bitrate = 320;
-      };
-    };
-
     users.users."${config.custom.user}" = {
       packages = with pkgs; [
         prismlauncher
       ];
     };
+
+    ## Networking
+    networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
   };
 }

View File

@@ -28,7 +28,10 @@
       options = [ "subvol=nix" ];
     };
-  boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
+  boot.initrd.luks.devices."root" = {
+    device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
+    allowDiscards = true;
+  };
   fileSystems."/data" =
     {

View File

@ -1,79 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "23.05";
networking.hostName = "jorah";
networking.domain = "cx.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
## Impermanence
custom.impermanence.enable = true;
## Custom Services
custom = {
locations.autoServe = true;
services.version_tracker.enable = true;
www.global.enable = true;
};
## Filesystems
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# By default both /data and /nix would be scrubbed. They are the same filesystem so this is wasteful.
fileSystems = [ "/data" ];
};
## Networking
systemd.network = {
enable = true;
networks."enp5s0".extraConfig = ''
[Match]
Name = enp5s0
[Network]
Address = 2a01:4f9:4b:3953::2/64
Gateway = fe80::1
'';
};
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
3022 # Gitea SSH (accessed via public 22)
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
enp5s0 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
8080 # Unifi (inform)
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
3478 # Unifi STUN
];
};
};
};
## Tailscale
age.secrets."tailscale/jorah.cx.ts.hillion.co.uk".file = ../../secrets/tailscale/jorah.cx.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/jorah.cx.ts.hillion.co.uk".path;
ipv4Addr = "100.96.143.138";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6260:8f8a";
};
};
}

View File

@ -0,0 +1,50 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
../../modules/rpi/rpi4.nix
];
config = {
system.stateVersion = "23.11";
networking.hostName = "li";
networking.domain = "pop.ts.hillion.co.uk";
custom.defaults = true;
## Custom Services
custom.locations.autoServe = true;
# Networking
## Tailscale
age.secrets."tailscale/li.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/li.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/li.pop.ts.hillion.co.uk".path;
useRoutingFeatures = "server";
extraUpFlags = [ "--advertise-routes" "192.168.1.0/24" ];
};
## Enable ZRAM to make up for 2GB of RAM
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Run a persistent iperf3 server
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
networking.firewall.interfaces = {
"end0" = {
allowedTCPPorts = [
7654 # Tang
];
};
};
};
}

View File

@ -0,0 +1,75 @@
{ config, pkgs, lib, ... }:
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "merlin";
networking.domain = "rig.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "igc" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
boot.kernelPackages = pkgs.linuxPackages_latest;
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
custom.users.jake.password = true;
security.sudo.wheelNeedsPassword = lib.mkForce true;
# Networking
networking = {
interfaces.enp171s0.name = "eth0";
interfaces.enp172s0.name = "eth1";
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [ ];
allowedUDPPorts = lib.mkForce [ ];
};
};
};
## Tailscale
age.secrets."tailscale/merlin.rig.ts.hillion.co.uk".file = ../../secrets/tailscale/merlin.rig.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/merlin.rig.ts.hillion.co.uk".path;
};
};
}

View File

@ -0,0 +1,70 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}
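The nodev tmpfs root plus the btrfs /data and /nix subvolumes is the impermanence layout used across these hosts: / is rebuilt empty on every boot. A sketch of how a disko layout like this is typically applied at install time (the exact invocation is an assumption, not a command recorded in this repo):

$ sudo nix run github:nix-community/disko -- --mode disko ./hosts/merlin.rig.ts.hillion.co.uk/disko.nix

Newer disko releases spell the mode as --mode destroy,format,mount.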

View File

@ -0,0 +1,28 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "xhci_pci" "thunderbolt" "nvme" "usbhid" "usb_storage" "sd_mod" "rtsx_pci_sdmmc" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp171s0.useDHCP = lib.mkDefault true;
# networking.interfaces.enp172s0.useDHCP = lib.mkDefault true;
# networking.interfaces.wlp173s0f0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

View File

@ -0,0 +1 @@
x86_64-linux

View File

@@ -3,7 +3,6 @@
 {
   imports = [
     ./hardware-configuration.nix
-    ../../modules/common/default.nix
     ../../modules/rpi/rpi4.nix
   ];
@@ -13,17 +12,17 @@
     networking.hostName = "microserver";
     networking.domain = "home.ts.hillion.co.uk";
+    custom.defaults = true;
     ## Custom Services
     custom.locations.autoServe = true;
     # Networking
     ## Tailscale
     age.secrets."tailscale/microserver.home.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.home.ts.hillion.co.uk.age;
-    custom.tailscale = {
+    services.tailscale = {
       enable = true;
-      preAuthKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
-      advertiseRoutes = [ "10.64.50.0/24" "10.239.19.0/24" ];
-      advertiseExitNode = true;
+      authKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
     };
     ## Enable IoT VLAN
@@ -38,22 +37,17 @@
       bluetooth.enable = true;
     };
-    ## Enable IP forwarding for Tailscale
-    boot.kernel.sysctl = {
-      "net.ipv4.ip_forward" = true;
-    };
     ## Run a persistent iperf3 server
     services.iperf3.enable = true;
     services.iperf3.openFirewall = true;
+    networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
     networking.firewall.interfaces = {
       "eth0" = {
         allowedUDPPorts = [
-          5353 # HomeKit
         ];
         allowedTCPPorts = [
-          21063 # HomeKit
+          7654 # Tang
         ];
       };
     };

View File

@ -1,42 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
../../modules/common/default.nix
../../modules/rpi/rpi4.nix
];
config = {
system.stateVersion = "22.05";
networking.hostName = "microserver";
networking.domain = "parents.ts.hillion.co.uk";
# Networking
## Tailscale
age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.parents.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".path;
advertiseRoutes = [ "192.168.1.0/24" ];
};
## Enable IP forwarding for Tailscale
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = true;
};
## Enable ZRAM to make up for 2GB of RAM
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Run a persistent iperf3 server
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
};
}

View File

@ -0,0 +1,7 @@
# phoenix.st.ts.hillion.co.uk
Additional installation step for Clevis/Tang:
$ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/phoenix.st.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
$ sudo chown root:root /mnt/data/disk_encryption.jwe
$ sudo chmod 0400 /mnt/data/disk_encryption.jwe

View File

@ -0,0 +1,14 @@
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://10.64.50.21:7654"
},
{
"url": "http://10.64.50.25:7654"
}
]
}
}

View File

@ -0,0 +1,161 @@
{ config, pkgs, lib, ... }:
let
zpool_name = "practical-defiant-coffee";
in
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "phoenix";
networking.domain = "st.ts.hillion.co.uk";
networking.hostId = "4d7241e9";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
"zfs.zfs_arc_max=34359738368"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "igc" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
"disk1-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
custom.users.jake.password = true; # TODO: remove me once booting has stabilised
## Filesystems
boot.supportedFilesystems = [ "zfs" ];
boot.zfs = {
forceImportRoot = false;
extraPools = [ zpool_name ];
};
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# All filesystems includes the BTRFS parts of all the hard drives. This
# would take forever and is redundant as they get fully read regularly.
fileSystems = [ "/data" ];
};
services.zfs.autoScrub = {
enable = true;
interval = "Wed, 02:00";
};
## Resilio
custom.resilio = {
enable = true;
backups.enable = true;
folders =
let
folderNames = [
"dad"
"joseph"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/plain/${name}";
file = ../../secrets/resilio/plain/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
};
services.resilio.directoryRoot = "/${zpool_name}/sync";
## Chia
age.secrets."chia/farmer.key" = {
file = ../../secrets/chia/farmer.key.age;
owner = "chia";
group = "chia";
};
custom.chia = {
enable = true;
keyFile = config.age.secrets."chia/farmer.key".path;
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 8;
};
## Restic
custom.services.restic.path = "/${zpool_name}/backups/restic";
## Backups
### Git
custom.backups.git = {
enable = true;
extraRepos = [ "https://gitea.hillion.co.uk/JakeHillion/nixos.git" ];
};
## Downloads
custom.services.downloads = {
metadataPath = "/${zpool_name}/downloads/metadata";
downloadCachePath = "/${zpool_name}/downloads/torrents";
filmsPath = "/${zpool_name}/media/films";
tvPath = "/${zpool_name}/media/tv";
};
## Plex
users.users.plex.extraGroups = [ "mediaaccess" ];
services.plex.enable = true;
## Networking
networking = {
interfaces.enp4s0.name = "eth0";
interfaces.enp5s0.name = "eth1";
interfaces.enp6s0.name = "eth2";
interfaces.enp8s0.name = "eth3";
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
32400 # Plex
];
allowedUDPPorts = lib.mkForce [ ];
};
};
};
## Tailscale
age.secrets."tailscale/phoenix.st.ts.hillion.co.uk".file = ../../secrets/tailscale/phoenix.st.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/phoenix.st.ts.hillion.co.uk".path;
};
};
}
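The builtins.genList call above expands to eight plot directories, /mnt/d0/plots/contract-k32 through /mnt/d7/plots/contract-k32, matching the /mnt/d0 to /mnt/d7 btrfs mounts in the hardware configuration later in this compare.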

View File

@ -0,0 +1,103 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
disk1 = {
type = "disk";
device = "/dev/nvme1n1";
content = {
type = "gpt";
partitions = {
disk1-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk1-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
extraArgs = [
"-d raid1"
"/dev/mapper/disk0-crypt"
];
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}

View File

@@ -9,23 +9,11 @@
     (modulesPath + "/installer/scan/not-detected.nix")
   ];
-  boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "usbhid" "usb_storage" "sd_mod" ];
+  boot.initrd.availableKernelModules = [ "nvme" "ahci" "xhci_pci" "thunderbolt" "usbhid" "usb_storage" "sd_mod" ];
   boot.initrd.kernelModules = [ ];
   boot.kernelModules = [ "kvm-amd" ];
   boot.extraModulePackages = [ ];
-  fileSystems."/" =
-    {
-      device = "/dev/disk/by-uuid/cb48d4ed-d268-490c-9977-2b5d31ce2c1b";
-      fsType = "btrfs";
-    };
-  fileSystems."/boot" =
-    {
-      device = "/dev/disk/by-uuid/BC57-0AF6";
-      fsType = "vfat";
-    };
   fileSystems."/mnt/d0" =
     {
       device = "/dev/disk/by-uuid/9136434d-d883-4118-bd01-903f720e5ce1";
@@ -62,14 +50,28 @@
       fsType = "btrfs";
     };
-  swapDevices = [ ];
+  fileSystems."/mnt/d6" =
+    {
+      device = "/dev/disk/by-uuid/b461e07d-39ab-46b4-b1d1-14c2e0791915";
+      fsType = "btrfs";
+    };
+  fileSystems."/mnt/d7" =
+    {
+      device = "/dev/disk/by-uuid/eb8d32d0-e506-449b-8dbc-585ba05c4252";
+      fsType = "btrfs";
+    };
   # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
   # (the default) this is the recommended approach. When using systemd-networkd it's
   # still possible to use this option, but it's recommended to use it in conjunction
   # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
   networking.useDHCP = lib.mkDefault true;
-  # networking.interfaces.enp7s0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp4s0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp5s0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp6s0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp8s0.useDHCP = lib.mkDefault true;
+  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
   hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
 }

View File

@ -0,0 +1 @@
x86_64-linux

View File

@@ -2,7 +2,6 @@
 {
   imports = [
-    ../../modules/common/default.nix
     ./hardware-configuration.nix
   ];
@@ -19,6 +18,8 @@
       "net.ipv4.conf.all.forwarding" = true;
     };
+    custom.defaults = true;
     ## Interactive password
     custom.users.jake.password = true;
@@ -31,6 +32,14 @@
       nat.enable = lib.mkForce false;
       useDHCP = false;
+      vlans = {
+        cameras = {
+          id = 3;
+          interface = "eth2";
+        };
+      };
       interfaces = {
         enp1s0 = {
           name = "eth0";
@@ -55,6 +64,14 @@
           }
         ];
       };
+      cameras /* cameras@eth2 */ = {
+        ipv4.addresses = [
+          {
+            address = "10.133.145.1";
+            prefixLength = 24;
+          }
+        ];
+      };
       enp4s0 = { name = "eth3"; };
       enp5s0 = { name = "eth4"; };
       enp6s0 = { name = "eth5"; };
@@ -81,8 +98,10 @@
         ip protocol icmp counter accept comment "accept all ICMP types"
-        iifname "eth0" ct state { established, related } counter accept
-        iifname "eth0" drop
+        iifname "eth0" tcp dport 22 counter accept comment "SSH"
+        iifname { "eth0", "cameras" } ct state { established, related } counter accept
+        iifname { "eth0", "cameras" } drop
       }
       chain forward {
@@ -91,6 +110,7 @@
         iifname {
           "eth1",
           "eth2",
+          "tailscale0",
         } oifname {
           "eth0",
         } counter accept comment "Allow trusted LAN to WAN"
@@ -100,19 +120,14 @@
         } oifname {
           "eth1",
           "eth2",
-        } ct state established,related counter accept comment "Allow established back to LANs"
+          "tailscale0",
+        } ct state { established,related } counter accept comment "Allow established back to LANs"
+        iifname "tailscale0" oifname { "eth1", "eth2" } counter accept comment "Allow LAN access from Tailscale"
+        iifname { "eth1", "eth2" } oifname "tailscale0" ct state { established,related } counter accept comment "Allow established back to Tailscale"
-        ip daddr 10.64.50.20 tcp dport 32400 counter accept comment "Plex"
-        ip daddr 10.64.50.20 tcp dport 8444 counter accept comment "Chia"
-        ip daddr 10.64.50.20 tcp dport 28967 counter accept comment "zfs.tywin.storj"
-        ip daddr 10.64.50.20 udp dport 28967 counter accept comment "zfs.tywin.storj"
-        ip daddr 10.64.50.20 tcp dport 28968 counter accept comment "d0.tywin.storj"
-        ip daddr 10.64.50.20 udp dport 28968 counter accept comment "d0.tywin.storj"
-        ip daddr 10.64.50.20 tcp dport 28969 counter accept comment "d1.tywin.storj"
-        ip daddr 10.64.50.20 udp dport 28969 counter accept comment "d1.tywin.storj"
-        ip daddr 10.64.50.20 tcp dport 28970 counter accept comment "d2.tywin.storj"
-        ip daddr 10.64.50.20 udp dport 28970 counter accept comment "d2.tywin.storj"
+        ip daddr 10.64.50.27 tcp dport 32400 counter accept comment "Plex"
+        ip daddr 10.64.50.21 tcp dport 7654 counter accept comment "Tang"
       }
     }
@@ -120,22 +135,17 @@
     chain prerouting {
       type nat hook prerouting priority filter; policy accept;
-      iifname eth0 tcp dport 32400 counter dnat to 10.64.50.20
-      iifname eth0 tcp dport 8444 counter dnat to 10.64.50.20
-      iifname eth0 tcp dport 28967 counter dnat to 10.64.50.20
-      iifname eth0 udp dport 28967 counter dnat to 10.64.50.20
-      iifname eth0 tcp dport 28968 counter dnat to 10.64.50.20
-      iifname eth0 udp dport 28968 counter dnat to 10.64.50.20
-      iifname eth0 tcp dport 28969 counter dnat to 10.64.50.20
-      iifname eth0 udp dport 28969 counter dnat to 10.64.50.20
-      iifname eth0 tcp dport 28970 counter dnat to 10.64.50.20
-      iifname eth0 udp dport 28970 counter dnat to 10.64.50.20
+      iifname eth0 tcp dport 32400 counter dnat to 10.64.50.27
+      iifname eth0 tcp dport 7654 counter dnat to 10.64.50.21
     }
     chain postrouting {
       type nat hook postrouting priority filter; policy accept;
       oifname "eth0" masquerade
+      iifname tailscale0 oifname eth1 snat to 10.64.50.1
+      iifname tailscale0 oifname eth2 snat to 10.239.19.1
     }
   }
 '';
@@ -149,12 +159,42 @@
       settings = {
         interfaces-config = {
-          interfaces = [ "eth1" "eth2" ];
+          interfaces = [ "eth1" "eth2" "cameras" ];
         };
         lease-database = {
           type = "memfile";
-          persist = false;
+          persist = true;
+          name = "/var/lib/kea/dhcp4.leases";
         };
+        option-def = [
+          {
+            name = "cookie";
+            space = "vendor-encapsulated-options-space";
+            code = 1;
+            type = "string";
+            array = false;
+          }
+        ];
+        client-classes = [
+          {
+            name = "APC";
+            test = "option[vendor-class-identifier].text == 'APC'";
+            option-data = [
+              {
+                always-send = true;
+                name = "vendor-encapsulated-options";
+              }
+              {
+                name = "cookie";
+                space = "vendor-encapsulated-options-space";
+                code = 1;
+                data = "1APC";
+              }
+            ];
+          }
+        ];
         subnet4 = [
           {
             subnet = "10.64.50.0/24";
@@ -173,23 +213,25 @@
             }
             {
               name = "domain-name-servers";
-              data = "1.1.1.1, 8.8.8.8";
+              data = "10.64.50.1, 1.1.1.1, 8.8.8.8";
             }
           ];
-          reservations = [
-            {
-              # tywin.storage.ts.hillion.co.uk
-              hw-address = "c8:7f:54:6d:e1:03";
-              ip-address = "10.64.50.20";
-              hostname = "tywin";
-            }
-            {
-              # syncbox
-              hw-address = "00:1e:06:49:06:1e";
-              ip-address = "10.64.50.22";
-              hostname = "syncbox";
-            }
-          ];
+          reservations = lib.lists.remove null (lib.lists.imap0
+            (i: el: if el == null then null else {
+              ip-address = "10.64.50.${toString (20 + i)}";
+              inherit (el) hw-address hostname;
+            }) [
+            null
+            { hostname = "microserver"; hw-address = "e4:5f:01:b4:58:95"; }
+            { hostname = "theon"; hw-address = "00:1e:06:49:06:1e"; }
+            { hostname = "server-switch"; hw-address = "84:d8:1b:9d:0d:85"; }
+            { hostname = "apc-ap7921"; hw-address = "00:c0:b7:6b:f4:34"; }
+            { hostname = "sodium"; hw-address = "d8:3a:dd:c3:d6:2b"; }
+            { hostname = "gendry"; hw-address = "18:c0:4d:35:60:1e"; }
+            { hostname = "phoenix"; hw-address = "a8:b8:e0:04:17:a5"; }
+            { hostname = "merlin"; hw-address = "b0:41:6f:13:20:14"; }
+            { hostname = "stinger"; hw-address = "7c:83:34:be:30:dd"; }
+          ]);
         }
         {
           subnet = "10.239.19.0/24";
@@ -208,37 +250,113 @@
           }
           {
             name = "domain-name-servers";
-            data = "1.1.1.1, 8.8.8.8";
+            data = "10.239.19.1, 1.1.1.1, 8.8.8.8";
           }
         ];
         reservations = [
           {
-            # bedroom-everything-presence-one
             hw-address = "40:22:d8:e0:1d:50";
             ip-address = "10.239.19.2";
             hostname = "bedroom-everything-presence-one";
           }
           {
-            # living-room-everything-presence-one
             hw-address = "40:22:d8:e0:0f:78";
             ip-address = "10.239.19.3";
             hostname = "living-room-everything-presence-one";
           }
+          {
+            hw-address = "a0:7d:9c:b0:f0:14";
+            ip-address = "10.239.19.4";
+            hostname = "hallway-wall-tablet";
+          }
+          {
+            hw-address = "d8:3a:dd:c3:d6:2b";
+            ip-address = "10.239.19.5";
+            hostname = "sodium";
+          }
+          {
+            hw-address = "48:da:35:6f:f2:4b";
+            ip-address = "10.239.19.6";
+            hostname = "hammer";
+          }
+          {
+            hw-address = "48:da:35:6f:83:b8";
+            ip-address = "10.239.19.7";
+            hostname = "charlie";
+          }
         ];
       }
+      {
+        subnet = "10.133.145.0/24";
+        interface = "cameras";
+        pools = [{
+          pool = "10.133.145.64 - 10.133.145.254";
+        }];
+        option-data = [
+          {
+            name = "routers";
+            data = "10.133.145.1";
+          }
+          {
+            name = "broadcast-address";
+            data = "10.133.145.255";
+          }
+          {
+            name = "domain-name-servers";
+            data = "1.1.1.1, 8.8.8.8";
+          }
+        ];
+        reservations = [
+        ];
+      }
     ];
   };
 };
+    unbound = {
+      enable = true;
+      settings = {
+        server = {
+          interface = [
+            "127.0.0.1"
+            "10.64.50.1"
+            "10.239.19.1"
+          ];
+          access-control = [
+            "10.64.50.0/24 allow"
+            "10.239.19.0/24 allow"
+          ];
+        };
+        forward-zone = [
+          {
+            name = ".";
+            forward-tls-upstream = "yes";
+            forward-addr = [
+              "1.1.1.1#cloudflare-dns.com"
+              "1.0.0.1#cloudflare-dns.com"
+              "8.8.8.8#dns.google"
+              "8.8.4.4#dns.google"
+            ];
+          }
+        ];
+      };
+    };
   };
   ## Tailscale
   age.secrets."tailscale/router.home.ts.hillion.co.uk".file = ../../secrets/tailscale/router.home.ts.hillion.co.uk.age;
-  custom.tailscale = {
+  services.tailscale = {
     enable = true;
-    preAuthKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
-    ipv4Addr = "100.105.71.48";
-    ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
+    authKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
+    useRoutingFeatures = "server";
+    extraSetFlags = [
+      "--advertise-routes"
+      "10.64.50.0/24,10.239.19.0/24,10.133.145.0/24"
+      "--advertise-exit-node"
+      "--netfilter-mode=off"
+    ];
   };
   ## Enable btrfs compression
@@ -262,9 +380,34 @@
   };
   services.caddy = {
     enable = true;
-    virtualHosts."http://graphs.router.home.ts.hillion.co.uk" = {
-      listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
-      extraConfig = "reverse_proxy unix///run/netdata/netdata.sock";
-    };
+    virtualHosts = {
+      "graphs.router.home.ts.hillion.co.uk" = {
+        listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
+        extraConfig = ''
+          tls {
+            ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
+          }
+          reverse_proxy unix///run/netdata/netdata.sock
+        '';
+      };
+      "hammer.kvm.ts.hillion.co.uk" = {
+        listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
+        extraConfig = ''
+          tls {
+            ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
+          }
+          reverse_proxy http://10.239.19.6
+        '';
+      };
+      "charlie.kvm.ts.hillion.co.uk" = {
+        listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
+        extraConfig = ''
+          tls {
+            ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
+          }
+          reverse_proxy http://10.239.19.7
+        '';
+      };
+    };
   };
   users.users.caddy.extraGroups = [ "netdata" ];
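The new reservations helper in the kea config derives each address from its position in the list, so ordering is load-bearing: index 0 is null, leaving 10.64.50.20 (the retired tywin address) unallocated, and the first real entry expands to roughly

{ ip-address = "10.64.50.21"; hw-address = "e4:5f:01:b4:58:95"; hostname = "microserver"; }

which lines up with the Tang forwarding rules above pointing at 10.64.50.21.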

View File

@ -0,0 +1,103 @@
{ config, pkgs, lib, nixos-hardware, ... }:
{
imports = [
"${nixos-hardware}/raspberry-pi/5/default.nix"
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "sodium";
networking.domain = "pop.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.defaults = true;
## Enable btrfs compression
fileSystems."/data".options = [ "compress=zstd" ];
fileSystems."/nix".options = [ "compress=zstd" ];
## Impermanence
custom.impermanence = {
enable = true;
cache.enable = true;
};
boot.initrd.postDeviceCommands = lib.mkAfter ''
btrfs subvolume delete /cache/tmp
btrfs subvolume snapshot /cache/empty_snapshot /cache/tmp
chmod 1777 /cache/tmp
'';
## CA server
custom.ca.service.enable = true;
### nix only supports build-dir from 2.22. bind mount /tmp to something persistent instead.
fileSystems."/tmp" = {
device = "/cache/tmp";
options = [ "bind" ];
};
# nix = {
# settings = {
# build-dir = "/cache/tmp/";
# };
# };
## Custom Services
custom.locations.autoServe = true;
custom.www.home.enable = true;
custom.www.iot.enable = true;
custom.services.isponsorblocktv.enable = true;
# Networking
networking = {
interfaces.end0.name = "eth0";
vlans = {
iot = {
id = 2;
interface = "eth0";
};
};
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
7654 # Tang
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
iot = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
};
};
## Tailscale
age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/sodium.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".path;
};
};
}

View File

@@ -9,9 +9,9 @@
     (modulesPath + "/installer/scan/not-detected.nix")
   ];
-  boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" "usbhid" "usb_storage" "sr_mod" ];
+  boot.initrd.availableKernelModules = [ "usbhid" "usb_storage" ];
   boot.initrd.kernelModules = [ ];
-  boot.kernelModules = [ "kvm-amd" ];
+  boot.kernelModules = [ ];
   boot.extraModulePackages = [ ];
   fileSystems."/" =
@@ -21,24 +21,32 @@
       options = [ "mode=0755" ];
     };
+  fileSystems."/boot" =
+    {
+      device = "/dev/disk/by-uuid/417B-1063";
+      fsType = "vfat";
+      options = [ "fmask=0022" "dmask=0022" ];
+    };
   fileSystems."/nix" =
     {
-      device = "/dev/disk/by-id/nvme-KXG60ZNV512G_TOSHIBA_106S10VHT9LM_1-part2";
+      device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
       fsType = "btrfs";
       options = [ "subvol=nix" ];
     };
   fileSystems."/data" =
     {
-      device = "/dev/disk/by-id/nvme-KXG60ZNV512G_TOSHIBA_106S10VHT9LM_1-part2";
+      device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
       fsType = "btrfs";
       options = [ "subvol=data" ];
     };
-  fileSystems."/boot" =
-    {
-      device = "/dev/disk/by-uuid/4D7E-8DE8";
-      fsType = "vfat";
-    };
+  fileSystems."/cache" =
+    {
+      device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
+      fsType = "btrfs";
+      options = [ "subvol=cache" ];
+    };
   swapDevices = [ ];
@@ -48,8 +56,8 @@
   # still possible to use this option, but it's recommended to use it in conjunction
   # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
   networking.useDHCP = lib.mkDefault true;
-  # networking.interfaces.enp5s0.useDHCP = lib.mkDefault true;
-  # networking.interfaces.wlan0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enu1u4.useDHCP = lib.mkDefault true;
-  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
-  hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+  nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
 }

View File

@ -0,0 +1 @@
aarch64-linux

View File

@ -0,0 +1,84 @@
{ config, pkgs, lib, ... }:
{
imports = [
./disko.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "stinger";
networking.domain = "pop.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
# zswap
"zswap.enabled=1"
"zswap.compressor=zstd"
"zswap.max_pool_percent=20"
];
boot.initrd = {
availableKernelModules = [ "r8169" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices = {
"disk0-crypt".secretFile = "/data/disk_encryption.jwe";
};
};
};
custom.defaults = true;
custom.locations.autoServe = true;
custom.impermanence.enable = true;
hardware = {
bluetooth.enable = true;
};
# Networking
networking = {
interfaces.enp1s0.name = "eth0";
vlans = {
iot = {
id = 2;
interface = "eth0";
};
};
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
1400 # HA Sonos
21063 # HomeKit
];
allowedUDPPorts = lib.mkForce [
5353 # HomeKit
];
};
};
};
## Tailscale
age.secrets."tailscale/stinger.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/stinger.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/stinger.pop.ts.hillion.co.uk".path;
};
};
}

View File

@ -0,0 +1,70 @@
{
disko.devices = {
disk = {
disk0 = {
type = "disk";
device = "/dev/nvme0n1";
content = {
type = "gpt";
partitions = {
ESP = {
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
disk0-crypt = {
size = "100%";
content = {
type = "luks";
name = "disk0-crypt";
settings = {
allowDiscards = true;
};
content = {
type = "btrfs";
subvolumes = {
"/data" = {
mountpoint = "/data";
mountOptions = [ "compress=zstd" "ssd" ];
};
"/nix" = {
mountpoint = "/nix";
mountOptions = [ "compress=zstd" "ssd" ];
};
};
};
};
};
swap = {
size = "64G";
content = {
type = "swap";
randomEncryption = true;
discardPolicy = "both";
};
};
};
};
};
};
nodev = {
"/" = {
fsType = "tmpfs";
mountOptions = [
"mode=755"
"size=100%"
];
};
};
};
}

View File

@ -0,0 +1,28 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enp0s20f0u2.useDHCP = lib.mkDefault true;
# networking.interfaces.enp1s0.useDHCP = lib.mkDefault true;
# networking.interfaces.wlo1.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}

View File

@ -0,0 +1 @@
x86_64-linux

View File

@@ -2,7 +2,6 @@
 {
   imports = [
-    ../../modules/common/default.nix
     ./hardware-configuration.nix
   ];
@@ -15,14 +14,18 @@
   boot.loader.grub.enable = false;
   boot.loader.generic-extlinux-compatible.enable = true;
+  custom.defaults = true;
   ## Custom Services
   custom = {
     locations.autoServe = true;
   };
   ## Networking
+  networking.useNetworkd = true;
   systemd.network.enable = true;
+  networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
   networking.firewall = {
     trustedInterfaces = [ "tailscale0" ];
     allowedTCPPorts = lib.mkForce [
@@ -39,11 +42,9 @@
   ## Tailscale
   age.secrets."tailscale/theon.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/theon.storage.ts.hillion.co.uk.age;
-  custom.tailscale = {
+  services.tailscale = {
     enable = true;
-    preAuthKeyFile = config.age.secrets."tailscale/theon.storage.ts.hillion.co.uk".path;
-    ipv4Addr = "100.104.142.22";
-    ipv6Addr = "fd7a:115c:a1e0::4aa8:8e16";
+    authKeyFile = config.age.secrets."tailscale/theon.storage.ts.hillion.co.uk".path;
   };
   ## Packages

View File

@ -1,223 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "22.11";
networking.hostName = "tywin";
networking.domain = "storage.ts.hillion.co.uk";
networking.hostId = "2a9b6df5";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.locations.autoServe = true;
## Tailscale
age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/tywin.storage.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".path;
ipv4Addr = "100.115.31.91";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6273:1f5b";
};
## Filesystems
fileSystems."/".options = [ "compress=zstd" ];
boot.supportedFilesystems = [ "zfs" ];
boot.zfs = {
forceImportRoot = false;
extraPools = [ "data" ];
};
boot.kernelParams = [ "zfs.zfs_arc_max=25769803776" ];
services.zfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
};
## Backups
### Git
age.secrets."git/git_backups_ecdsa".file = ../../secrets/git/git_backups_ecdsa.age;
age.secrets."git/git_backups_remotes".file = ../../secrets/git/git_backups_remotes.age;
custom.backups.git = {
enable = true;
sshKey = config.age.secrets."git/git_backups_ecdsa".path;
reposFile = config.age.secrets."git/git_backups_remotes".path;
repos = [ "https://gitea.hillion.co.uk/JakeHillion/nixos.git" ];
};
## Resilio
custom.resilio.enable = true;
services.resilio.deviceName = "tywin.storage";
services.resilio.directoryRoot = "/data/users/jake/sync";
services.resilio.storagePath = "/data/users/jake/sync/.sync";
custom.resilio.folders =
let
folderNames = [
"dad"
"joseph"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/plain/${name}";
file = ../../secrets/resilio/plain/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
age.secrets."resilio/restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "rslsync";
group = "rslsync";
};
services.restic.backups."sync" = {
repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
user = "rslsync";
passwordFile = config.age.secrets."resilio/restic/128G.key".path;
timerConfig = {
Persistent = true;
OnUnitInactiveSec = "15m";
RandomizedDelaySec = "5m";
};
paths = [ "/data/users/jake/sync" ];
exclude = [
"/data/users/jake/sync/.sync"
"/data/users/jake/sync/*/.sync"
"/data/users/jake/sync/resources/media/films"
"/data/users/jake/sync/resources/media/iso"
"/data/users/jake/sync/resources/media/tv"
"/data/users/jake/sync/dad/media"
];
};
## Restic
age.secrets."restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "restic";
group = "restic";
};
age.secrets."restic/1.6T.key" = {
file = ../../secrets/restic/1.6T.age;
owner = "restic";
group = "restic";
};
services.restic.server = {
enable = true;
appendOnly = true;
extraFlags = [ "--no-auth" ];
dataDir = "/data/backups/restic";
listenAddress = "127.0.0.1:8000"; # TODO: can this be a Unix socket?
};
services.caddy = {
enable = true;
virtualHosts."http://restic.tywin.storage.ts.hillion.co.uk".extraConfig = ''
bind ${config.custom.tailscale.ipv4Addr} ${config.custom.tailscale.ipv6Addr}
reverse_proxy http://localhost:8000
'';
};
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### is too late at starting. Upstream nixos caddy does restart on failure
### but it's prevented on exit code 1. Set the exit code to 0 (non-failure)
### to override this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
services.restic.backups."prune-128G" = {
repository = "/data/backups/restic/128G";
user = "restic";
passwordFile = config.age.secrets."restic/128G.key".path;
timerConfig = {
Persistent = true;
OnCalendar = "02:30";
RandomizedDelaySec = "1h";
};
pruneOpts = [
"--keep-last 48"
"--keep-within-hourly 7d"
"--keep-within-daily 1m"
"--keep-within-weekly 6m"
"--keep-within-monthly 24m"
];
};
services.restic.backups."prune-1.6T" = {
repository = "/data/backups/restic/1.6T";
user = "restic";
passwordFile = config.age.secrets."restic/1.6T.key".path;
timerConfig = {
Persistent = true;
OnCalendar = "Wed, 02:30";
RandomizedDelaySec = "4h";
};
pruneOpts = [
"--keep-within-daily 14d"
"--keep-within-weekly 2m"
"--keep-within-monthly 18m"
];
};
## Chia
age.secrets."chia/farmer.key" = {
file = ../../secrets/chia/farmer.key.age;
owner = "chia";
group = "chia";
};
custom.chia = {
enable = true;
openFirewall = true;
keyFile = config.age.secrets."chia/farmer.key".path;
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 7;
};
## Downloads
custom.services.downloads = {
metadataPath = "/data/downloads/metadata";
downloadCachePath = "/data/downloads/torrents";
filmsPath = "/data/media/films";
tvPath = "/data/media/tv";
};
## Plex
users.users.plex.extraGroups = [ "mediaaccess" ];
services.plex = {
enable = true;
openFirewall = true;
};
## Firewall
networking.firewall.interfaces."tailscale0".allowedTCPPorts = [
80 # Caddy (restic.tywin.storage.ts.)
14002 # Storj Dashboard (d0.)
14003 # Storj Dashboard (d1.)
14004 # Storj Dashboard (d2.)
14005 # Storj Dashboard (d3.)
];
};
}

View File

@@ -2,7 +2,7 @@
 {
   imports = [
-    ./git.nix
+    ./git/default.nix
     ./homeassistant.nix
     ./matrix.nix
   ];

View File

@@ -7,25 +7,17 @@ in
   options.custom.backups.git = {
     enable = lib.mkEnableOption "git";
-    repos = lib.mkOption {
+    extraRepos = lib.mkOption {
       description = "A list of remotes to clone.";
       type = with lib.types; listOf str;
       default = [ ];
     };
-    reposFile = lib.mkOption {
-      description = "A file containing the remotes to clone, one per line.";
-      type = with lib.types; nullOr str;
-      default = null;
-    };
-    sshKey = lib.mkOption {
-      description = "SSH private key to use when cloning repositories over SSH.";
-      type = with lib.types; nullOr str;
-      default = null;
-    };
   };
   config = lib.mkIf cfg.enable {
-    age.secrets."git-backups/restic/128G".file = ../../secrets/restic/128G.age;
+    age.secrets."git/git_backups_ecdsa".file = ../../../secrets/git/git_backups_ecdsa.age;
+    age.secrets."git/git_backups_remotes".file = ../../../secrets/git/git_backups_remotes.age;
+    age.secrets."git-backups/restic/128G".file = ../../../secrets/restic/128G.age;
     systemd.services.backup-git = {
       description = "Git repo backup service.";
@@ -37,9 +29,10 @@ in
       WorkingDirectory = "%C/backup-git";
       LoadCredential = [
+        "id_ecdsa:${config.age.secrets."git/git_backups_ecdsa".path}"
+        "repos_file:${config.age.secrets."git/git_backups_remotes".path}"
         "restic_password:${config.age.secrets."git-backups/restic/128G".path}"
-      ] ++ (if cfg.sshKey == null then [ ] else [ "id_ecdsa:${cfg.sshKey}" ])
-      ++ (if cfg.reposFile == null then [ ] else [ "repos_file:${cfg.reposFile}" ]);
+      ];
     };
     environment = {
@@ -48,11 +41,12 @@ in
     };
     script = ''
+      set -x
       shopt -s nullglob
       # Read and deduplicate repos
-      ${if cfg.reposFile == null then "" else "readarray -t raw_repos < $CREDENTIALS_DIRECTORY/repos_file"}
-      declare -A repos=(${builtins.concatStringsSep " " (builtins.map (x : "[${x}]=1") cfg.repos)})
+      readarray -t raw_repos < $CREDENTIALS_DIRECTORY/repos_file
+      declare -A repos=(${builtins.concatStringsSep " " (builtins.map (x : "[${x}]=1") cfg.extraRepos)})
       for repo in ''${raw_repos[@]}; do repos[$repo]=1; done
       # Clean up existing repos
@@ -79,7 +73,7 @@ in
       # Backup to Restic
       ${pkgs.restic}/bin/restic \
-        -r rest:http://restic.tywin.storage.ts.hillion.co.uk/128G \
+        -r rest:https://restic.ts.hillion.co.uk/128G \
         --cache-dir .restic --exclude .restic \
         backup .
@@ -93,9 +87,9 @@ in
     wantedBy = [ "timers.target" ];
     timerConfig = {
       Persistent = true;
+      OnBootSec = "10m";
       OnUnitInactiveSec = "15m";
       RandomizedDelaySec = "5m";
-      Unit = "backup-git.service";
     };
   };
};

View File

@ -0,0 +1 @@
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIc3WVROMCifYtqHRWf5gZAOQFdpbcSYOC0JckKzUVM5sGdXtw3VXNiVqY3npdMizS4e1V8Hh77UecD3q9CLkMA= backups-git@nixos

View File

@@ -14,19 +14,44 @@ in
     owner = "hass";
     group = "hass";
   };
+  age.secrets."backups/homeassistant/restic/1.6T" = {
+    file = ../../secrets/restic/1.6T.age;
+    owner = "postgres";
+    group = "postgres";
+  };
   services = {
-    restic.backups."homeassistant" = {
-      user = "hass";
-      timerConfig = {
-        OnCalendar = "03:00";
-        RandomizedDelaySec = "60m";
+    postgresqlBackup = {
+      enable = true;
+      compression = "none"; # for better diffing
+      databases = [ "homeassistant" ];
+    };
+    restic.backups = {
+      "homeassistant-config" = {
+        user = "hass";
+        timerConfig = {
+          OnCalendar = "03:00";
+          RandomizedDelaySec = "60m";
+        };
+        repository = "rest:https://restic.ts.hillion.co.uk/128G";
+        passwordFile = config.age.secrets."backups/homeassistant/restic/128G".path;
+        paths = [
+          config.services.home-assistant.configDir
+        ];
+      };
+      "homeassistant-database" = {
+        user = "postgres";
+        timerConfig = {
+          OnCalendar = "03:00";
+          RandomizedDelaySec = "60m";
+        };
+        repository = "rest:https://restic.ts.hillion.co.uk/1.6T";
+        passwordFile = config.age.secrets."backups/homeassistant/restic/1.6T".path;
+        paths = [
+          "${config.services.postgresqlBackup.location}/homeassistant.sql"
+        ];
       };
-      repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
-      passwordFile = config.age.secrets."backups/homeassistant/restic/128G".path;
-      paths = [
-        config.services.home-assistant.configDir
-      ];
     };
   };
 };
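Because the dump is uncompressed, restic can deduplicate unchanged pages between daily snapshots. Restoring the database is then a standard restic restore plus a psql import; a sketch (the paths assume the postgresqlBackup default location of /var/backup/postgresql):

$ restic -r rest:https://restic.ts.hillion.co.uk/1.6T restore latest --target /tmp/ha-restore
$ sudo -u postgres psql homeassistant </tmp/ha-restore/var/backup/postgresql/homeassistant.sql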

View File

@@ -24,7 +24,7 @@ in
     OnCalendar = "03:00";
     RandomizedDelaySec = "60m";
   };
-  repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
+  repository = "rest:https://restic.ts.hillion.co.uk/128G";
   passwordFile = config.age.secrets."backups/matrix/restic/128G".path;
   paths = [
     "${config.services.postgresqlBackup.location}/matrix-synapse.sql"

modules/ca/README.md Normal file
View File

@ -0,0 +1,11 @@
# ca
Getting the certificates in the right place is a manual process (for now, at least). This is to keep the most control over the root certificate's key and allow manual cycling. The manual commands should be run on a trusted machine.
Creating a 10 year root certificate:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME' cert.pem key.pem --kty=EC --curve=P-521 --profile=root-ca --not-after=87600h
Creating the intermediate key:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME (sodium.pop.ts.hillion.co.uk)' intermediate_cert.pem intermediate_key.pem --kty=EC --curve=P-521 --profile=intermediate-ca --not-after=8760h --ca=$NIXOS_ROOT/modules/ca/cert.pem --ca-key=DOWNLOADED_KEY.pem
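Hosts then trust the root via security.pki.certificates (consumer.nix below), and ACME clients point at the intermediate's directory URL, as the router's Caddy virtual hosts in this compare do:

    tls {
        ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
    }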

modules/ca/cert.pem Normal file
View File

@ -0,0 +1,13 @@
-----BEGIN CERTIFICATE-----
MIIB+TCCAVqgAwIBAgIQIZdaIUsuJdjnu7DQP1N8oTAKBggqhkjOPQQDBDAXMRUw
EwYDVQQDEwxIaWxsaW9uIEFDTUUwHhcNMjQwODAxMjIyMjEwWhcNMzQwNzMwMjIy
MjEwWjAXMRUwEwYDVQQDEwxIaWxsaW9uIEFDTUUwgZswEAYHKoZIzj0CAQYFK4EE
ACMDgYYABAAJI3z1PrV97EFc1xaENcr6ML1z6xdXTy+ReHtf42nWsw+c3WDKzJ45
+xHJ/p2BTOR5+NQ7RGQQ68zmFJnEYTYDogAw6U9YzxxDGlG1HlgnZ9PPmXoF+PFl
Zy2WZCiDPx5KDJcjTPzLV3ITt4fl3PMA12BREVeonvrvRLcpVrMfS2b7wKNFMEMw
DgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwHQYDVR0OBBYEFFBT
fMT0uUbS+lVUbGKK8/SZHPISMAoGCCqGSM49BAMEA4GMADCBiAJCAPNIwrQztPrN
MaHB3J0lNVODIGwQWblt99vnjqIWOKJhgckBxaElyInsyt8dlnmTCpOCJdY4BA+K
Nr87AfwIWdAaAkIBV5i4zXPXVKblGKnmM0FomFSbq2cYE3pmi5BO1StakH1kEHlf
vbkdwFgkw2MlARp0Ka3zbWivBG9zjPoZtsL/8tk=
-----END CERTIFICATE-----

modules/ca/consumer.nix Normal file
View File

@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.consumer;
in
{
options.custom.ca.consumer = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
security.pki.certificates = [ (builtins.readFile ./cert.pem) ];
};
}

modules/ca/default.nix Normal file
View File

@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./consumer.nix
./service.nix
];
}

modules/ca/service.nix Normal file
View File

@ -0,0 +1,48 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.service;
in
{
options.custom.ca.service = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
users.users.step-ca.uid = config.ids.uids.step-ca;
users.groups.step-ca.gid = config.ids.gids.step-ca;
services.step-ca = {
enable = true;
address = config.custom.dns.tailscale.ipv4;
port = 8443;
intermediatePasswordFile = "/data/system/ca/intermediate.psk";
settings = {
root = ./cert.pem;
crt = "/data/system/ca/intermediate.crt";
key = "/data/system/ca/intermediate.pem";
dnsNames = [ "ca.ts.hillion.co.uk" ];
logger = { format = "text"; };
db = {
type = "badgerv2";
dataSource = "/var/lib/step-ca/db";
};
authority = {
provisioners = [
{
type = "ACME";
name = "acme";
}
];
};
};
};
};
}

View File

@@ -22,8 +22,8 @@ in
     default = null;
   };
   plotDirectories = lib.mkOption {
-    type = with lib.types; nullOr (listOf str);
-    default = null;
+    type = with lib.types; listOf str;
+    default = [ ];
   };
   openFirewall = lib.mkOption {
     type = lib.types.bool;
@@ -46,7 +46,7 @@ in
   };
   virtualisation.oci-containers.containers.chia = {
-    image = "ghcr.io/chia-network/chia:2.1.4";
+    image = "ghcr.io/chia-network/chia:2.4.3";
     ports = [ "8444" ];
     extraOptions = [
       "--uidmap=0:${toString config.users.users.chia.uid}:1"
@@ -62,6 +62,11 @@ in
     };
   };
+  systemd.tmpfiles.rules = [
+    "d ${cfg.path} 0700 chia chia - -"
+    "d ${cfg.path}/.chia 0700 chia chia - -"
+  ];
   networking.firewall = lib.mkIf cfg.openFirewall {
     allowedTCPPorts = [ 8444 ];
   };

View File

@ -1,60 +0,0 @@
{ pkgs, lib, config, agenix, ... }:
{
imports = [
../home/default.nix
./shell.nix
./ssh.nix
./update_scripts.nix
];
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users."jake" = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
gh
git
htop
nix
sapling
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
custom.hostinfo.enable = true;
}

View File

@ -1,38 +0,0 @@
{ pkgs, lib, config, ... }:
{
users.users."jake".openssh.authorizedKeys.keys = [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo"
];
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHosts = {
# Global Internet hosts
"ssh.gitea.hillion.co.uk".publicKey = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==";
# Tailscale hosts
"dancefloor.dancefloor.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY";
"gendry.jakehillion.terminals.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c";
"homeassistant.homeassistant.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux";
"microserver.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw";
"microserver.parents.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0cjjNQPnJwpu4wcYmvfjB1jlIfZwMxT+3nBusoYQFr";
"router.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu";
"tywin.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGATsjWO0qZNFp2BhfgDuWi+e/ScMkFxp79N2OZoed1k";
};
programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
}

View File

@@ -3,19 +3,25 @@
 {
   imports = [
     ./backups/default.nix
+    ./ca/default.nix
     ./chia.nix
-    ./common/hostinfo.nix
+    ./defaults.nix
     ./desktop/awesome/default.nix
+    ./dns.nix
+    ./home/default.nix
+    ./hostinfo.nix
     ./ids.nix
     ./impermanence.nix
     ./locations.nix
+    ./prometheus/default.nix
     ./resilio.nix
+    ./sched_ext.nix
     ./services/default.nix
+    ./shell/default.nix
+    ./ssh/default.nix
     ./storj.nix
-    ./tailscale.nix
     ./users.nix
-    ./www/global.nix
+    ./www/default.nix
+    ./www/www-repo.nix
   ];
   options.custom = {

modules/defaults.nix Normal file
View File

@ -0,0 +1,70 @@
{ pkgs, nixpkgs-unstable, lib, config, agenix, ... }:
{
options.custom.defaults = lib.mkEnableOption "defaults";
config = lib.mkIf config.custom.defaults {
hardware.enableAllFirmware = true;
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users.${config.custom.user} = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
uid = config.ids.uids.${config.custom.user};
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
gh
git
htop
nix
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
nix.registry.nixpkgs-unstable.to = {
type = "path";
path = nixpkgs-unstable;
};
# Delegation
custom.ca.consumer.enable = true;
custom.dns.enable = true;
custom.home.defaults = true;
custom.hostinfo.enable = true;
custom.prometheus.client.enable = true;
custom.shell.enable = true;
custom.ssh.enable = true;
};
}
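
The delegation block above means a host opts into this entire baseline with one flag. A minimal consuming host might look like this (a sketch; the file path and host are illustrative, not from this change):

# hosts/example/default.nix (hypothetical)
{ ... }:
{
  custom.defaults = true; # pulls in ca, dns, home, hostinfo, prometheus, shell and ssh via the delegation above
}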

modules/dns.nix Normal file

@@ -0,0 +1,124 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.dns;
in
{
options.custom.dns = {
enable = lib.mkEnableOption "dns";
authoritative = {
ipv4 = lib.mkOption {
description = "authoritative ipv4 mappings";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "authoritative ipv6 mappings";
readOnly = true;
};
};
tailscale =
{
ipv4 = lib.mkOption {
description = "tailscale ipv4 address";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "tailscale ipv6 address";
readOnly = true;
};
};
};
config = lib.mkIf cfg.enable {
custom.dns.authoritative = {
ipv4 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "100.113.188.46";
};
home = {
microserver = "100.105.131.47";
router = "100.105.71.48";
};
jakehillion-terminals = { gendry = "100.70.100.77"; };
lt = { be = "100.105.166.79"; };
pop = {
li = "100.106.87.35";
sodium = "100.87.188.4";
stinger = "100.117.89.126";
};
rig = {
merlin = "100.69.181.56";
};
st = {
phoenix = "100.92.37.106";
};
storage = {
theon = "100.104.142.22";
};
};
};
};
};
};
ipv6 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "fd7a:115c:a1e0::2a01:bc2f";
};
home = {
microserver = "fd7a:115c:a1e0:ab12:4843:cd96:6269:832f";
router = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
};
jakehillion-terminals = { gendry = "fd7a:115c:a1e0:ab12:4843:cd96:6246:644d"; };
lt = { be = "fd7a:115c:a1e0::9001:a64f"; };
pop = {
li = "fd7a:115c:a1e0::e701:5723";
sodium = "fd7a:115c:a1e0::3701:bc04";
stinger = "fd7a:115c:a1e0::8401:597e";
};
rig = {
merlin = "fd7a:115c:a1e0::8d01:b538";
};
st = {
phoenix = "fd7a:115c:a1e0::6901:256a";
};
storage = {
theon = "fd7a:115c:a1e0::4aa8:8e16";
};
};
};
};
};
};
};
custom.dns.tailscale =
let
lookupFqdn = lib.attrsets.attrByPath (lib.reverseList (lib.splitString "." config.networking.fqdn)) null;
in
{
ipv4 = lookupFqdn cfg.authoritative.ipv4;
ipv6 = lookupFqdn cfg.authoritative.ipv6;
};
networking.hosts =
let
mkHosts = hosts:
(lib.collect (x: (builtins.hasAttr "name" x && builtins.hasAttr "value" x))
(lib.mapAttrsRecursive
(path: value:
lib.nameValuePair value [ (lib.concatStringsSep "." (lib.reverseList path)) ])
hosts));
in
builtins.listToAttrs (mkHosts cfg.authoritative.ipv4 ++ mkHosts cfg.authoritative.ipv6);
};
}
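
The mkHosts helper turns the nested authoritative tree into hosts entries by reversing each attribute path back into an FQDN. A standalone sketch (not repo code) that can be checked with nix-instantiate --eval:

let
  lib = (import <nixpkgs> { }).lib;
  authoritative = { uk.co.hillion.ts.pop.sodium = "100.87.188.4"; };
  mkHosts = hosts:
    lib.collect (x: builtins.hasAttr "name" x && builtins.hasAttr "value" x)
      (lib.mapAttrsRecursive
        (path: value: lib.nameValuePair value [ (lib.concatStringsSep "." (lib.reverseList path)) ])
        hosts);
in
builtins.listToAttrs (mkHosts authoritative)
# => { "100.87.188.4" = [ "sodium.pop.ts.hillion.co.uk" ]; }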

modules/home/default.nix

@@ -3,24 +3,47 @@
 {
   imports = [
     ./git.nix
+    ./neovim.nix
     ./tmux/default.nix
   ];

-  config = {
-    home-manager = {
-      users.root.home = {
-        stateVersion = "22.11";
-
-        ## Set an empty ZSH config and defer to the global one
-        file.".zshrc".text = "";
-      };
-      users."${config.custom.user}".home = {
-        stateVersion = "22.11";
-
-        ## Set an empty ZSH config and defer to the global one
-        file.".zshrc".text = "";
-      };
-    };
-  };
+  options.custom.home.defaults = lib.mkEnableOption "home";
+
+  config = lib.mkIf config.custom.home.defaults {
+    home-manager =
+      let
+        stateVersion = if (builtins.compareVersions config.system.stateVersion "24.05") > 0 then config.system.stateVersion else "22.11";
+      in
+      {
+        users.root.home = {
+          inherit stateVersion;
+
+          ## Set an empty ZSH config and defer to the global one
+          file.".zshrc".text = "";
+        };
+        users."${config.custom.user}" = {
+          home = {
+            inherit stateVersion;
+          };
+          services = {
+            ssh-agent.enable = true;
+          };
+          programs = {
+            zoxide = {
+              enable = true;
+              options = [ "--cmd cd" ];
+            };
+            zsh.enable = true;
+          };
+        };
+      };
+
+    # Delegation
+    custom.home.git.enable = true;
+    custom.home.neovim.enable = true;
+    custom.home.tmux.enable = true;
+  };
 }
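
The stateVersion selection keeps existing installs pinned while newer hosts follow the system value. Spelled out with example inputs (illustrative):

# builtins.compareVersions "22.11" "24.05" == -1 -> stateVersion = "22.11"
# builtins.compareVersions "24.05" "24.05" ==  0 -> stateVersion = "22.11" (the comparison is strictly greater-than)
# builtins.compareVersions "24.11" "24.05" ==  1 -> stateVersion = "24.11"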

modules/home/git.nix

@@ -1,21 +1,43 @@
 { pkgs, lib, config, ... }:
+let
+  cfg = config.custom.home.git;
+in
 {
-  home-manager.users.jake.programs.git = {
-    enable = true;
-    extraConfig = {
-      user = {
-        email = "jake@hillion.co.uk";
-        name = "Jake Hillion";
-      };
-      pull = {
-        rebase = true;
-      };
-      merge = {
-        conflictstyle = "diff3";
-      };
-      init = {
-        defaultBranch = "main";
-      };
-    };
-  };
+  options.custom.home.git = {
+    enable = lib.mkEnableOption "git";
+  };
+
+  config = lib.mkIf cfg.enable {
+    home-manager.users.jake.programs = {
+      sapling = lib.mkIf (config.custom.user == "jake") {
+        enable = true;
+        userName = "Jake Hillion";
+        userEmail = "jake@hillion.co.uk";
+
+        extraConfig = {
+          ui = {
+            "merge:interactive" = ":merge3";
+          };
+        };
+      };
+
+      git = lib.mkIf (config.custom.user == "jake") {
+        enable = true;
+        userName = "Jake Hillion";
+        userEmail = "jake@hillion.co.uk";
+
+        extraConfig = {
+          pull = {
+            rebase = true;
+          };
+          merge = {
+            conflictstyle = "diff3";
+          };
+          init = {
+            defaultBranch = "main";
+          };
+        };
+      };
+    };
+  };
 }

modules/home/neovim.nix Normal file

@@ -0,0 +1,82 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.neovim;
in
{
options.custom.home.neovim = {
enable = lib.mkEnableOption "neovim";
};
config = lib.mkIf config.custom.home.neovim.enable {
home-manager.users."${config.custom.user}".programs.neovim = {
enable = true;
viAlias = true;
vimAlias = true;
plugins = with pkgs.vimPlugins; [
a-vim
dracula-nvim
telescope-nvim
];
extraLuaConfig = ''
-- Logical options
vim.opt.splitright = true
vim.opt.splitbelow = true
vim.opt.ignorecase = true
vim.opt.smartcase = true
vim.opt.expandtab = true
vim.opt.tabstop = 2
vim.opt.shiftwidth = 2
-- Appearance
vim.cmd[[colorscheme dracula-soft]]
vim.opt.number = true
vim.opt.relativenumber = true
-- Telescope
require('telescope').setup({
pickers = {
find_files = {
find_command = {
"${pkgs.fd}/bin/fd",
"--type=f",
"--strip-cwd-prefix",
"--no-require-git",
"--hidden",
"--exclude=.sl",
},
},
},
defaults = {
vimgrep_arguments = {
"${pkgs.ripgrep}/bin/rg",
"--color=never",
"--no-heading",
"--with-filename",
"--line-number",
"--column",
"--smart-case",
"--no-require-git",
"--hidden",
"--glob=!.sl",
},
},
})
-- Key bindings
vim.g.mapleader = ","
--- Key bindings: Telescope
local telescope_builtin = require('telescope.builtin')
vim.keymap.set('n', '<leader>ff', telescope_builtin.find_files, {})
vim.keymap.set('n', '<leader>fg', telescope_builtin.live_grep, {})
vim.keymap.set('n', '<leader>fb', telescope_builtin.buffers, {})
vim.keymap.set('n', '<leader>fh', telescope_builtin.help_tags, {})
'';
};
};
}
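
Interpolating ${pkgs.fd} and ${pkgs.ripgrep} bakes absolute store paths into the generated Lua, so Telescope keeps working even when neither tool is on $PATH. A sketch of the mechanism (not repo code):

let pkgs = import <nixpkgs> { }; in "${pkgs.fd}/bin/fd"
# => "/nix/store/<hash>-fd-<version>/bin/fd"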

modules/home/tmux/.tmux.conf

@@ -1,10 +1,25 @@
 setw -g mouse on

+# Large history
+set -g history-limit 500000
+
 # Bindings
 bind C-Y set-window-option synchronize-panes
 bind -n C-k clear-history

+# Status pane
+set -g status-right-length 100
+set -g status-right "#(uname -r) • #(hostname -f | sed 's/\.ts\.hillion\.co\.uk//g') • %d-%b-%y %H:%M"
+
 # New panes in the same directory
 bind '"' split-window -c "#{pane_current_path}"
 bind % split-window -h -c "#{pane_current_path}"
 bind c new-window -c "#{pane_current_path}"
+
+# Start indices at 1 to match the keyboard
+set -g base-index 1
+setw -g pane-base-index 1
+
+# Open a new session when attaching if one isn't already open
+# Must come after the base-index settings
+new-session

modules/home/tmux/default.nix

@@ -1,8 +1,17 @@
 { pkgs, lib, config, ... }:
+let
+  cfg = config.custom.home.tmux;
+in
 {
-  home-manager.users.jake.programs.tmux = {
-    enable = true;
-    extraConfig = lib.readFile ./.tmux.conf;
+  options.custom.home.tmux = {
+    enable = lib.mkEnableOption "tmux";
+  };
+
+  config = lib.mkIf cfg.enable {
+    home-manager.users.jake.programs.tmux = {
+      enable = true;
+      extraConfig = lib.readFile ./.tmux.conf;
+    };
   };
 }

modules/ids.nix

@@ -6,6 +6,10 @@
   ## Defined System Users (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
   unifi = 183;
   chia = 185;
+  gitea = 186;
+  node-exporter = 188;
+  step-ca = 198;
+  isponsorblocktv = 199;

   ## Consistent People
   jake = 1000;
@@ -15,6 +19,10 @@
   ## Defined System Groups (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
   unifi = 183;
   chia = 185;
+  gitea = 186;
+  node-exporter = 188;
+  step-ca = 198;
+  isponsorblocktv = 199;

   ## Consistent Groups
   mediaaccess = 1200;

modules/impermanence.nix

@@ -2,7 +2,6 @@
 let
   cfg = config.custom.impermanence;
-
-  listIf = (enable: x: if enable then x else [ ]);
 in
 {
   options.custom.impermanence = {
@@ -12,6 +11,13 @@ in
       type = lib.types.str;
       default = "/data";
     };
+    cache = {
+      enable = lib.mkEnableOption "impermanence.cache";
+      path = lib.mkOption {
+        type = lib.types.str;
+        default = "/cache";
+      };
+    };

     users = lib.mkOption {
       type = with lib.types; listOf str;
@@ -40,37 +46,76 @@ in
       gitea.stateDir = "${cfg.base}/system/var/lib/gitea";
     };

-    environment.persistence."${cfg.base}/system" = {
-      hideMounts = true;
-      directories = [
-        "/etc/nixos"
-      ] ++ (listIf config.custom.tailscale.enable [ "/var/lib/tailscale" ]) ++
-      (listIf config.services.zigbee2mqtt.enable [ config.services.zigbee2mqtt.dataDir ]) ++
-      (listIf config.services.postgresql.enable [ config.services.postgresql.dataDir ]) ++
-      (listIf config.hardware.bluetooth.enable [ "/var/lib/bluetooth" ]) ++
-      (listIf config.custom.services.unifi.enable [ "/var/lib/unifi" ]) ++
-      (listIf (config.virtualisation.oci-containers.containers != { }) [ "/var/lib/containers" ]);
+    custom.chia = lib.mkIf config.custom.chia.enable {
+      path = lib.mkOverride 999 "/data/chia";
     };

+    services.resilio = lib.mkIf config.services.resilio.enable {
+      directoryRoot = lib.mkOverride 999 "${cfg.base}/sync";
+    };
+
+    services.plex = lib.mkIf config.services.plex.enable {
+      dataDir = lib.mkOverride 999 "/data/plex";
+    };
+
+    services.home-assistant = lib.mkIf config.services.home-assistant.enable {
+      configDir = lib.mkOverride 999 "/data/home-assistant";
+    };
+
+    environment.persistence = lib.mkMerge [
+      {
+        "${cfg.base}/system" = {
+          hideMounts = true;
+          directories = [
+            "/etc/nixos"
+          ] ++ (lib.lists.optional config.services.tailscale.enable "/var/lib/tailscale") ++
+          (lib.lists.optional config.services.zigbee2mqtt.enable config.services.zigbee2mqtt.dataDir) ++
+          (lib.lists.optional config.services.postgresql.enable config.services.postgresql.dataDir) ++
+          (lib.lists.optional config.hardware.bluetooth.enable "/var/lib/bluetooth") ++
+          (lib.lists.optional config.custom.services.unifi.enable "/var/lib/unifi") ++
+          (lib.lists.optional (config.virtualisation.oci-containers.containers != { }) "/var/lib/containers") ++
+          (lib.lists.optional config.services.tang.enable "/var/lib/private/tang") ++
+          (lib.lists.optional config.services.caddy.enable "/var/lib/caddy") ++
+          (lib.lists.optional config.services.prometheus.enable "/var/lib/${config.services.prometheus.stateDir}") ++
+          (lib.lists.optional config.custom.services.isponsorblocktv.enable "${config.custom.services.isponsorblocktv.dataDir}") ++
+          (lib.lists.optional config.services.step-ca.enable "/var/lib/step-ca/db");
+        };
+      }
+      (lib.mkIf cfg.cache.enable {
+        "${cfg.cache.path}/system" = {
+          hideMounts = true;
+          directories = (lib.lists.optional config.services.postgresqlBackup.enable config.services.postgresqlBackup.location);
+        };
+      })
+    ];
+
     home-manager.users =
       let
-        mkUser = (x: {
-          name = x;
-          value = {
-            home = {
-              persistence."/data/users/${x}" = {
-                allowOther = false;
+        mkUser = (x:
+          let
+            homeCfg = config.home-manager.users."${x}";
+          in
+          {
+            name = x;
+            value = {
+              home = {
+                persistence."/data/users/${x}" = {
+                  allowOther = false;

-                files = cfg.userExtraFiles.${x} or [ ];
-                directories = cfg.userExtraDirs.${x} or [ ];
+                  files = cfg.userExtraFiles.${x} or [ ];
+                  directories = cfg.userExtraDirs.${x} or [ ];
+                };
+
+                sessionVariables = lib.attrsets.optionalAttrs homeCfg.programs.zoxide.enable { _ZO_DATA_DIR = "/data/users/${x}/.local/share/zoxide"; };
               };
-            };

-            file.".zshrc".text = lib.mkForce ''
-              HISTFILE=/data/users/${x}/.zsh_history
-            '';
-          };
-        });
+              programs = {
+                zsh.history.path = lib.mkOverride 999 "/data/users/${x}/.zsh_history";
+              };
+            };
+          });
       in
       builtins.listToAttrs (builtins.map mkUser cfg.users);
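
The bespoke listIf helper is replaced by the stock lib.lists.optional, which wraps a single element rather than taking a list. A repl-style sketch (not repo code):

let lib = (import <nixpkgs> { }).lib; in
[ "/etc/nixos" ]
++ lib.lists.optional true "/var/lib/tailscale" # -> [ "/var/lib/tailscale" ]
++ lib.lists.optional false "/var/lib/unifi"    # -> [ ]
# => [ "/etc/nixos" "/var/lib/tailscale" ]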

modules/locations.nix

@@ -11,25 +11,43 @@ in
     };

     locations = lib.mkOption {
-      default = {
-        services = {
-          downloads = "tywin.storage.ts.hillion.co.uk";
-          gitea = "jorah.cx.ts.hillion.co.uk";
-          homeassistant = "microserver.home.ts.hillion.co.uk";
-          mastodon = "";
-          matrix = "jorah.cx.ts.hillion.co.uk";
-          unifi = "jorah.cx.ts.hillion.co.uk";
-        };
-      };
+      readOnly = true;
     };
   };

-  config = lib.mkIf cfg.autoServe {
-    custom.services.downloads.enable = cfg.locations.services.downloads == config.networking.fqdn;
-    custom.services.gitea.enable = cfg.locations.services.gitea == config.networking.fqdn;
-    custom.services.homeassistant.enable = cfg.locations.services.homeassistant == config.networking.fqdn;
-    custom.services.mastodon.enable = cfg.locations.services.mastodon == config.networking.fqdn;
-    custom.services.matrix.enable = cfg.locations.services.matrix == config.networking.fqdn;
-    custom.services.unifi.enable = cfg.locations.services.unifi == config.networking.fqdn;
-  };
+  config = lib.mkMerge [
+    {
+      custom.locations.locations = {
+        services = {
+          authoritative_dns = [ "boron.cx.ts.hillion.co.uk" ];
+          downloads = "phoenix.st.ts.hillion.co.uk";
+          gitea = "boron.cx.ts.hillion.co.uk";
+          homeassistant = "stinger.pop.ts.hillion.co.uk";
+          mastodon = "";
+          matrix = "boron.cx.ts.hillion.co.uk";
+          prometheus = "boron.cx.ts.hillion.co.uk";
+          restic = "phoenix.st.ts.hillion.co.uk";
+          tang = [
+            "li.pop.ts.hillion.co.uk"
+            "microserver.home.ts.hillion.co.uk"
+            "sodium.pop.ts.hillion.co.uk"
+          ];
+          unifi = "boron.cx.ts.hillion.co.uk";
+          version_tracker = [ "boron.cx.ts.hillion.co.uk" ];
+        };
+      };
+    }
+    (lib.mkIf cfg.autoServe
+      {
+        custom.services = lib.mapAttrsRecursive
+          (path: value: {
+            enable =
+              if builtins.isList value
+              then builtins.elem config.networking.fqdn value
+              else config.networking.fqdn == value;
+          })
+          cfg.locations.services;
+      })
+  ];
 }
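
With autoServe, every custom.services.<name>.enable is now derived mechanically from the location map, and list-valued entries allow a service to run on several hosts. The same mapAttrsRecursive call on a cut-down set, evaluable on its own (fqdn and services chosen for illustration):

let
  lib = (import <nixpkgs> { }).lib;
  fqdn = "boron.cx.ts.hillion.co.uk";
  services = {
    gitea = "boron.cx.ts.hillion.co.uk";
    tang = [ "li.pop.ts.hillion.co.uk" "sodium.pop.ts.hillion.co.uk" ];
  };
in
lib.mapAttrsRecursive
  (path: value: {
    enable = if builtins.isList value then builtins.elem fqdn value else fqdn == value;
  })
  services
# => { gitea = { enable = true; }; tang = { enable = false; }; }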

modules/prometheus/client.nix Normal file

@@ -0,0 +1,24 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.prometheus.client;
in
{
options.custom.prometheus.client = {
enable = lib.mkEnableOption "prometheus-client";
};
config = lib.mkIf cfg.enable {
users.users.node-exporter.uid = config.ids.uids.node-exporter;
users.groups.node-exporter.gid = config.ids.gids.node-exporter;
services.prometheus.exporters.node = {
enable = true;
port = 9000;
enabledCollectors = [
"systemd"
];
};
};
}

modules/prometheus/default.nix Normal file

@@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./client.nix
./service.nix
];
}

modules/prometheus/service.nix Normal file

@@ -0,0 +1,67 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.services.prometheus;
in
{
options.custom.services.prometheus = {
enable = lib.mkEnableOption "prometheus-client";
};
config = lib.mkIf cfg.enable {
services.prometheus = {
enable = true;
globalConfig = {
scrape_interval = "15s";
};
retentionTime = "1y";
scrapeConfigs = [{
job_name = "node";
static_configs = [{
targets = builtins.map (x: "${x}:9000") (builtins.attrNames (builtins.readDir ../../hosts));
}];
}];
rules = [
''
groups:
- name: service alerting
rules:
- alert: ResilioSyncDown
expr: node_systemd_unit_state{ name = 'resilio.service', state != 'active' } > 0
for: 10m
annotations:
summary: "Resilio Sync systemd service is down"
description: "The Resilio Sync systemd service is not active on instance {{ $labels.instance }}."
''
];
};
services.caddy = {
enable = true;
virtualHosts."prometheus.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
reverse_proxy http://localhost:9090
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
'';
};
};
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### is too late at starting. Upstream nixos caddy does restart on failure
### but it's prevented on exit code 1. Set the exit code to 0 (non-failure)
### to override this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
};
}
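
The scrape targets are derived from the repository layout rather than maintained by hand; this assumes hosts/ contains one directory per machine, named by its FQDN. Roughly (directory names illustrative):

# builtins.attrNames (builtins.readDir ../../hosts)
#   => [ "boron.cx.ts.hillion.co.uk" "sodium.pop.ts.hillion.co.uk" ... ]
# builtins.map (x: "${x}:9000") ...
#   => [ "boron.cx.ts.hillion.co.uk:9000" "sodium.pop.ts.hillion.co.uk:9000" ... ]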

modules/resilio.nix

@@ -19,50 +19,93 @@ in
       type = with lib.types; uniq (listOf attrs);
       default = [ ];
     };
+
+    backups = {
+      enable = lib.mkEnableOption "resilio.backups";
+    };
   };

-  config = lib.mkIf cfg.enable {
-    users.users =
-      let
-        mkUser =
-          (user: {
-            name = user;
-            value = {
-              extraGroups = [ "rslsync" ];
-            };
-          });
-      in
-      builtins.listToAttrs (builtins.map mkUser cfg.extraUsers);
-
-    age.secrets =
-      let
-        mkSecret = (secret: {
-          name = secret.name;
-          value = {
-            file = secret.file;
-            owner = "rslsync";
-            group = "rslsync";
-          };
-        });
-      in
-      builtins.listToAttrs (builtins.map (folder: mkSecret folder.secret) cfg.folders);
-
-    services.resilio = {
-      enable = true;
-
-      sharedFolders =
-        let
-          mkFolder = name: secret: {
-            directory = "${config.services.resilio.directoryRoot}/${name}";
-            secretFile = "${config.age.secrets."${secret.name}".path}";
-            knownHosts = [ ];
-            searchLAN = true;
-            useDHT = true;
-            useRelayServer = true;
-            useSyncTrash = false;
-            useTracker = true;
-          };
-        in
-        builtins.map (folder: mkFolder folder.name folder.secret) cfg.folders;
-    };
-  };
+  config = lib.mkIf cfg.enable (lib.mkMerge [
+    {
+      users.users =
+        let
+          mkUser =
+            (user: {
+              name = user;
+              value = {
+                extraGroups = [ "rslsync" ];
+              };
+            });
+        in
+        builtins.listToAttrs (builtins.map mkUser cfg.extraUsers);
+
+      age.secrets =
+        let
+          mkSecret = (secret: {
+            name = secret.name;
+            value = {
+              file = secret.file;
+              owner = "rslsync";
+              group = "rslsync";
+            };
+          });
+        in
+        builtins.listToAttrs (builtins.map (folder: mkSecret folder.secret) cfg.folders);
+
+      services.resilio = {
+        enable = true;
+
+        deviceName = lib.mkOverride 999 (lib.strings.concatStringsSep "." (lib.lists.take 2 (lib.strings.splitString "." config.networking.fqdnOrHostName)));
+        storagePath = lib.mkOverride 999 "${config.services.resilio.directoryRoot}/.sync";
+
+        sharedFolders =
+          let
+            mkFolder = name: secret: {
+              directory = "${config.services.resilio.directoryRoot}/${name}";
+              secretFile = "${config.age.secrets."${secret.name}".path}";
+              knownHosts = [ ];
+              searchLAN = true;
+              useDHT = true;
+              useRelayServer = true;
+              useSyncTrash = false;
+              useTracker = true;
+            };
+          in
+          builtins.map (folder: mkFolder folder.name folder.secret) cfg.folders;
+      };
+
+      systemd.services.resilio.unitConfig.RequiresMountsFor = builtins.map (folder: "${config.services.resilio.directoryRoot}/${folder.name}") cfg.folders;
+    }
+    (lib.mkIf cfg.backups.enable {
+      age.secrets."resilio/restic/128G.key" = {
+        file = ../secrets/restic/128G.age;
+        owner = "rslsync";
+        group = "rslsync";
+      };
+
+      services.restic.backups."resilio" = {
+        repository = "rest:https://restic.ts.hillion.co.uk/128G";
+        user = "rslsync";
+        passwordFile = config.age.secrets."resilio/restic/128G.key".path;
+
+        timerConfig = {
+          OnBootSec = "10m";
+          OnUnitInactiveSec = "15m";
+          RandomizedDelaySec = "5m";
+        };
+
+        paths = [ config.services.resilio.directoryRoot ];
+        exclude = [
+          "${config.services.resilio.directoryRoot}/.sync"
+          "${config.services.resilio.directoryRoot}/*/.sync"
+          "${config.services.resilio.directoryRoot}/resources/media/films"
+          "${config.services.resilio.directoryRoot}/resources/media/iso"
+          "${config.services.resilio.directoryRoot}/resources/media/tv"
+          "${config.services.resilio.directoryRoot}/dad/media"
+        ];
+      };
+    })
+  ]);
 }
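
The deviceName override trims the Tailscale FQDN to its first two labels, giving stable short names in the Resilio UI. An evaluable sketch (host name illustrative):

let lib = (import <nixpkgs> { }).lib; in
lib.strings.concatStringsSep "." (lib.lists.take 2 (lib.strings.splitString "." "boron.cx.ts.hillion.co.uk"))
# => "boron.cx"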

modules/sched_ext.nix Normal file

@@ -0,0 +1,22 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.sched_ext;
in
{
options.custom.sched_ext = {
enable = lib.mkEnableOption "sched_ext";
};
config = lib.mkIf cfg.enable {
assertions = [{
assertion = config.boot.kernelPackages.kernelAtLeast "6.12";
message = "sched_ext requires a kernel >=6.12";
}];
boot.kernelPackages = if pkgs.linuxPackages.kernelAtLeast "6.12" then pkgs.linuxPackages else (if pkgs.linuxPackages_latest.kernelAtLeast "6.12" then pkgs.linuxPackages_latest else pkgs.unstable.linuxPackages_testing);
environment.systemPackages = with pkgs; [ unstable.scx.layered unstable.scx.lavd ];
};
}
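
The kernelPackages expression is a three-way fallback; restated (these are real nixpkgs attribute paths, but which branch wins depends on the pinned channels):

# 1. pkgs.linuxPackages                    if the default kernel is already >= 6.12
# 2. pkgs.linuxPackages_latest             if that one is >= 6.12
# 3. pkgs.unstable.linuxPackages_testing   otherwise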

modules/services/authoritative_dns.nix Normal file

@@ -0,0 +1,56 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.services.authoritative_dns;
in
{
options.custom.services.authoritative_dns = {
enable = lib.mkEnableOption "authoritative_dns";
};
config = lib.mkIf cfg.enable {
services.nsd = {
enable = true;
zones = {
"ts.hillion.co.uk" = {
data =
let
makeRecords = type: s: (lib.concatStringsSep "\n" (lib.collect builtins.isString (lib.mapAttrsRecursive (path: value: "${lib.concatStringsSep "." (lib.reverseList path)} 86400 ${type} ${value}") s)));
in
''
$ORIGIN ts.hillion.co.uk.
$TTL 86400
ts.hillion.co.uk. IN SOA ns1.hillion.co.uk. hostmaster.hillion.co.uk. (
1 ;Serial
7200 ;Refresh
3600 ;Retry
1209600 ;Expire
3600 ;Negative response caching TTL
)
86400 NS ns1.hillion.co.uk.
ca 21600 CNAME sodium.pop.ts.hillion.co.uk.
restic 21600 CNAME ${config.custom.locations.locations.services.restic}.
prometheus 21600 CNAME ${config.custom.locations.locations.services.prometheus}.
deluge.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
prowlarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
radarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
sonarr.downloads 21600 CNAME ${config.custom.locations.locations.services.downloads}.
graphs.router.home 21600 CNAME router.home.ts.hillion.co.uk.
zigbee2mqtt.home 21600 CNAME router.home.ts.hillion.co.uk.
charlie.kvm 21600 CNAME router.home.ts.hillion.co.uk.
hammer.kvm 21600 CNAME router.home.ts.hillion.co.uk.
'' + (makeRecords "A" config.custom.dns.authoritative.ipv4.uk.co.hillion.ts) + "\n\n" + (makeRecords "AAAA" config.custom.dns.authoritative.ipv6.uk.co.hillion.ts);
};
};
};
};
}
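
makeRecords flattens one branch of the custom.dns.authoritative tree into zone-file lines, again by reversing attribute paths. A standalone sketch (input values illustrative):

let
  lib = (import <nixpkgs> { }).lib;
  makeRecords = type: s: lib.concatStringsSep "\n"
    (lib.collect builtins.isString
      (lib.mapAttrsRecursive (path: value: "${lib.concatStringsSep "." (lib.reverseList path)} 86400 ${type} ${value}") s));
in
makeRecords "A" { pop = { sodium = "100.87.188.4"; }; }
# => "sodium.pop 86400 A 100.87.188.4"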

modules/services/default.nix

@@ -2,11 +2,15 @@
 {
   imports = [
+    ./authoritative_dns.nix
     ./downloads.nix
-    ./gitea.nix
+    ./gitea/default.nix
     ./homeassistant.nix
+    ./isponsorblocktv.nix
     ./mastodon/default.nix
     ./matrix.nix
+    ./restic.nix
+    ./tang.nix
     ./unifi.nix
     ./version_tracker.nix
     ./zigbee2mqtt.nix

modules/services/downloads.nix

@@ -24,27 +24,33 @@ in
   };

   config = lib.mkIf cfg.enable {
-    services.caddy = {
-      enable = true;
-      virtualHosts = builtins.listToAttrs (builtins.map
-        (x: {
-          name = "http://${x}.downloads.ts.hillion.co.uk";
-          value = {
-            listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
-            extraConfig = "reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock";
-          };
-        }) [ "prowlarr" "sonarr" "radarr" "deluge" ]);
-    };
-
-    ## Wireguard
     age.secrets."wireguard/downloads".file = ../../secrets/wireguard/downloads.age;
     age.secrets."deluge/auth" = {
       file = ../../secrets/deluge/auth.age;
       owner = "deluge";
     };

+    services.caddy = {
+      enable = true;
+      virtualHosts = builtins.listToAttrs (builtins.map
+        (x: {
+          name = "${x}.downloads.ts.hillion.co.uk";
+          value = {
+            listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
+            extraConfig = ''
+              reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock
+              tls {
+                ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
+              }
+            '';
+          };
+        }) [ "prowlarr" "sonarr" "radarr" "deluge" ]);
+    };
+
+    ## Wireguard
     networking.wireguard.interfaces."downloads" = {
       privateKeyFile = config.age.secrets."wireguard/downloads".path;
       ips = [ "10.2.0.2/32" ];
@@ -132,7 +138,10 @@ in
       script = with pkgs; "${iproute2}/bin/ip link set up lo";
     };

-    networking.hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
+    networking = {
+      nameservers = [ "1.1.1.1" "8.8.8.8" ];
+      hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
+    };

     services = {
       prowlarr.enable = true;

modules/services/gitea/actions.nix Normal file

@@ -0,0 +1,105 @@
{ config, lib, pkgs, ... }:
let
cfg = config.custom.services.gitea.actions;
in
{
options.custom.services.gitea.actions = {
enable = lib.mkEnableOption "gitea-actions";
labels = lib.mkOption {
type = with lib.types; listOf str;
default = [
"ubuntu-latest:docker://node:16-bullseye"
"ubuntu-20.04:docker://node:16-bullseye"
];
};
tokenSecret = lib.mkOption {
type = lib.types.path;
};
};
config = lib.mkIf cfg.enable {
age.secrets."gitea/actions/token".file = cfg.tokenSecret;
# Run gitea-actions in a container and firewall it such that it can only
# access the Internet (not private networks).
containers."gitea-actions" = {
autoStart = true;
ephemeral = true;
privateNetwork = true; # all traffic goes through ve-gitea-actions on the host
hostAddress = "10.108.27.1";
localAddress = "10.108.27.2";
extraFlags = [
# Extra system calls required to nest Docker, taken from https://wiki.archlinux.org/title/systemd-nspawn
"--system-call-filter=add_key"
"--system-call-filter=keyctl"
"--system-call-filter=bpf"
];
bindMounts = let tokenPath = config.age.secrets."gitea/actions/token".path; in {
"${tokenPath}".hostPath = tokenPath;
};
timeoutStartSec = "5min";
config = (hostConfig: ({ config, pkgs, ... }: {
config = let cfg = hostConfig.custom.services.gitea.actions; in {
system.stateVersion = "23.11";
virtualisation.docker.enable = true;
services.gitea-actions-runner.instances.container = {
enable = true;
url = "https://gitea.hillion.co.uk";
tokenFile = hostConfig.age.secrets."gitea/actions/token".path;
name = "${hostConfig.networking.hostName}";
labels = cfg.labels;
settings = {
runner = {
capacity = 3;
};
cache = {
enabled = true;
host = "10.108.27.2";
port = 41919;
};
};
};
# Drop any packets to private networks
networking = {
firewall.enable = lib.mkForce false;
nftables = {
enable = true;
ruleset = ''
table inet filter {
chain output {
type filter hook output priority 100; policy accept;
ct state { established, related } counter accept
ip daddr 10.0.0.0/8 drop
ip daddr 100.64.0.0/10 drop
ip daddr 172.16.0.0/12 drop
ip daddr 192.168.0.0/16 drop
}
}
'';
};
};
};
})) config;
};
networking.nat = {
enable = true;
externalInterface = "eth0";
internalIPs = [ "10.108.27.2" ];
};
};
}
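
The `config = (hostConfig: (...)) config;` construction captures the host's evaluated configuration before entering the container's module scope, so the inner module can read the host's labels and secret paths. The shape, reduced to a sketch (comments only, not a complete module):

# config = (hostConfig: ({ config, pkgs, ... }: {
#   # here `config` is the container's own configuration,
#   # while `hostConfig` still refers to the captured host
# })) config;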

modules/services/gitea/default.nix Normal file

@@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./actions.nix
./gitea.nix
];
}

modules/services/gitea.nix → modules/services/gitea/gitea.nix

@@ -1,4 +1,4 @@
-{ config, pkgs, lib, nixpkgs-unstable, ... }:
+{ config, pkgs, lib, ... }:

 let
   cfg = config.custom.services.gitea;
@@ -20,39 +20,42 @@ in
   config = lib.mkIf cfg.enable {
     age.secrets = {
       "gitea/mailer_password" = {
-        file = ../../secrets/gitea/mailer_password.age;
+        file = ../../../secrets/gitea/mailer_password.age;
         owner = config.services.gitea.user;
         group = config.services.gitea.group;
       };
       "gitea/oauth_jwt_secret" = {
-        file = ../../secrets/gitea/oauth_jwt_secret.age;
+        file = ../../../secrets/gitea/oauth_jwt_secret.age;
         owner = config.services.gitea.user;
         group = config.services.gitea.group;
         path = "${config.services.gitea.customDir}/conf/oauth2_jwt_secret";
       };
       "gitea/lfs_jwt_secret" = {
-        file = ../../secrets/gitea/lfs_jwt_secret.age;
+        file = ../../../secrets/gitea/lfs_jwt_secret.age;
         owner = config.services.gitea.user;
         group = config.services.gitea.group;
         path = "${config.services.gitea.customDir}/conf/lfs_jwt_secret";
       };
       "gitea/security_secret_key" = {
-        file = ../../secrets/gitea/security_secret_key.age;
+        file = ../../../secrets/gitea/security_secret_key.age;
         owner = config.services.gitea.user;
         group = config.services.gitea.group;
         path = "${config.services.gitea.customDir}/conf/secret_key";
       };
       "gitea/security_internal_token" = {
-        file = ../../secrets/gitea/security_internal_token.age;
+        file = ../../../secrets/gitea/security_internal_token.age;
         owner = config.services.gitea.user;
         group = config.services.gitea.group;
         path = "${config.services.gitea.customDir}/conf/internal_token";
       };
     };

+    users.users.gitea.uid = config.ids.uids.gitea;
+    users.groups.gitea.gid = config.ids.gids.gitea;
+
     services.gitea = {
       enable = true;
-      package = nixpkgs-unstable.legacyPackages.x86_64-linux.gitea;
+      package = pkgs.unstable.gitea;
       mailerPasswordFile = config.age.secrets."gitea/mailer_password".path;
       appName = "Hillion Gitea";
@@ -97,18 +100,14 @@ in
       };
     };

-    boot.kernel.sysctl = {
-      "net.ipv4.ip_forward" = 1;
-      "net.ipv6.conf.all.forwarding" = 1;
-    };
-
     networking.firewall.extraCommands = ''
       # proxy all traffic on public interface to the gitea SSH server
-      iptables -A PREROUTING -t nat -i enp5s0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
-      ip6tables -A PREROUTING -t nat -i enp5s0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
+      iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
+      ip6tables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}

       # proxy locally originating outgoing packets
-      iptables -A OUTPUT -d 95.217.229.104 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
-      ip6tables -A OUTPUT -d 2a01:4f9:4b:3953::2 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
+      iptables -A OUTPUT -d 138.201.252.214 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
+      ip6tables -A OUTPUT -d 2a01:4f8:173:23d2::2 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
     '';
   };
 }

modules/services/homeassistant.nix

@@ -44,27 +44,52 @@ in
         "bluetooth"
         "default_config"
         "esphome"
-        "flux"
+        "fully_kiosk"
         "google_assistant"
         "homekit"
         "met"
         "mobile_app"
         "mqtt"
         "otp"
+        "smartthings"
+        "sonos"
         "sun"
         "switchbot"
+        "waze_travel_time"
       ];
+      customComponents = with pkgs.home-assistant-custom-components; [
+        adaptive_lighting
+      ];

       config = {
         default_config = { };

+        homeassistant = {
+          auth_providers = [
+            { type = "homeassistant"; }
+            {
+              type = "trusted_networks";
+              trusted_networks = [ "10.239.19.4/32" ];
+              trusted_users = {
+                "10.239.19.4" = "fb4979873ecb480d9e3bb336250fa344";
+              };
+              allow_bypass_login = true;
+            }
+          ];
+        };
+
         recorder = {
           db_url = "postgresql://@/homeassistant";
         };

         http = {
           use_x_forwarded_for = true;
-          trusted_proxies = [ "100.96.143.138" ];
+          trusted_proxies = with config.custom.dns.authoritative; [
+            ipv4.uk.co.hillion.ts.cx.boron
+            ipv6.uk.co.hillion.ts.cx.boron
+            ipv4.uk.co.hillion.ts.pop.sodium
+            ipv6.uk.co.hillion.ts.pop.sodium
+          ];
         };

         google_assistant = {
@@ -76,6 +101,9 @@ in
           report_state = true;
           expose_by_default = true;
           exposed_domains = [ "light" ];
+          entity_config = {
+            "input_boolean.sleep_mode" = { };
+          };
         };

         homekit = [{
           filter = {
@@ -85,25 +113,19 @@ in
         bluetooth = { };

-        switch = [
-          {
-            platform = "flux";
-            start_time = "07:00";
-            stop_time = "23:59";
-            mode = "mired";
-            disable_brightness_adjust = true;
-            lights = [
-              "light.bedroom_lamp"
-              "light.bedroom_light"
-              "light.cubby_light"
-              "light.desk_lamp"
-              "light.hallway_light"
-              "light.living_room_lamp"
-              "light.living_room_light"
-              "light.wardrobe_light"
-            ];
-          }
-        ];
+        adaptive_lighting = {
+          lights = [
+            "light.bedroom_lamp"
+            "light.bedroom_light"
+            "light.cubby_light"
+            "light.desk_lamp"
+            "light.hallway_light"
+            "light.living_room_lamp"
+            "light.living_room_light"
+            "light.wardrobe_light"
+          ];
+          min_sunset_time = "21:00";
+        };

         light = [
           {
@@ -111,12 +133,9 @@ in
             lights = {
               bathroom_light = {
                 unique_id = "87a4cbb5-e5a7-44fd-9f28-fec2d6a62538";
-                value_template = "on";
+                value_template = "{{ false if state_attr('script.bathroom_light_switch_if_on', 'last_triggered') > states.sensor.bathroom_motion_sensor_illuminance_lux.last_reported else states('sensor.bathroom_motion_sensor_illuminance_lux') | int > 500 }}";
                 turn_on = { service = "script.noop"; };
-                turn_off = {
-                  service = "switch.turn_on";
-                  entity_id = "switch.bathroom_light";
-                };
+                turn_off = { service = "script.bathroom_light_switch_if_on"; };
               };
             };
           }
@@ -145,6 +164,13 @@ in
           }
         ];

+        input_boolean = {
+          sleep_mode = {
+            name = "Set house to sleep mode";
+            icon = "mdi:sleep";
+          };
+        };
+
         # UI managed expansions
         automation = "!include automations.yaml";
         script = "!include scripts.yaml";

modules/services/isponsorblocktv.nix Normal file

@@ -0,0 +1,62 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.isponsorblocktv;
ver = "v2.2.1";
ctl = pkgs.writeScriptBin "isponsorblocktv-config" ''
#! ${pkgs.runtimeShell}
set -e
sudo systemctl stop podman-isponsorblocktv
sudo ${pkgs.podman}/bin/podman run \
--rm -it \
--uidmap=0:${toString config.users.users.isponsorblocktv.uid}:1 \
--gidmap=0:${toString config.users.groups.isponsorblocktv.gid}:1 \
-v ${cfg.dataDir}:/app/data \
ghcr.io/dmunozv04/isponsorblocktv:${ver} \
--setup-cli
sudo systemctl start podman-isponsorblocktv
'';
in
{
options.custom.services.isponsorblocktv = {
enable = lib.mkEnableOption "isponsorblocktv";
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/isponsorblocktv";
};
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ ctl ];
users.groups.isponsorblocktv = {
gid = config.ids.gids.isponsorblocktv;
};
users.users.isponsorblocktv = {
home = cfg.dataDir;
createHome = true;
isSystemUser = true;
group = "isponsorblocktv";
uid = config.ids.uids.isponsorblocktv;
};
virtualisation.oci-containers.containers.isponsorblocktv = {
image = "ghcr.io/dmunozv04/isponsorblocktv:${ver}";
extraOptions = [
"--uidmap=0:${toString config.users.users.isponsorblocktv.uid}:1"
"--gidmap=0:${toString config.users.groups.isponsorblocktv.gid}:1"
];
volumes = [ "${cfg.dataDir}:/app/data" ];
};
systemd.tmpfiles.rules = [
"d ${cfg.dataDir} 0700 isponsorblocktv isponsorblocktv - -"
];
};
}
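
Both podman invocations map container root onto the dedicated host user, so everything written to /app/data ends up owned by it. With the ids from modules/ids.nix inlined, the flags read:

# --uidmap=0:199:1  (container uid 0 -> host uid 199, range of 1)
# --gidmap=0:199:1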

modules/services/matrix.nix

@@ -41,6 +41,10 @@ in
       owner = "matrix-synapse";
       group = "matrix-synapse";
     };
+
+    "matrix/matrix.hillion.co.uk/syncv3_secret" = {
+      file = ../../secrets/matrix/matrix.hillion.co.uk/syncv3_secret.age;
+    };
   };

   services = {
@@ -76,8 +80,8 @@ in
           x_forwarded = true;
           bind_addresses = [
             "::1"
-            config.custom.tailscale.ipv4Addr
-            config.custom.tailscale.ipv6Addr
+            config.custom.dns.tailscale.ipv4
+            config.custom.dns.tailscale.ipv6
           ];
           resources = [
             {
@@ -114,6 +118,15 @@ in
         };
       };

+      matrix-sliding-sync = {
+        enable = true;
+        environmentFile = config.age.secrets."matrix/matrix.hillion.co.uk/syncv3_secret".path;
+        settings = {
+          SYNCV3_SERVER = "https://matrix.hillion.co.uk";
+          SYNCV3_BINDADDR = "[::]:8009";
+        };
+      };
+
       heisenbridge = lib.mkIf cfg.heisenbridge {
         enable = true;
         owner = "@jake:hillion.co.uk";

modules/services/restic.nix Normal file

@@ -0,0 +1,306 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.restic;
in
{
options.custom.services.restic = {
enable = lib.mkEnableOption "restic http server";
path = lib.mkOption {
type = lib.types.path;
default = "/var/lib/restic";
};
repos = lib.mkOption {
readOnly = true;
type = with lib.types; attrsOf (submodule {
options = {
path = lib.mkOption {
default = null;
type = nullOr str;
};
passwordFile = lib.mkOption {
default = null;
type = nullOr str;
};
environmentFile = lib.mkOption {
default = null;
type = nullOr str;
};
forgetConfig = lib.mkOption {
default = null;
type = nullOr (submodule {
options = {
timerConfig = lib.mkOption {
type = attrs;
};
opts = lib.mkOption {
type = listOf str;
};
};
});
};
clones = lib.mkOption {
default = [ ];
type = listOf (submodule {
options = {
timerConfig = lib.mkOption {
type = attrs;
};
repo = lib.mkOption {
type = str;
};
};
});
};
};
});
default = {
"128G" = {
path = "${cfg.path}/128G";
passwordFile = config.age.secrets."restic/128G.key".path;
forgetConfig = {
timerConfig = {
OnCalendar = "02:30";
RandomizedDelaySec = "1h";
};
opts = [
"--keep-last 48"
"--keep-within-hourly 7d"
"--keep-within-daily 1m"
"--keep-within-weekly 6m"
"--keep-within-monthly 24m"
];
};
clones = [
{
repo = "128G-wasabi";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
{
repo = "128G-backblaze";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
];
};
"1.6T" = {
path = "${cfg.path}/1.6T";
passwordFile = config.age.secrets."restic/1.6T.key".path;
forgetConfig = {
timerConfig = {
OnCalendar = "Wed, 02:30";
RandomizedDelaySec = "4h";
};
opts = [
"--keep-within-daily 14d"
"--keep-within-weekly 2m"
"--keep-within-monthly 18m"
];
};
clones = [
{
repo = "1.6T-wasabi";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
{
repo = "1.6T-backblaze";
timerConfig = {
OnBootSec = "30m";
OnUnitInactiveSec = "60m";
RandomizedDelaySec = "20m";
};
}
];
};
"128G-wasabi" = {
environmentFile = config.age.secrets."restic/128G-wasabi.env".path;
};
"1.6T-wasabi" = {
environmentFile = config.age.secrets."restic/1.6T-wasabi.env".path;
};
"128G-backblaze" = {
environmentFile = config.age.secrets."restic/128G-backblaze.env".path;
};
"1.6T-backblaze" = {
environmentFile = config.age.secrets."restic/1.6T-backblaze.env".path;
};
};
};
};
config = lib.mkIf cfg.enable {
age.secrets = {
"restic/128G.key" = {
file = ../../secrets/restic/128G.age;
owner = "restic";
group = "restic";
};
"restic/128G-wasabi.env".file = ../../secrets/restic/128G-wasabi.env.age;
"restic/128G-backblaze.env".file = ../../secrets/restic/128G-backblaze.env.age;
"restic/1.6T.key" = {
file = ../../secrets/restic/1.6T.age;
owner = "restic";
group = "restic";
};
"restic/1.6T-wasabi.env".file = ../../secrets/restic/1.6T-wasabi.env.age;
"restic/1.6T-backblaze.env".file = ../../secrets/restic/1.6T-backblaze.env.age;
};
services.restic.server = {
enable = true;
appendOnly = true;
extraFlags = [ "--no-auth" ];
dataDir = cfg.path;
listenAddress = "127.0.0.1:8000"; # TODO: can this be a Unix socket?
};
services.caddy = {
enable = true;
virtualHosts."restic.ts.hillion.co.uk".extraConfig = ''
bind ${config.custom.dns.tailscale.ipv4} ${config.custom.dns.tailscale.ipv6}
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy http://localhost:8000
'';
};
systemd =
let
mkRepoInfo = repo_cfg: (if (repo_cfg.passwordFile != null) then {
serviceConfig.LoadCredential = [
"password_file:${repo_cfg.passwordFile}"
];
environment = {
RESTIC_REPOSITORY = repo_cfg.path;
RESTIC_PASSWORD_FILE = "%d/password_file";
};
} else {
serviceConfig.EnvironmentFile = repo_cfg.environmentFile;
});
mkForgetService = name: repo_cfg:
if (repo_cfg.forgetConfig != null) then
({
description = "Restic forget service for ${name}";
serviceConfig = {
User = "restic";
Group = "restic";
};
script = ''
set -xe
${pkgs.restic}/bin/restic forget ${lib.strings.concatStringsSep " " repo_cfg.forgetConfig.opts} \
--prune \
--retry-lock 30m
'';
} // (mkRepoInfo repo_cfg)) else { };
mkForgetTimer = repo_cfg:
if (repo_cfg.forgetConfig != null) then {
wantedBy = [ "timers.target" ];
timerConfig = repo_cfg.forgetConfig.timerConfig;
} else { };
mkCloneService = from_repo: clone_cfg: to_repo: {
name = "restic-clone-${from_repo.name}-${to_repo.name}";
value = lib.mkMerge [
{
description = "Restic copy from ${from_repo.name} to ${to_repo.name}";
serviceConfig = {
User = "restic";
Group = "restic";
LoadCredential = [
"from_password_file:${from_repo.cfg.passwordFile}"
];
};
environment = {
RESTIC_FROM_PASSWORD_FILE = "%d/from_password_file";
};
script = ''
set -xe
${pkgs.restic}/bin/restic copy \
--from-repo ${from_repo.cfg.path} \
--retry-lock 30m
'';
}
(mkRepoInfo to_repo.cfg)
];
};
mkCloneTimer = from_repo: clone_cfg: to_repo: {
name = "restic-clone-${from_repo.name}-${to_repo.name}";
value = {
wantedBy = [ "timers.target" ];
timerConfig = clone_cfg.timerConfig;
};
};
mapClones = fn: builtins.listToAttrs (lib.lists.flatten (lib.mapAttrsToList
(
from_repo_name: from_repo_cfg: (builtins.map
(
clone_cfg: (fn
{ name = from_repo_name; cfg = from_repo_cfg; }
clone_cfg
{ name = clone_cfg.repo; cfg = cfg.repos."${clone_cfg.repo}"; }
)
)
from_repo_cfg.clones)
)
cfg.repos));
in
{
services = {
caddy = {
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### is too late at starting. Upstream nixos caddy does restart on failure
### but it's prevented on exit code 1. Set the exit code to 0 (non-failure)
### to override this.
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
}
// lib.mapAttrs' (name: value: lib.attrsets.nameValuePair ("restic-forget-" + name) (mkForgetService name value)) cfg.repos
// mapClones mkCloneService;
timers = lib.mapAttrs' (name: value: lib.attrsets.nameValuePair ("restic-forget-" + name) (mkForgetTimer value)) cfg.repos
// mapClones mkCloneTimer;
};
};
}
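
mkRepoInfo is the switch between the two repo flavours: local repos carry path plus passwordFile and receive the password via systemd LoadCredential, while the cloud clones carry only an environmentFile that must itself supply RESTIC_REPOSITORY and credentials. A trimmed sketch of one generated clone unit (the name follows mkCloneService):

# systemd.services."restic-clone-128G-128G-wasabi":
#   LoadCredential  = [ "from_password_file:<128G passwordFile>" ]
#   EnvironmentFile = <128G-wasabi environmentFile>
#   script: restic copy --from-repo /var/lib/restic/128G --retry-lock 30m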

modules/services/tang.nix Normal file

@@ -0,0 +1,23 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.tang;
in
{
options.custom.services.tang = {
enable = lib.mkEnableOption "tang";
};
config = lib.mkIf cfg.enable {
services.tang = {
enable = true;
ipAddressAllow = [
"138.201.252.214/32"
"10.64.50.26/32"
"10.64.50.27/32"
"10.64.50.28/32"
"10.64.50.29/32"
];
};
};
}

modules/services/unifi.nix

@@ -10,20 +10,14 @@ in
     dataDir = lib.mkOption {
       type = lib.types.str;
       default = "/var/lib/unifi";
+      readOnly = true; # NixOS module only supports this directory
     };
   };

   config = lib.mkIf cfg.enable {
-    users.users.unifi = {
-      uid = config.ids.uids.unifi;
-      isSystemUser = true;
-      group = "unifi";
-      description = "UniFi controller daemon user";
-      home = "${cfg.dataDir}";
-    };
-    users.groups.unifi = {
-      gid = config.ids.gids.unifi;
-    };
+    # Fix dynamically allocated user and group ids
+    users.users.unifi.uid = config.ids.uids.unifi;
+    users.groups.unifi.gid = config.ids.gids.unifi;

     services.caddy = {
       enable = true;
@@ -38,21 +32,9 @@ in
       };
     };

-    virtualisation.oci-containers.containers = {
-      "unifi" = {
-        image = "lscr.io/linuxserver/unifi-controller:8.0.24-ls221";
-        environment = {
-          PUID = toString config.ids.uids.unifi;
-          PGID = toString config.ids.gids.unifi;
-          TZ = "Etc/UTC";
-        };
-        volumes = [ "${cfg.dataDir}:/config" ];
-        ports = [
-          "8080:8080"
-          "8443:8443"
-          "3478:3478/udp"
-        ];
-      };
+    services.unifi = {
+      enable = true;
+      unifiPackage = pkgs.unifi8;
     };
   };
 }

modules/services/zigbee2mqtt.nix

@@ -23,7 +23,7 @@ in
       enable = true;
       virtualHosts."http://zigbee2mqtt.home.ts.hillion.co.uk" = {
-        listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
+        listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
         extraConfig = "reverse_proxy http://127.0.0.1:15606";
       };
     };
@@ -75,7 +75,7 @@ in
     };

     services.restic.backups."zigbee2mqtt" = lib.mkIf cfg.backup {
-      repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/1.6T";
+      repository = "rest:https://restic.ts.hillion.co.uk/1.6T";
       user = "zigbee2mqtt";
       passwordFile = config.age.secrets."resilio/zigbee2mqtt/1.6T.key".path;

modules/shell/default.nix

@@ -1,7 +1,20 @@
 { pkgs, lib, config, ... }:
+let
+  cfg = config.custom.shell;
+in
 {
-  config = {
+  imports = [
+    ./update_scripts.nix
+  ];
+
+  options.custom.shell = {
+    enable = lib.mkEnableOption "shell";
+  };
+
+  config = lib.mkIf cfg.enable {
+    custom.shell.update_scripts.enable = true;
+
     users.defaultUserShell = pkgs.zsh;
     environment.systemPackages = with pkgs; [ direnv ];

modules/shell/update_scripts.nix

@@ -1,6 +1,8 @@
 { config, pkgs, lib, ... }:

 let
+  cfg = config.custom.shell.update_scripts;
+
   update = pkgs.writeScriptBin "update" ''
     #! ${pkgs.runtimeShell}
     set -e
@@ -50,7 +52,11 @@ let
   '';
 in
 {
-  config = {
+  options.custom.shell.update_scripts = {
+    enable = lib.mkEnableOption "update_scripts";
+  };
+
+  config = lib.mkIf cfg.enable {
     environment.systemPackages = [
       update
     ];

Deleted file

@@ -1,25 +0,0 @@
{ config, pkgs, lib, ... }:
{
config.age.secrets."spotify/11132032266" = {
file = ../../secrets/spotify/11132032266.age;
owner = "jake";
};
config.hardware.pulseaudio.enable = true;
config.users.users.jake.extraGroups = [ "audio" ];
config.users.users.jake.packages = with pkgs; [ spotify-tui ];
config.home-manager.users.jake.services.spotifyd = {
enable = true;
settings = {
global = {
username = "11132032266";
password_cmd = "cat ${config.age.secrets."spotify/11132032266".path}";
backend = "pulseaudio";
};
};
};
}

modules/ssh/default.nix Normal file

@@ -0,0 +1,56 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.ssh;
in
{
options.custom.ssh = {
enable = lib.mkEnableOption "ssh";
};
config = lib.mkIf cfg.enable {
users.users =
if config.custom.user == "jake" then {
"jake".openssh.authorizedKeys.keys = [
"sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBBwJH4udKNvi9TjOBgkxpBBy7hzWqmP0lT5zE9neusCpQLIiDhr6KXYMPXWXdZDc18wH1OLi2+639dXOvp8V/wgAAAAEc3NoOg== jake@beryllium-keys"
"sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBPPJtW19jOaUsjmxc0+QibaLJ3J3yxPXSXZXwKT0Ean6VeaH5G8zG+zjt1Y6sg2d52lHgrRfeVl1xrG/UGX8qWoAAAAEc3NoOg== jakehillion@jakehillion-mbp"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo"
];
} else { };
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHosts = {
# Global Internet hosts
"ssh.gitea.hillion.co.uk".publicKey = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==";
# Tailscale hosts
"boron.cx.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtcJ7HY/vjtheMV8EN2wlTw1hU53CJebGIeRJcSkzt5";
"be.lt.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILV3OSUT+cqFqrFHZGfn7/xi5FW3n1qjUFy8zBbYs2Sm";
"dancefloor.dancefloor.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY";
"gendry.jakehillion.terminals.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c";
"homeassistant.homeassistant.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux";
"li.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQWgcDFL9UZBDKHPiEGepT1Qsc4gz3Pee0/XVHJ6V6u";
"microserver.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw";
"router.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu";
"sodium.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQmG7v/XrinPmkTU2eIoISuU3+hoV4h60Bmbwd+xDjr";
"theon.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN59psLVu3/sQORA4x3p8H3ei8MCQlcwX5T+k3kBeBMf";
};
programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
};
}

modules/tailscale.nix Deleted file

@@ -1,65 +0,0 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.tailscale;
in
{
options.custom.tailscale = {
enable = lib.mkEnableOption "tailscale";
preAuthKeyFile = lib.mkOption {
type = lib.types.str;
};
advertiseRoutes = lib.mkOption {
type = with lib.types; listOf str;
default = [ ];
};
advertiseExitNode = lib.mkOption {
type = lib.types.bool;
default = false;
};
ipv4Addr = lib.mkOption { type = lib.types.str; };
ipv6Addr = lib.mkOption { type = lib.types.str; };
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ pkgs.tailscale ];
services.tailscale.enable = true;
networking.firewall.checkReversePath = lib.mkIf cfg.advertiseExitNode "loose";
systemd.services.tailscale-autoconnect = {
description = "Automatic connection to Tailscale";
# make sure tailscale is running before trying to connect to tailscale
after = [ "network-pre.target" "tailscale.service" ];
wants = [ "network-pre.target" "tailscale.service" ];
wantedBy = [ "multi-user.target" ];
# set this service as a oneshot job
serviceConfig.Type = "oneshot";
# have the job run this shell script
script = with pkgs; ''
# wait for tailscaled to settle
sleep 2
# check if we are already authenticated to tailscale
status="$(${tailscale}/bin/tailscale status -json | ${jq}/bin/jq -r .BackendState)"
if [ $status = "Running" ]; then # if so, then do nothing
exit 0
fi
# otherwise authenticate with tailscale
${tailscale}/bin/tailscale up \
--authkey "$(<${cfg.preAuthKeyFile})" \
--advertise-routes "${lib.concatStringsSep "," cfg.advertiseRoutes}" \
--advertise-exit-node=${if cfg.advertiseExitNode then "true" else "false"}
'';
};
};
}

modules/www/certs/blog.hillion.co.uk.pem Normal file

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDGTCCAsCgAwIBAgIUMOkPfgLpbA08ovrPt+deXQPpA9kwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ0MDBaFw0zOTA0MTAyMTQ0MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABNweW8IgrXj7Q64RxyK8s9XpbxJ8TbYVv7NALbWUahlT
QPlGX/5XoM3Z5AtISBi1irLEy5o6mx7ebNK4NmwzNlCjggEkMIIBIDAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFMy3oz9l3bwpjgtx6IqL9IH90PXcMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAdBgNV
HREEFjAUghJibG9nLmhpbGxpb24uY28udWswPAYDVR0fBDUwMzAxoC+gLYYraHR0
cDovL2NybC5jbG91ZGZsYXJlLmNvbS9vcmlnaW5fZWNjX2NhLmNybDAKBggqhkjO
PQQDAgNHADBEAiAgVRgo5V09uyMbz1Mevmxe6d2K5xvZuBElVYja/Rf99AIgZkm1
wHEq9wqVYP0oWTiEYQZ6dzKoSwxviOEZI+ttQRA=
-----END CERTIFICATE-----

modules/www/certs/gitea.hillion.co.uk.pem Normal file

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDHDCCAsGgAwIBAgIUMHdmb+Ef9YvVmCtliDhg1gDGt8cwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ1MDBaFw0zOTA0MTAyMTQ1MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABGn2vImTE+gpWx/0ELXue7cL0eGb+I2c9VbUYcy3TBJi
G7S+wl79MBM5+5G0wKhTpBgVpXu1/NHunfM97LGZb5ejggElMIIBITAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFI6dxFPItIKnNN7/xczMOtlTytuvMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAeBgNV
HREEFzAVghNnaXRlYS5oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0
dHA6Ly9jcmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZI
zj0EAwIDSQAwRgIhAKfRSEKCGNY5x4zUNzOy6vfxgDYPfkP6iW5Ha4gNmE+QAiEA
nTsGKr2EoqEdPtnB+wVrYMblWF7/or3JpRYGs6zD2FU=
-----END CERTIFICATE-----

modules/www/certs/hillion.co.uk.pem Normal file

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDFDCCArugAwIBAgIUedwIJx096VH/KGDgpAKK/Q8jGWUwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTIzMDBaFw0zOTA0MTAyMTIzMDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABIdc0hnQQP7tLADaCGXxZ+1BGbZ8aow/TtHl+aXDbN3t
2vVV2iLmsMbiPcJZ5e9Q2M27L8fZ0uPJP19dDvvN97SjggEfMIIBGzAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFJilRKL8wXskL/LmgH8BnIvLIpkEMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAYBgNV
HREEETAPgg1oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0dHA6Ly9j
cmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZIzj0EAwID
RwAwRAIgbexSqkt3pzCpnpqYXwC5Gmt+nG5OEqETQ6690kpIS74CIFQI3zXlx8zk
GB0BlaZdrraAQP7AuI8CcMd5vbQdnldY
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDJDCCAsmgAwIBAgIUaSXrL4UHFHxDvvnW1720aZkkBCkwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTUzMDBaFw0zOTA0MTAyMTUzMDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABOz/ljJJjKawHtILlD09YMwmAdhzxTfPPi61qw7R670T
Oe4/KA4zClCKfzqnVEZ4YonfgK8U6VqhLPI4crxUQk+jggEtMIIBKTAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFO7S2TbvL1kel0QH+sYfjD6v2L7oMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAmBgNV
HREEHzAdghtob21lYXNzaXN0YW50LmhpbGxpb24uY28udWswPAYDVR0fBDUwMzAx
oC+gLYYraHR0cDovL2NybC5jbG91ZGZsYXJlLmNvbS9vcmlnaW5fZWNjX2NhLmNy
bDAKBggqhkjOPQQDAgNJADBGAiEAgaiFVCBLVYKjTJV67qKOg1R1GBVszNF+9PCi
ZejJcjwCIQDtl9S3zCl/h8/7uYfk8dHg0Y6kwd5GVuu6HE67GWJ2Yg==
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDGzCCAsGgAwIBAgIUFUDTvq6L7SR3qKxaNh77g3XkJk8wCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ2MDBaFw0zOTA0MTAyMTQ2MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABGpSYrOqMuzCfE6qdpXqFze8RxWDcDSUFRYmotnp4cyK
i6ISovoK7YDKarrHRIvIrsNBaqk+0hjZpOhN/XpU16SjggElMIIBITAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFLoqUdEVGspJs/SGcV7pf2bCzqTrMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAeBgNV
HREEFzAVghNsaW5rcy5oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0
dHA6Ly9jcmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZI
zj0EAwIDSAAwRQIhANh3Ds0ZSZp3rEZ46z4sBp+WNQejnDhTCXt2OIRiCrecAiAB
oe21Oz1Pmqv0htFxNf1YbkgJMCoGfENlViuR0cUAJg==
-----END CERTIFICATE-----

modules/www/default.nix (new file, 10 lines)
@@ -0,0 +1,10 @@
{ config, lib, ... }:
{
imports = [
./global.nix
./home.nix
./iot.nix
./www-repo.nix
];
}
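
This default.nix only aggregates the www sub-modules; each still has to be switched on per host. A hedged sketch of what that looks like in a host configuration (`custom.www.home.enable` is the real option declared in modules/www/home.nix below; the surrounding host file is hypothetical):

{
  # e.g. in a host's configuration.nix: enable the LAN-only vhosts.
  custom.www.home.enable = true;
}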

@@ -10,19 +10,47 @@ in
   };
   config = lib.mkIf cfg.enable {
+    age.secrets =
+      let
+        mkSecret = domain: {
+          name = "caddy/${domain}.pem";
+          value = {
+            file = ../../secrets/certs/${domain}.pem.age;
+            owner = config.services.caddy.user;
+            group = config.services.caddy.group;
+          };
+        };
+      in
+      builtins.listToAttrs (builtins.map mkSecret [
+        "hillion.co.uk"
+        "blog.hillion.co.uk"
+        "gitea.hillion.co.uk"
+        "homeassistant.hillion.co.uk"
+        "links.hillion.co.uk"
+      ]);
+
     custom.www.www-repo.enable = true;
 
     services.caddy = {
       enable = true;
+      package = pkgs.unstable.caddy;
+
+      globalConfig = ''
+        email acme@hillion.co.uk
+      '';
 
       virtualHosts = {
         "hillion.co.uk".extraConfig = ''
+          tls ${./certs/hillion.co.uk.pem} ${config.age.secrets."caddy/hillion.co.uk.pem".path}
+
           handle /.well-known/* {
             header /.well-known/matrix/* Content-Type application/json
             header /.well-known/matrix/* Access-Control-Allow-Origin *
             respond /.well-known/matrix/server "{\"m.server\": \"matrix.hillion.co.uk:443\"}" 200
-            respond /.well-known/matrix/client `{"m.homeserver":{"base_url":"https://matrix.hillion.co.uk"}}`
+            respond /.well-known/matrix/client `${builtins.toJSON {
+              "m.homeserver" = { "base_url" = "https://matrix.hillion.co.uk"; };
+              "org.matrix.msc3575.proxy" = { "url" = "https://matrix.hillion.co.uk"; };
+            }}` 200
 
             respond 404
           }
@@ -32,20 +60,25 @@ in
           }
         '';
         "blog.hillion.co.uk".extraConfig = ''
+          tls ${./certs/blog.hillion.co.uk.pem} ${config.age.secrets."caddy/blog.hillion.co.uk.pem".path}
           root * /var/www/blog.hillion.co.uk
           file_server
         '';
         "homeassistant.hillion.co.uk".extraConfig = ''
+          tls ${./certs/homeassistant.hillion.co.uk.pem} ${config.age.secrets."caddy/homeassistant.hillion.co.uk.pem".path}
           reverse_proxy http://${locations.services.homeassistant}:8123
         '';
         "gitea.hillion.co.uk".extraConfig = ''
+          tls ${./certs/gitea.hillion.co.uk.pem} ${config.age.secrets."caddy/gitea.hillion.co.uk.pem".path}
           reverse_proxy http://${locations.services.gitea}:3000
         '';
         "matrix.hillion.co.uk".extraConfig = ''
+          reverse_proxy /_matrix/client/unstable/org.matrix.msc3575/sync http://${locations.services.matrix}:8009
           reverse_proxy /_matrix/* http://${locations.services.matrix}:8008
           reverse_proxy /_synapse/client/* http://${locations.services.matrix}:8008
         '';
         "links.hillion.co.uk".extraConfig = ''
+          tls ${./certs/links.hillion.co.uk.pem} ${config.age.secrets."caddy/links.hillion.co.uk.pem".path}
           redir https://matrix.to/#/@jake:hillion.co.uk
         '';
       };
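
Two details in this hunk are worth unpacking. Caddy's `tls` directive takes a certificate file followed by a key file, so each vhost pairs the in-repo Cloudflare origin certificate (the PEM files added above) with its age-encrypted key, and the `owner`/`group` on each secret make the decrypted key path readable by the caddy user. The per-domain secrets are generated with `builtins.listToAttrs`, which folds a list of `{ name, value }` pairs into one attribute set. A standalone sketch of how that evaluates (the `owner` value here is a stand-in for the fuller attrset used in the module):

let
  # mkSecret restated outside the module so it can be evaluated on its own;
  # only the resulting attrset shape matters here.
  mkSecret = domain: {
    name = "caddy/${domain}.pem";
    value = { owner = "caddy"; };
  };
in
builtins.listToAttrs (builtins.map mkSecret [ "hillion.co.uk" "blog.hillion.co.uk" ])
# => {
#      "blog.hillion.co.uk" entry: "caddy/blog.hillion.co.uk.pem" = { owner = "caddy"; };
#      "hillion.co.uk" entry:      "caddy/hillion.co.uk.pem" = { owner = "caddy"; };
#    }

The new matrix.hillion.co.uk route works because Caddy orders same-named directives by path-matcher specificity: the longer /_matrix/client/unstable/org.matrix.msc3575/sync matcher wins for sliding-sync requests and sends them to port 8009, while everything else under /_matrix/* still reaches Synapse on 8008.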

modules/www/home.nix (new file, 27 lines)
@@ -0,0 +1,27 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.www.home;
locations = config.custom.locations.locations;
in
{
options.custom.www.home = {
enable = lib.mkEnableOption "home";
};
config = lib.mkIf cfg.enable {
services.caddy = {
enable = true;
virtualHosts = {
"homeassistant.home.hillion.co.uk".extraConfig = ''
bind 10.64.50.25
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
reverse_proxy http://${locations.services.homeassistant}:8123
'';
};
};
};
}
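
Here `bind 10.64.50.25` keeps the vhost off the host's other interfaces, and the `ca` subdirective points Caddy's ACME client at an internal directory URL (the /acme/acme/directory path suggests a smallstep step-ca with a provisioner named "acme", though the repo doesn't show that here). Clients on the LAN will only accept these certificates if they trust that CA's root. A minimal sketch of how a NixOS client might do so, with a placeholder in place of the real root certificate:

{
  # security.pki.certificates appends PEM data to the system trust store.
  security.pki.certificates = [
    ''
      -----BEGIN CERTIFICATE-----
      (internal CA root for ca.ts.hillion.co.uk goes here -- placeholder)
      -----END CERTIFICATE-----
    ''
  ];
}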

Some files were not shown because too many files have changed in this diff.