Compare commits


179 Commits

Author SHA1 Message Date
d5c2f8d543 router: setup cameras vlan
All checks were successful
flake / flake (push) Successful in 1m15s
2024-09-17 09:20:27 +01:00
1189a41df9 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m43s
2024-09-15 16:01:08 +00:00
39730d2ec3 macbook: add shell utilities
All checks were successful
flake / flake (push) Successful in 1m16s
2024-09-14 02:39:26 +01:00
ac6f285400 resilio: require mounts be available
All checks were successful
flake / flake (push) Successful in 1m15s
Without this, resilio fails on boot on tywin.storage, where the paths
are on a ZFS array that is reliably mounted later than the resilio
service attempts to start.
2024-09-14 02:30:20 +01:00
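The fix described above maps naturally onto systemd's mount dependencies. A minimal sketch of what such a NixOS override might look like — the mount path and exact option placement are assumptions, not the repo's actual code:

```nix
# Hypothetical sketch: make the resilio unit wait for its ZFS-backed
# paths. RequiresMountsFor both orders the unit after the mounts and
# fails it if they cannot be mounted, so it never starts before the
# array is ready.
{
  systemd.services.resilio.unitConfig = {
    RequiresMountsFor = [ "/data/sync" ]; # assumed path on the ZFS array
  };
}
```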
e4b8fd7438 chore(deps): update determinatesystems/nix-installer-action action to v14
All checks were successful
flake / flake (push) Successful in 1m27s
2024-09-10 00:00:51 +00:00
24be3394bc chore(deps): update determinatesystems/magic-nix-cache-action action to v8
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-09 23:00:50 +00:00
ba053c539c boron: enable podman
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-06 19:04:25 +01:00
3aeeb69c2b nix-darwin: add macbook
All checks were successful
flake / flake (push) Successful in 1m13s
2024-09-05 00:50:02 +01:00
85246af424 caddy: update to unstable
All checks were successful
flake / flake (push) Successful in 1m13s
The default config for automatic ACME no longer works in Caddy <2.8.0.
This is due to changes in ZeroSSL's auth. Update to unstable Caddy,
which is new enough to renew certs again.

Context: https://github.com/caddyserver/caddy/releases/tag/v2.8.0

Add `pkgs.unstable` as an overlay as recommended on the NixOS wiki. This
is needed here as Caddy must be runnable on all architectures.
2024-09-05 00:04:08 +01:00
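The `pkgs.unstable` overlay mentioned above is conventionally wired up along these lines; the flake input name `inputs.nixpkgs-unstable` and the module plumbing are assumptions about this repo:

```nix
# Hypothetical sketch: expose nixpkgs-unstable as pkgs.unstable via an
# overlay, then take Caddy (>= 2.8.0, with the ZeroSSL auth fix) from
# it. Importing per-system keeps it runnable on all architectures.
{
  nixpkgs.overlays = [
    (final: prev: {
      unstable = import inputs.nixpkgs-unstable {
        inherit (prev) system;
      };
    })
  ];
  services.caddy.package = pkgs.unstable.caddy;
}
```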
ba7a39b66e chore(deps): pin dependencies
All checks were successful
flake / flake (push) Successful in 1m16s
2024-09-02 23:00:12 +00:00
df31ebebf8 boron: bump tmpfs to 100% of RAM
All checks were successful
flake / flake (push) Successful in 1m18s
2024-08-31 22:04:38 +01:00
2f3a33ad8e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-30 19:01:46 +00:00
343b34b4dc boron: support sched_ext in kernel
All checks were successful
flake / flake (push) Successful in 1m45s
2024-08-30 18:52:31 +01:00
264799952e bathroom_light: trust switchbot if more recently updated
All checks were successful
flake / flake (push) Successful in 1m13s
2024-08-30 18:46:38 +01:00
5cef32cf1e gitea actions: use cache for nix
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-30 18:39:02 +01:00
6cc70e117d tywin: mount d7
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-22 15:17:11 +01:00
a52aed5778 gendry: use zram swap
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-18 13:51:28 +01:00
70b53b5c01 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-18 10:35:15 +00:00
3d642e2320 boron: move postgresqlBackup to disk to reduce ram pressure
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-09 23:37:16 +01:00
41d5f0cc53 homeassistant: add sonos
All checks were successful
flake / flake (push) Successful in 1m17s
2024-08-08 18:31:10 +01:00
974c947130 homeassistant: add smartthings
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-04 18:15:34 +01:00
8a9498f8d7 homeassistant: expose sleep_mode to google
All checks were successful
flake / flake (push) Successful in 1m15s
2024-08-04 17:56:32 +01:00
2ecdafe1cf chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m32s
2024-08-03 12:15:23 +01:00
db5dc5aee6 step-ca: enable server on sodium and load root certs
All checks were successful
flake / flake (push) Successful in 1m14s
2024-08-01 23:28:22 +01:00
f96f03ba0c boron: update to Linux 6.10
All checks were successful
flake / flake (push) Successful in 1m13s
2024-07-27 15:16:59 +01:00
e81cad1670 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m25s
2024-07-26 13:40:49 +00:00
67c8e3dcaf homeassistant: migrate to basnijholt/adaptive-lighting
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-22 11:16:34 +01:00
1052379119 unifi: switch to nixos module
All checks were successful
flake / flake (push) Successful in 1m24s
2024-07-19 16:43:53 +01:00
0edb8394c8 tywin: mount d6
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 22:19:41 +01:00
bbab551b0f be.lt: connect to Hillion WPA3 Network
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 17:10:08 +01:00
13c937b196 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m15s
2024-07-17 14:14:27 +00:00
6bdaca40e0 tmux: index from 0 and always allow attach
All checks were successful
flake / flake (push) Successful in 1m14s
2024-07-17 15:02:19 +01:00
462f0eecf4 gendry: allow luks discards
All checks were successful
flake / flake (push) Successful in 1m15s
2024-07-17 09:33:33 +01:00
5dcf3b8e3f chia: update to 2.4.1
All checks were successful
flake / flake (push) Successful in 1m13s
2024-07-10 10:01:16 +01:00
b0618cd3dc chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m29s
2024-07-06 22:31:08 +00:00
a9829eea9e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m30s
2024-06-23 10:18:56 +00:00
cfd64e9a73 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m29s
2024-06-16 12:13:03 +00:00
b3af1739a8 chore(deps): update actions/checkout action to v4.1.7
All checks were successful
flake / flake (push) Successful in 1m14s
2024-06-13 23:01:06 +00:00
cde6bdd498 tywin: enable clevis/tang for boot
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-10 22:34:28 +01:00
bd5efa3648 tywin: encrypt root disk
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-09 23:14:44 +01:00
30679f9f4b sodium: add cache directory on the sd card
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-02 22:41:49 +01:00
67644162e1 sodium: rekey
All checks were successful
flake / flake (push) Successful in 1m12s
accidentally ran `rm -r /data`...
2024-06-02 21:45:03 +01:00
81c77de5ad chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m13s
2024-06-02 13:15:18 +00:00
a0f93c73d0 sodium.pop: add rpi5 host
All checks were successful
flake / flake (push) Successful in 1m22s
2024-05-25 22:56:27 +01:00
78705d440a homeassistant: only switch bathroom light when it is already on
All checks were successful
flake / flake (push) Successful in 1m18s
Although the system now knows whether the bathroom light is on, it
switches the switch every time the light should be turned off,
regardless of whether it is already off. Because this is a
battery-powered device that performs a physical movement, this drains
the battery very quickly. Adjust the system to only switch the light
off if it thinks it is on, even though this has the potential for
desyncs.
2024-05-25 22:03:11 +01:00
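Expressed as a Home Assistant automation (here written as NixOS `services.home-assistant.config`, with every entity name and timing assumed for illustration), the guard is a single state condition:

```nix
# Hypothetical sketch: only actuate the battery-powered switch when the
# light is believed to be on, trading a possible desync for battery
# life. All entity_ids and the 5-minute delay are illustrative.
{
  services.home-assistant.config.automation = [{
    alias = "Bathroom light off";
    trigger = [{
      platform = "state";
      entity_id = "binary_sensor.bathroom_motion";
      to = "off";
      for = "00:05:00";
    }];
    # The guard: skip the physical switch press if the light is off.
    condition = [{
      condition = "state";
      entity_id = "light.bathroom";
      state = "on";
    }];
    action = [{
      service = "switch.turn_off";
      target.entity_id = "switch.bathroom_light";
    }];
  }];
}
```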
3f829236a2 homeassistant: read bathroom light status from motion sensor
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-25 17:03:57 +01:00
7b221eda07 theon: stop scripting networking
All checks were successful
flake / flake (push) Successful in 1m19s
Unsure why this host is using systemd-networkd, but leave that unchanged
and have NixOS know about it to prevent a warning about loss of
connectivity on build.
2024-05-25 16:40:19 +01:00
22305815c6 matrix: fix warning about renamed sliding sync
All checks were successful
flake / flake (push) Successful in 1m17s
2024-05-25 16:33:05 +01:00
fa493123fc router: dhcp: add APC vendor specific cookie
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-24 22:14:16 +01:00
62e61bec8a matrix: add sliding sync
All checks were successful
flake / flake (push) Successful in 1m18s
2024-05-24 10:18:30 +01:00
50d70ed8bc boron: update kernel to 6.9
All checks were successful
flake / flake (push) Successful in 1m19s
2024-05-23 22:41:18 +01:00
796bbc7a68 chore(deps): update nixpkgs to nixos-24.05 (#271)
All checks were successful
flake / flake (push) Successful in 1m20s
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [nixpkgs](https://github.com/NixOS/nixpkgs) | major | `nixos-23.11` -> `nixos-24.05` |

---

### Release Notes

<details>
<summary>NixOS/nixpkgs (nixpkgs)</summary>

### [`nixos-24.05`](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

[Compare Source](https://github.com/NixOS/nixpkgs/compare/nixos-23.11...nixos-24.05)

</details>

---


This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Co-authored-by: Jake Hillion <jake@hillion.co.uk>
Reviewed-on: #271
Co-authored-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
Co-committed-by: Renovate Bot <renovate-bot@noreply.gitea.hillion.co.uk>
2024-05-23 22:40:58 +01:00
8123653a92 jorah.cx: delete
All checks were successful
flake / flake (push) Successful in 1m17s
2024-05-21 22:43:56 +01:00
55ade830a8 router: add more dhcp reservations
All checks were successful
flake / flake (push) Successful in 1m12s
2024-05-21 20:09:26 +01:00
a9c9600b14 matrix: move jorah->boron
All checks were successful
flake / flake (push) Successful in 2m20s
2024-05-18 19:14:39 +01:00
eae5e105ff unifi: move jorah->boron
All checks were successful
flake / flake (push) Successful in 1m21s
2024-05-18 16:52:22 +01:00
f1fd6ee270 gitea: fix ips in iptables rules
All checks were successful
flake / flake (push) Successful in 1m10s
2024-05-18 15:34:43 +01:00
1dc370709a chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m23s
2024-05-18 13:03:00 +00:00
de905e23a8 chore(deps): update actions/checkout action to v4.1.6
All checks were successful
flake / flake (push) Successful in 2m39s
2024-05-17 00:01:14 +00:00
9247ae5d91 chore(deps): update cachix/install-nix-action action to v27
All checks were successful
flake / flake (push) Successful in 2m33s
2024-05-16 23:01:13 +00:00
7298955391 tywin: enable automatic btrfs scrubbing
All checks were successful
flake / flake (push) Successful in 2m16s
2024-05-15 20:53:57 +01:00
f59824ad62 gitea: move jorah->boron
All checks were successful
flake / flake (push) Successful in 2m16s
2024-05-12 13:11:54 +01:00
bff93529aa www.global: move jorah->boron
All checks were successful
flake / flake (push) Successful in 1m56s
2024-05-12 12:11:15 +01:00
13bfe6f787 boron: enable authoritative dns
All checks were successful
flake / flake (push) Successful in 2m4s
2024-05-10 22:44:48 +01:00
ad8c8b9b19 boron: enable version_tracker
All checks were successful
flake / flake (push) Successful in 2m5s
2024-05-10 22:12:49 +01:00
b7c07d0107 boron: enable gitea actions
All checks were successful
flake / flake (push) Successful in 2m28s
2024-05-10 21:52:48 +01:00
9cc389f865 boron: remove folding@home
All checks were successful
flake / flake (push) Successful in 1m58s
The update from 23.11 to 24.05 brought a new folding@home version that
doesn't work out of the box, there is no documentation on writing XML
configs manually, and even the web UI now redirects to a nonsense
public website. Unfortunately this means letting this box idle rather
than doing useful work, for now at least.
2024-05-10 21:02:16 +01:00
2153c22d7f chore(deps): update actions/checkout action to v4.1.5
All checks were successful
flake / flake (push) Successful in 2m1s
2024-05-09 23:00:45 +00:00
a4235b2581 boron: move to kernel 6.8 and re-image
All checks were successful
flake / flake (push) Successful in 1m58s
The extremely modern hardware on this server appears to experience
kernel crashes with the default NixOS 23.11 kernel (6.1) and the
default NixOS 24.05 kernel (6.6). Empirical testing shows the server
staying up on Ubuntu 22's 6.2 and an explicit NixOS kernel 6.8.

The server was wiped during this testing so now needs reimaging.
2024-05-08 21:11:09 +01:00
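Pinning a newer kernel than the release default is a one-line change in NixOS; a sketch of the likely shape (the exact package attribute in this repo is an assumption):

```nix
# Hypothetical sketch: pin kernel 6.8 explicitly rather than taking the
# NixOS 24.05 default of 6.6, which crashed on this hardware.
{
  boot.kernelPackages = pkgs.linuxPackages_6_8;
}
```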
36ce6ca185 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 2m23s
2024-05-08 16:39:43 +00:00
e3887e320e tywin: add zram swap
All checks were successful
flake / flake (push) Successful in 1m56s
2024-05-06 23:25:08 +01:00
a272cd0661 downloads: add explicit nameservers
All checks were successful
flake / flake (push) Successful in 1m48s
2024-05-06 00:07:25 +01:00
1ca4daab9c locations: move attrset into config block
All checks were successful
flake / flake (push) Successful in 1m42s
2024-04-28 10:39:40 +01:00
745ea58dec homeassistant: update trusted proxies
All checks were successful
flake / flake (push) Successful in 1m46s
2024-04-27 19:14:12 +01:00
348bca745b jorah: add authoritative dns server
All checks were successful
flake / flake (push) Successful in 1m44s
2024-04-27 18:54:46 +01:00
0ef24c14e7 tailscale: update to included nixos module
All checks were successful
flake / flake (push) Successful in 1m43s
2024-04-27 15:36:45 +01:00
d9233021c7 add enable options for modules/common/default
All checks were successful
flake / flake (push) Successful in 2m9s
2024-04-27 13:46:06 +01:00
b39549e1a9 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 2m11s
2024-04-27 11:00:24 +00:00
8fdd915e76 router.home: enable unbound dns server
All checks were successful
flake / flake (push) Successful in 2m0s
2024-04-26 21:40:17 +01:00
62d62500ae renovate: fix gitea actions schedule
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m59s
Also pin the patch version of Gitea actions. This should treat each
future update as an update rather than a relock, which should surface
the changelog in the PR.
2024-04-25 19:40:00 +01:00
b012d48e1d chore(deps): update actions/checkout digest to 0ad4b8f
All checks were successful
flake / flake (push) Successful in 1m55s
2024-04-25 15:01:12 +00:00
eba1dae06b chia: update to 2.2.1
All checks were successful
flake / flake (push) Successful in 1m59s
2024-04-24 23:36:51 +01:00
b6ef41cae0 renovate: auto-merge github actions
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m49s
2024-04-23 22:24:28 +01:00
700ca88feb chore(deps): pin cachix/install-nix-action action to 8887e59
All checks were successful
flake / flake (push) Successful in 1m52s
2024-04-23 19:58:42 +00:00
1c75fa88a7 boron.cx: add new dedicated server
All checks were successful
flake / flake (push) Successful in 1m49s
2024-04-23 20:45:44 +01:00
c3447b3ec9 renovate: extend recommended config and pin github actions
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m37s
2024-04-23 20:27:40 +01:00
5350581676 gitea.actions: actually check formatting
All checks were successful
flake / flake (push) Successful in 1m38s
2024-04-23 20:23:54 +01:00
4d1521e4b4 be.lt: add beryllium laptop
All checks were successful
flake / flake (push) Successful in 1m43s
2024-04-21 17:15:14 +01:00
88b33598d7 microserver.parents -> li.pop
All checks were successful
flake / flake (push) Successful in 1m30s
2024-04-20 13:45:00 +01:00
4a09f50889 chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m35s
2024-04-19 17:02:52 +00:00
cf76a055e7 renovate: always rebase and update schedule
All checks were successful
renovate/config-validation Validation Successful
flake / flake (push) Successful in 1m30s
2024-04-17 23:22:29 +01:00
f4f6c66098 jorah: enable zramSwap
All checks were successful
flake / flake (push) Successful in 1m29s
2024-04-17 23:04:04 +01:00
2c432ce986 jorah: start folding@home
All checks were successful
flake / flake (push) Successful in 1m9s
2024-04-17 22:55:52 +01:00
d6b15a1f25 known_hosts: add jorah and theon
All checks were successful
flake / flake (push) Successful in 1m10s
2024-04-14 14:35:38 +01:00
bd34d0e3ad chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 1m34s
2024-04-14 11:02:06 +00:00
52caf6edf9 gitea.actions: nixify basic docker runner
All checks were successful
flake / flake (push) Successful in 1m37s
2024-04-14 00:09:28 +01:00
016d0e61b5 www: proxy some domains via cloudflare
All checks were successful
flake / flake (push) Successful in 3m38s
2024-04-13 23:02:22 +01:00
b4a33bb6b2 jorah: fix dual networking setup
All checks were successful
flake / flake (push) Successful in 3m35s
2024-04-13 16:45:20 +01:00
8cee990f54 cleanup references to server stranger
All checks were successful
flake / flake (push) Successful in 3m34s
2024-04-10 21:38:08 +01:00
59e5717e00 vm.strangervm: delete
All checks were successful
flake / flake (push) Successful in 3m39s
2024-04-07 21:07:02 +01:00
f2fe064f72 mastodon: stop running
All checks were successful
flake / flake (push) Successful in 4m57s
2024-04-07 19:08:35 +01:00
dd76435ec3 drone: remove server
All checks were successful
flake / flake (push) Successful in 5m9s
2024-04-07 19:00:12 +01:00
85e5c9d00e chore(deps): lock file maintenance
All checks were successful
flake / flake (push) Successful in 5m8s
2024-04-07 16:12:32 +00:00
ca1751533c update_script: escape quote
All checks were successful
flake / flake (push) Successful in 5m8s
2024-04-07 17:01:56 +01:00
8900423527 flake: deduplicate home-manager
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 16:37:43 +01:00
1ae3cf0c41 ci: ignore tags
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 16:28:34 +01:00
72898701da update_script: remove accidental subshell from echo
All checks were successful
flake / flake (push) Successful in 5m9s
2024-04-07 16:13:13 +01:00
3c693ee42f convert drone workflow to gitea actions
All checks were successful
flake / flake (push) Successful in 5m7s
2024-04-07 15:07:40 +01:00
9752e63f09 all: add easy update script
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-07 14:11:58 +01:00
d4fb381fcf microserver.parents: enable zram
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-05 11:09:48 +01:00
e812e96afc chore(deps): lock file maintenance
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-04 21:19:23 +00:00
d3fb88a328 gitea: update settings
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 20:28:46 +01:00
804aa10048 renovate configuration
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 20:22:10 +01:00
8f493d1335 move renovate.json to default place
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-04-01 20:03:52 +01:00
9674e6651d renovate: initial config
All checks were successful
continuous-integration/drone/push Build is passing
2024-04-01 19:34:53 +01:00
88378c3179 deluge: update config options
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-28 22:30:26 +00:00
d682d5434b add links. pointing to matrix
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-27 22:37:31 +00:00
790d0a8a6b homeassistant: add switchbot component
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-18 21:26:34 +00:00
78a024a924 add homeassistant
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 19:46:22 +00:00
5e725b14bb impermanence: fix zsh history file
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 15:00:34 +00:00
7f25cab5f8 www: cleanup emby
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 13:44:07 +00:00
b0e4e2cca1 flake: update 16th March 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-03-16 13:43:22 +00:00
80b4305e60 scripts: add update_nixpkgs
All checks were successful
continuous-integration/drone/push Build is passing
Add a simple script to update the nix shell/run nixpkgs to match the
flake's nixpkgs-unstable input. This avoids having it set to the
default of nixpkgs-unstable and downloading 30MB tarballs all the time.
2024-03-01 14:44:25 +00:00
90cbec88db flake: update 28th February 2024
All checks were successful
continuous-integration/drone/push Build is passing
2024-02-28 13:40:55 +00:00
89dade473a theon: add host
All checks were successful
continuous-integration/drone/push Build is passing
2024-02-14 23:41:07 +00:00
d7398e38df flake: update to nixpkgs 2311
All checks were successful
continuous-integration/drone/push Build is passing
2024-02-10 15:34:54 +00:00
fc599096b4 chia: migrate to docker
All checks were successful
continuous-integration/drone/push Build is passing
Chia was pulled from the nixpkgs tree
(https://github.com/NixOS/nixpkgs/pull/270254) and the alternative
provided, `chia.nix`, still hasn't landed v2
(https://github.com/0xbbjubjub/chia.nix).

Switch to a more stable container release even if it's heavier than a
nixpkg. Hopefully at some point in the future the Nix build will
stabilise.

The latest docker package was selected from
https://github.com/Chia-Network/chia-docker/pkgs/container/chia -
electing to update this manually for determinism.
2024-02-08 23:33:46 +00:00
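Running the pinned container under NixOS's oci-containers module would look roughly like this; the tag and data path are illustrative assumptions, pinned by hand as the commit body describes:

```nix
# Hypothetical sketch: run the ghcr.io Chia container, pinning the tag
# manually for determinism rather than tracking a floating :latest.
{
  virtualisation.oci-containers.containers.chia = {
    image = "ghcr.io/chia-network/chia:2.1.4"; # assumed pinned tag
    volumes = [ "/data/chia:/root/.chia" ];    # assumed data path
  };
}
```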
ec4f9f8af4 drone: stop running on PRs
All checks were successful
continuous-integration/drone/push Build is passing
Gitea settings for this repo were recently changed to require
explicitly rebasing a PR if it isn't already based on main before
merging. This makes the drone PR run redundant, and it's really slow to
run multiple in parallel on the current runner.
2024-02-07 23:51:53 +00:00
26908c8b77 router: switch to kea
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-02-07 23:23:51 +00:00
da8f4bb5a5 router: enable serial console on ttyS0
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-02-07 22:33:51 +00:00
a1e4578ee1 ssh: fix github known hosts
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
Using a pkgs.writeText causes an import at evaluation time instead of
just build time. This means that no host running `nix flake check` can
check all configurations if you have mixed architectures in a flake.

For some reason I've been getting away with this. This stopped when
switching to nixos-2311. Move the known hosts with a single key into the
NixOS config directly and put the GitHub keys in a real file. These
can't go into `.knownHosts` directly as it only supports one key per
host (sigh).

Reference: https://github.com/NixOS/nix/issues/4265
2024-02-06 22:39:49 +00:00
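The workaround described above — single keys inlined into the NixOS config, GitHub's multiple keys kept in a real file referenced by path — might look like the following sketch (hostnames, keys, and file name are assumptions):

```nix
# Hypothetical sketch: inline publicKey for hosts with one key; GitHub's
# several keys live in a plain file checked into the repo and referenced
# by path, not via pkgs.writeText, so `nix flake check` does not need an
# eval-time build on every architecture.
{
  programs.ssh.knownHosts."gitea.hillion.co.uk".publicKey =
    "ssh-ed25519 AAAA..."; # assumed key, truncated

  # .knownHosts only supports one key per host; extra files do not.
  programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
}
```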
4c3b948beb remove darwin
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
I previously had one darwin host, `jakehillion-mbp-m1-13`. It never
worked right and I no longer own the machine. Clean up all darwin
references; darwin can be added from scratch if a machine appears in
the future.
2024-02-06 22:14:58 +00:00
f176a9e4d5 impermanence: conditionally bind mount container storage
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-02-06 21:34:16 +00:00
d0dabc18f7 drone: update nix
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-02-04 11:55:43 +01:00
c54f4f8166 install sapling
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-02-04 11:08:20 +01:00
85843bbd55 tywin: remove storj
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
The returns from Storj have been diminishing and the database component
is really hard on the drives. Disable it for now, and I'll delete the
storage. If this makes sense again in the future it will involve
setting up new nodes.
2024-01-19 22:58:12 +00:00
013de46aaa flake: update 16th January 2024
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2024-01-16 23:30:24 +00:00
2032b7693a unifi: update container to final revision
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
This is the final revision of
https://github.com/linuxserver/docker-unifi-controller

Future updates should switch to
https://github.com/linuxserver/docker-unifi-network-application

This is a pain and I'm not doing it now: it involves running mongodb
manually, which is awful. Two options:
1. Switch to the new docker container.
2. Wait until NixOS natively supports a version later than 8.0.24 and
   switch to that.
2024-01-16 22:57:21 +00:00
104ea7f0cb tywin: temporarily remove d0
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
Drive d0 is seriously rumbling. Remove it so this server can boot
without it, and kill the storj node on it too.
2024-01-14 23:21:49 +00:00
bc5d370d0b add gitea
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2023-12-31 00:06:51 +00:00
8cdd3d6d6c flake: update 16th December 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-16 18:15:24 +00:00
5a6151306c add unifi
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-10 23:50:14 +00:00
785a17059d drone.server: modularise
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-08 23:42:55 +00:00
89374c44dc tmp: run unifi from vm.strangervm for colocation
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-08 22:21:27 +00:00
5d13643ee9 tywin: increase storj allocation to 1500GB
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-08 21:26:25 +00:00
126424ad12 storj: update to 1.94.1
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-12-08 17:24:13 +00:00
af1d0f8810 secrets: rekey restic/128G
Some checks reported errors
continuous-integration/drone/pr Build was killed
continuous-integration/drone/push Build was killed
2023-11-26 23:54:37 +00:00
82c98f4685 matrix: migrate vm.strangervm->jorah
Some checks reported errors
continuous-integration/drone/pr Build was killed
continuous-integration/drone/push Build was killed
2023-11-26 23:47:22 +00:00
2e27067660 vm: remove resilio sync
Some checks reported errors
continuous-integration/drone/pr Build was killed
continuous-integration/drone/push Build was killed
2023-11-26 12:32:52 +00:00
f047111de7 www/global: migrate vm.strangervm->jorah
Some checks reported errors
continuous-integration/drone/pr Build was killed
continuous-integration/drone/push Build was killed
2023-11-26 12:24:54 +00:00
6ee3e2f095 ep1: add static ips
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-11-21 20:09:02 +00:00
51a849b9c8 flake: update 18th November 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-11-18 14:55:16 +00:00
cd79b1e60a tywin: increase storj allocation to 1250GB
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-11-13 10:40:46 +00:00
34e042f68b flake: update 2nd November 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-11-02 21:38:30 +00:00
3a92fe8a7f tywin: mount /mnt/d6
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-11-02 20:55:41 +00:00
1945294218 jorah: auto scrub btrfs filesystems
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-10-24 22:09:05 +01:00
41b722860d matrix: add registration shared secret for cli tool
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-22 01:01:48 +01:00
6e748ec05f downloads: improve lo setup
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-10-21 22:58:33 +01:00
d3dc82a150 flake: update 13th October 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-10-13 18:50:54 +01:00
cd32b94c75 storj: update to 1.89.5
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-10-05 09:34:04 -06:00
496d816f12 tywin: add mnt/d4 and mnt/d5
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-23 13:33:05 +01:00
ceedaa852f tywin: enable chia on mnt/d3
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-16 16:11:18 +01:00
8bc57eb583 vm.strangervm: remove version_tracker
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-16 15:38:50 +01:00
7fc95b98d2 flake: update 16th September 2023
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2023-09-16 14:16:26 +01:00
754d770e53 chia: move database to ssd
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-14 23:36:03 +01:00
6a3a5cd416 jorah: move version_tracker
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-14 21:13:45 +01:00
74d687af40 jorah: enable impermanence
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-11 21:41:04 +01:00
ceab3b50a8 version_tracker: switch to main
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-10 14:19:45 +01:00
538706d3f7 jorah: add host
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-10 13:41:19 +01:00
05ae8bb0f2 flake: add flake-utils input
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-09 22:24:27 +01:00
1034d6a568 tywin: storj: enable on d3
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-09-08 21:38:57 +01:00
679e23eb62 flake: update 30th August 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-30 12:50:58 +01:00
ea9ab489e6 tywin: mnt d3
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-28 19:17:49 +01:00
241b8794e4 storj: update to 1.84.2
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-23 11:09:22 +01:00
e386051f31 flake: update 19th August 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-23 10:56:55 +01:00
15f0efc38a awesome: add plain terminal keybind
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-06 15:52:17 +01:00
9ee329b136 flake: update 5th August 2023
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-05 19:51:05 +01:00
571e8c5893 router: caddy: restart on failure
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2023-08-03 21:55:34 +01:00
e85d2699f3 router: enable netdata
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/pr Build is passing
2023-08-03 21:23:46 +01:00
133 changed files with 3130 additions and 1140 deletions


@@ -1,26 +0,0 @@
---
kind: pipeline
type: docker
name: check

steps:
  - name: lint
    image: nixos/nix:2.16.1
    commands:
      - nix --extra-experimental-features 'nix-command flakes' fmt
      - git diff --exit-code
  - name: check
    image: nixos/nix:2.16.1
    commands:
      - nix --extra-experimental-features 'nix-command flakes' flake check

trigger:
  event:
    exclude:
      - tag
---
kind: signature
hmac: 27c93405b251bb8bc80c82d7271702f80753ff63a0422678e62bbe2c4a025840
...


@@ -0,0 +1,23 @@
name: flake

on:
  push:
    branches:
      - '**'
    tags-ignore:
      - '**'

jobs:
  flake:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
      - uses: DeterminateSystems/nix-installer-action@da36cb69b1c3247ad7a1f931ebfd954a1105ef14 # v14
      - uses: DeterminateSystems/magic-nix-cache-action@87b14cf437d03d37989d87f0fa5ce4f5dc1a330b # v8
      - name: lint
        run: |
          nix fmt
          git diff --exit-code
      - name: flake check
        run: nix flake check --all-systems
        timeout-minutes: 10


@@ -10,3 +10,4 @@ Raspberry Pi images that support Tailscale and headless SSH can be built using a
nixos-generate -f sd-aarch64-installer --system aarch64-linux -c hosts/microserver.home.ts.hillion.co.uk/default.nix
cp SOME_OUTPUT out.img.zst
Alternatively, a Raspberry Pi image with headless SSH can be easily built using the logic in [this repo](https://github.com/Robertof/nixos-docker-sd-image-builder/tree/master).


@@ -0,0 +1,27 @@
{ config, pkgs, ... }:

{
  config = {
    system.stateVersion = 4;

    networking.hostName = "jakehillion-mba-m2-15";

    nix = {
      useDaemon = true;
    };

    programs.zsh.enable = true;
    security.pam.enableSudoTouchIdAuth = true;

    environment.systemPackages = with pkgs; [
      fd
      htop
      mosh
      neovim
      nix
      ripgrep
      sapling
    ];
  };
}


@@ -2,18 +2,23 @@
"nodes": {
"agenix": {
"inputs": {
"darwin": "darwin",
"home-manager": "home-manager",
"darwin": [
"darwin"
],
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
]
],
"systems": "systems"
},
"locked": {
"lastModified": 1689334118,
"narHash": "sha256-djk5AZv1yU84xlKFaVHqFWvH73U7kIRstXwUAnDJPsk=",
"lastModified": 1723293904,
"narHash": "sha256-b+uqzj+Wa6xgMS9aNbX4I+sXeb5biPDi39VgvSFqFvU=",
"owner": "ryantm",
"repo": "agenix",
"rev": "0d8c5325fc81daf00532e3e26c6752f7bcde1143",
"rev": "f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41",
"type": "github"
},
"original": {
@@ -25,95 +30,89 @@
"darwin": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1673295039,
"narHash": "sha256-AsdYgE8/GPwcelGgrntlijMg4t3hLFJFCRF3tL5WVjA=",
"lastModified": 1726188813,
"narHash": "sha256-Vop/VRi6uCiScg/Ic+YlwsdIrLabWUJc57dNczp0eBc=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "87b9d090ad39b25b2400029c64825fc2a8868943",
"rev": "21fe31f26473c180390cfa81e3ea81aca0204c80",
"type": "github"
},
"original": {
"owner": "lnl7",
"ref": "master",
"repo": "nix-darwin",
"type": "github"
}
},
"darwin_2": {
"flake-utils": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
"systems": "systems_2"
},
"locked": {
"lastModified": 1689825754,
"narHash": "sha256-u3W3WGO3BA63nb+CeNLBajbJ/sl8tDXBHKxxeTOCxfo=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "531c3de7eccf95155828e0cd9f18c25e7f937777",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"owner": "lnl7",
"ref": "master",
"repo": "nix-darwin",
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"home-manager": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1682203081,
"narHash": "sha256-kRL4ejWDhi0zph/FpebFYhzqlOBrk0Pl3dzGEKSAlEw=",
"lastModified": 1725703823,
"narHash": "sha256-tDgM4d8mLK0Hd6YMB2w1BqMto1XBXADOzPEaLl10VI4=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "32d3e39c491e2f91152c84f8ad8b003420eab0a1",
"rev": "208df2e558b73b6a1f0faec98493cb59a25f62ba",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-24.05",
"repo": "home-manager",
"type": "github"
}
},
"home-manager_2": {
"home-manager-unstable": {
"inputs": {
"nixpkgs": [
"nixpkgs"
"nixpkgs-unstable"
]
},
"locked": {
"lastModified": 1687871164,
"narHash": "sha256-bBFlPthuYX322xOlpJvkjUBz0C+MOBjZdDOOJJ+G2jU=",
"lastModified": 1726357542,
"narHash": "sha256-p4OrJL2weh0TRtaeu1fmNYP6+TOp/W2qdaIJxxQay4c=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "07c347bb50994691d7b0095f45ebd8838cf6bc38",
"rev": "e524c57b1fa55d6ca9d8354c6ce1e538d2a1f47f",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-23.05",
"repo": "home-manager",
"type": "github"
}
},
"impermanence": {
"locked": {
"lastModified": 1684264534,
"narHash": "sha256-K0zr+ry3FwIo3rN2U/VWAkCJSgBslBisvfRIPwMbuCQ=",
"lastModified": 1725690722,
"narHash": "sha256-4qWg9sNh5g1qPGO6d/GV2ktY+eDikkBTbWSg5/iD2nY=",
"owner": "nix-community",
"repo": "impermanence",
"rev": "89253fb1518063556edd5e54509c30ac3089d5e6",
"rev": "63f4d0443e32b0dd7189001ee1894066765d18a5",
"type": "github"
},
"original": {
@@ -123,45 +122,44 @@
"type": "github"
}
},
"nixpkgs": {
"nixos-hardware": {
"locked": {
"lastModified": 1689956312,
"narHash": "sha256-NV9yamMhE5jgz+ZSM2IgXeYqOvmGIbIIJ+AFIhfD7Ek=",
"lastModified": 1725885300,
"narHash": "sha256-5RLEnou1/GJQl+Wd+Bxaj7QY7FFQ9wjnFq1VNEaxTmc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "6da4bc6cb07cba1b8e53d139cbf1d2fb8061d967",
"repo": "nixos-hardware",
"rev": "166dee4f88a7e3ba1b7a243edb1aca822f00680e",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-23.05",
"repo": "nixpkgs",
"repo": "nixos-hardware",
"type": "github"
}
},
"nixpkgs-chia": {
"nixpkgs": {
"locked": {
"lastModified": 1685960109,
"narHash": "sha256-uTuKV5ua048dIGdaC+lexSUK/9A/X4la4BEJXODZm9U=",
"owner": "lourkeur",
"lastModified": 1726320982,
"narHash": "sha256-RuVXUwcYwaUeks6h3OLrEmg14z9aFXdWppTWPMTwdQw=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "e2b683787475d344892bddea9ab413dc611b894e",
"rev": "8f7492cce28977fbf8bd12c72af08b1f6c7c3e49",
"type": "github"
},
"original": {
"owner": "lourkeur",
"owner": "nixos",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"rev": "e2b683787475d344892bddea9ab413dc611b894e",
"type": "github"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1689940971,
"narHash": "sha256-397xShPnFqPC59Bmpo3lS+/Aw0yoDRMACGo1+h2VJMo=",
"lastModified": 1726243404,
"narHash": "sha256-sjiGsMh+1cWXb53Tecsm4skyFNag33GPbVgCdfj3n9I=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "9ca785644d067445a4aa749902b29ccef61f7476",
"rev": "345c263f2f53a3710abe117f28a5cb86d0ba4059",
"type": "github"
},
"original": {
@@ -174,13 +172,45 @@
"root": {
"inputs": {
"agenix": "agenix",
"darwin": "darwin_2",
"home-manager": "home-manager_2",
"darwin": "darwin",
"flake-utils": "flake-utils",
"home-manager": "home-manager",
"home-manager-unstable": "home-manager-unstable",
"impermanence": "impermanence",
"nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs",
"nixpkgs-chia": "nixpkgs-chia",
"nixpkgs-unstable": "nixpkgs-unstable"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

flake.nix

@@ -1,83 +1,91 @@
{
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-23.05";
nixpkgs.url = "github:nixos/nixpkgs/nixos-24.05";
nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable";
nixpkgs-chia.url = "github:lourkeur/nixpkgs?rev=e2b683787475d344892bddea9ab413dc611b894e";
darwin.url = "github:lnl7/nix-darwin/master";
nixos-hardware.url = "github:nixos/nixos-hardware";
flake-utils.url = "github:numtide/flake-utils";
darwin.url = "github:lnl7/nix-darwin";
darwin.inputs.nixpkgs.follows = "nixpkgs";
agenix.url = "github:ryantm/agenix";
agenix.inputs.nixpkgs.follows = "nixpkgs";
agenix.inputs.darwin.follows = "darwin";
agenix.inputs.home-manager.follows = "home-manager";
home-manager.url = "github:nix-community/home-manager/release-23.05";
home-manager.url = "github:nix-community/home-manager/release-24.05";
home-manager.inputs.nixpkgs.follows = "nixpkgs";
home-manager-unstable.url = "github:nix-community/home-manager";
home-manager-unstable.inputs.nixpkgs.follows = "nixpkgs-unstable";
impermanence.url = "github:nix-community/impermanence/master";
};
description = "Hillion Nix flake";
outputs = { self, nixpkgs, nixpkgs-unstable, nixpkgs-chia, agenix, home-manager, impermanence, darwin, ... }@inputs: {
nixosConfigurations =
let
fqdns = builtins.attrNames (builtins.readDir ./hosts);
isNixos = fqdn: !builtins.pathExists ./hosts/${fqdn}/darwin;
getSystemOverlays = system: nixpkgsConfig: [
(final: prev: {
"storj" = final.callPackage ./pkgs/storj.nix { };
})
];
mkHost = fqdn:
let system = builtins.readFile ./hosts/${fqdn}/system;
in
nixpkgs.lib.nixosSystem {
inherit system;
specialArgs = inputs;
modules = [
./hosts/${fqdn}/default.nix
./modules/default.nix
outputs = { self, nixpkgs, nixpkgs-unstable, nixos-hardware, flake-utils, agenix, home-manager, home-manager-unstable, darwin, impermanence, ... }@inputs:
let
getSystemOverlays = system: nixpkgsConfig: [
(final: prev: {
unstable = nixpkgs-unstable.legacyPackages.${prev.system};
"storj" = final.callPackage ./pkgs/storj.nix { };
})
];
in
{
nixosConfigurations =
let
fqdns = builtins.attrNames (builtins.readDir ./hosts);
mkHost = fqdn:
let
system = builtins.readFile ./hosts/${fqdn}/system;
func = if builtins.pathExists ./hosts/${fqdn}/unstable then nixpkgs-unstable.lib.nixosSystem else nixpkgs.lib.nixosSystem;
home-manager-pick = if builtins.pathExists ./hosts/${fqdn}/unstable then home-manager-unstable else home-manager;
in
func {
inherit system;
specialArgs = inputs;
modules = [
./hosts/${fqdn}/default.nix
./modules/default.nix
agenix.nixosModules.default
impermanence.nixosModules.impermanence
agenix.nixosModules.default
impermanence.nixosModules.impermanence
home-manager.nixosModules.default
{
home-manager.sharedModules = [
impermanence.nixosModules.home-manager.impermanence
];
}
home-manager-pick.nixosModules.default
{
home-manager.sharedModules = [
impermanence.nixosModules.home-manager.impermanence
];
}
({ config, ... }: {
nix.registry.nixpkgs.flake = nixpkgs; # pin `nix shell` nixpkgs
system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
})
];
};
in
nixpkgs.lib.genAttrs (builtins.filter isNixos fqdns) mkHost;
({ config, ... }: {
system.configurationRevision = nixpkgs.lib.mkIf (self ? rev) self.rev;
nixpkgs.overlays = getSystemOverlays config.nixpkgs.hostPlatform.system config.nixpkgs.config;
})
];
};
in
nixpkgs.lib.genAttrs fqdns mkHost;
darwinConfigurations =
let
hosts = builtins.attrNames (builtins.readDir ./hosts);
isDarwin = host: builtins.pathExists ./hosts/${host}/darwin;
mkHost = host:
let system = builtins.readFile ./hosts/${host}/system;
in
darwin.lib.darwinSystem {
inherit system;
inherit inputs;
modules = [
./hosts/${host}/default.nix
agenix.darwinModules.default
home-manager.darwinModules.default
];
};
in
nixpkgs.lib.genAttrs (builtins.filter isDarwin hosts) mkHost;
darwinConfigurations = {
jakehillion-mba-m2-15 = darwin.lib.darwinSystem {
system = "aarch64-darwin";
specialArgs = inputs;
formatter."x86_64-linux" = nixpkgs.legacyPackages."x86_64-linux".nixpkgs-fmt;
formatter."aarch64-darwin" = nixpkgs.legacyPackages."aarch64-darwin".nixpkgs-fmt;
};
modules = [
./darwin/jakehillion-mba-m2-15/configuration.nix
({ config, ... }: {
nixpkgs.overlays = getSystemOverlays "aarch64-darwin" config.nixpkgs.config;
})
];
};
};
} // flake-utils.lib.eachDefaultSystem (system: {
formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
});
}
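The `unstable` overlay added in this flake.nix revision exposes `pkgs.unstable.*` to every module, which is what lets the "caddy: update to unstable" commit in this range pick up a newer Caddy. A minimal sketch of how a host module might consume it; the `services.caddy` options are standard NixOS ones and this exact module is not shown in the diff:

```nix
{ pkgs, ... }:

{
  services.caddy = {
    enable = true;
    # `pkgs.unstable` resolves through the overlay to
    # nixpkgs-unstable.legacyPackages for the host's system.
    package = pkgs.unstable.caddy;
  };
}
```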


@@ -0,0 +1,55 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
  ];

  config = {
    system.stateVersion = "23.11";

    networking.hostName = "be";
    networking.domain = "lt.ts.hillion.co.uk";

    boot.loader.systemd-boot.enable = true;
    boot.loader.efi.canTouchEfiVariables = true;

    custom.defaults = true;

    ## Impermanence
    custom.impermanence = {
      enable = true;
      userExtraFiles.jake = [
        ".ssh/id_ecdsa_sk_keys"
      ];
    };

    ## WiFi
    age.secrets."wifi/be.lt.ts.hillion.co.uk".file = ../../secrets/wifi/be.lt.ts.hillion.co.uk.age;
    networking.wireless = {
      enable = true;
      environmentFile = config.age.secrets."wifi/be.lt.ts.hillion.co.uk".path;
      networks = {
        "Hillion WPA3 Network".psk = "@HILLION_WPA3_NETWORK_PSK@";
      };
    };

    ## Desktop
    custom.users.jake.password = true;
    custom.desktop.awesome.enable = true;

    ## Tailscale
    age.secrets."tailscale/be.lt.ts.hillion.co.uk".file = ../../secrets/tailscale/be.lt.ts.hillion.co.uk.age;
    services.tailscale = {
      enable = true;
      authKeyFile = config.age.secrets."tailscale/be.lt.ts.hillion.co.uk".path;
    };

    security.sudo.wheelNeedsPassword = lib.mkForce true;

    ## Enable btrfs compression
    fileSystems."/data".options = [ "compress=zstd" ];
    fileSystems."/nix".options = [ "compress=zstd" ];
  };
}


@@ -0,0 +1,59 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [
      (modulesPath + "/installer/scan/not-detected.nix")
    ];

  boot.initrd.availableKernelModules = [ "xhci_pci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    {
      device = "tmpfs";
      fsType = "tmpfs";
      options = [ "mode=0755" ];
    };

  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/D184-A79B";
      fsType = "vfat";
    };

  fileSystems."/nix" =
    {
      device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
      fsType = "btrfs";
      options = [ "subvol=nix" ];
    };

  boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/c8ffa91a-5152-4d84-8995-01232fd5acd6";

  fileSystems."/data" =
    {
      device = "/dev/disk/by-uuid/3fdc1b00-28d5-41dd-b8e0-fa6b1217f6eb";
      fsType = "btrfs";
      options = [ "subvol=data" ];
    };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.enp0s20f0u1u4.useDHCP = lib.mkDefault true;
  # networking.interfaces.wlp1s0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  powerManagement.cpuFreqGovernor = lib.mkDefault "powersave";
  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}


@@ -0,0 +1,7 @@
# boron.cx.ts.hillion.co.uk

Additional installation step for Clevis/Tang:

    $ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/boron.cx.ts.hillion.co.uk/clevis_config.json)" >/mnt/data/disk_encryption.jwe
    $ sudo chown root:root /mnt/data/disk_encryption.jwe
    $ sudo chmod 0400 /mnt/data/disk_encryption.jwe


@@ -0,0 +1,13 @@
{
  "t": 1,
  "pins": {
    "tang": [
      {
        "url": "http://80.229.251.26:7654"
      },
      {
        "url": "http://185.240.111.53:7654"
      }
    ]
  }
}


@@ -0,0 +1,170 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
  ];

  config = {
    system.stateVersion = "23.11";

    networking.hostName = "boron";
    networking.domain = "cx.ts.hillion.co.uk";

    boot.loader.systemd-boot.enable = true;
    boot.loader.efi.canTouchEfiVariables = true;

    boot.kernelParams = [ "ip=dhcp" ];
    boot.initrd = {
      availableKernelModules = [ "igb" ];
      network.enable = true;
      clevis = {
        enable = true;
        useTang = true;
        devices = {
          "disk0-crypt".secretFile = "/data/disk_encryption.jwe";
          "disk1-crypt".secretFile = "/data/disk_encryption.jwe";
        };
      };
    };

    custom.defaults = true;

    ## Kernel
    ### Explicitly use the latest kernel at time of writing because the LTS
    ### kernels available in NixOS do not seem to support this server's very
    ### modern hardware.
    boot.kernelPackages = pkgs.linuxPackages_6_10;

    ### Apply patch to enable sched_ext which isn't yet available upstream.
    boot.kernelPatches = [{
      name = "sched_ext";
      patch = pkgs.fetchpatch {
        url = "https://github.com/sched-ext/scx-kernel-releases/releases/download/v6.10.3-scx1/linux-v6.10.3-scx1.patch.zst";
        hash = "sha256-c4UlXsVOHGe0gvL69K9qTMWqCR8as25qwhfNVxCXUTs=";
        decode = "${pkgs.zstd}/bin/unzstd";
        excludes = [ "Makefile" ];
      };
      extraConfig = ''
        BPF y
        BPF_EVENTS y
        BPF_JIT y
        BPF_SYSCALL y
        DEBUG_INFO_BTF y
        FTRACE y
        SCHED_CLASS_EXT y
      '';
    }];

    ## Enable btrfs compression
    fileSystems."/data".options = [ "compress=zstd" ];
    fileSystems."/nix".options = [ "compress=zstd" ];

    ## Impermanence
    custom.impermanence = {
      enable = true;
      cache.enable = true;
    };
    boot.initrd.postDeviceCommands = lib.mkAfter ''
      btrfs subvolume delete /cache/system
      btrfs subvolume snapshot /cache/empty_snapshot /cache/system
    '';

    ## Custom Services
    custom = {
      locations.autoServe = true;
      www.global.enable = true;
      services = {
        gitea.actions = {
          enable = true;
          tokenSecret = ../../secrets/gitea/actions/boron.age;
        };
      };
    };

    services.nsd.interfaces = [
      "138.201.252.214"
      "2a01:4f8:173:23d2::2"
    ];

    ## Enable ZRAM to help with root on tmpfs
    zramSwap = {
      enable = true;
      memoryPercent = 200;
      algorithm = "zstd";
    };

    ## Filesystems
    services.btrfs.autoScrub = {
      enable = true;
      interval = "Tue, 02:00";
      # By default both /data and /nix would be scrubbed. They are the same filesystem so this is wasteful.
      fileSystems = [ "/data" ];
    };

    ## General usability
    ### Make podman available for dev tools such as act
    virtualisation = {
      containers.enable = true;
      podman = {
        enable = true;
        dockerCompat = true;
        dockerSocket.enable = true;
      };
    };
    users.users.jake.extraGroups = [ "podman" ];

    ## Networking
    boot.kernel.sysctl = {
      "net.ipv4.ip_forward" = true;
      "net.ipv6.conf.all.forwarding" = true;
    };

    networking = {
      useDHCP = false;
      interfaces = {
        enp6s0 = {
          name = "eth0";
          useDHCP = true;
          ipv6.addresses = [{
            address = "2a01:4f8:173:23d2::2";
            prefixLength = 64;
          }];
        };
      };
      defaultGateway6 = {
        address = "fe80::1";
        interface = "eth0";
      };
    };

    networking.firewall = {
      trustedInterfaces = [ "tailscale0" ];
      allowedTCPPorts = lib.mkForce [ ];
      allowedUDPPorts = lib.mkForce [ ];
      interfaces = {
        eth0 = {
          allowedTCPPorts = lib.mkForce [
            22 # SSH
            3022 # SSH (Gitea) - redirected to 22
            53 # DNS
            80 # HTTP 1-2
            443 # HTTPS 1-2
            8080 # Unifi (inform)
          ];
          allowedUDPPorts = lib.mkForce [
            53 # DNS
            443 # HTTP 3
            3478 # Unifi STUN
          ];
        };
      };
    };

    ## Tailscale
    age.secrets."tailscale/boron.cx.ts.hillion.co.uk".file = ../../secrets/tailscale/boron.cx.ts.hillion.co.uk.age;
    services.tailscale = {
      enable = true;
      authKeyFile = config.age.secrets."tailscale/boron.cx.ts.hillion.co.uk".path;
    };
  };
}


@@ -0,0 +1,72 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [
      (modulesPath + "/installer/scan/not-detected.nix")
    ];

  boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-amd" ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    {
      device = "tmpfs";
      fsType = "tmpfs";
      options = [ "mode=0755" "size=100%" ];
    };

  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/ED9C-4ABC";
      fsType = "vfat";
      options = [ "fmask=0022" "dmask=0022" ];
    };

  fileSystems."/data" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=data" ];
    };

  fileSystems."/cache" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=cache" ];
    };

  fileSystems."/nix" =
    {
      device = "/dev/disk/by-uuid/9aebe351-156a-4aa0-9a97-f09b01ac23ad";
      fsType = "btrfs";
      options = [ "subvol=nix" ];
    };

  boot.initrd.luks.devices."disk0-crypt" = {
    device = "/dev/disk/by-uuid/a68ead16-1bdc-4d26-9e55-62c2be11ceee";
    allowDiscards = true;
  };
  boot.initrd.luks.devices."disk1-crypt" = {
    device = "/dev/disk/by-uuid/19bde205-bee4-430d-a4c1-52d635a23963";
    allowDiscards = true;
  };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.enp6s0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}


@@ -0,0 +1 @@
x86_64-linux


@@ -2,8 +2,6 @@
{
imports = [
../../modules/common/default.nix
../../modules/spotify/default.nix
./bluetooth.nix
./hardware-configuration.nix
];
@@ -17,6 +15,8 @@
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.defaults = true;
## Impermanence
custom.impermanence = {
enable = true;
@@ -29,7 +29,15 @@
];
};
## Enable ZRAM swap to help with root on tmpfs
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Desktop
custom.users.jake.password = true;
custom.desktop.awesome.enable = true;
## Resilio
@@ -60,9 +68,9 @@
## Tailscale
age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".file = ../../secrets/tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
authKeyFile = config.age.secrets."tailscale/gendry.jakehillion-terminals.ts.hillion.co.uk".path;
};
security.sudo.wheelNeedsPassword = lib.mkForce true;
@@ -75,24 +83,13 @@
boot.initrd.kernelModules = [ "amdgpu" ];
services.xserver.videoDrivers = [ "amdgpu" ];
## Spotify
home-manager.users.jake.services.spotifyd.settings = {
global = {
device_name = "Gendry";
device_type = "computer";
bitrate = 320;
};
};
## Password (for interactive logins)
age.secrets."passwords/gendry.jakehillion-terminals.ts.hillion.co.uk/jake".file = ../../secrets/passwords/gendry.jakehillion-terminals.ts.hillion.co.uk/jake.age;
users.users."${config.custom.user}" = {
passwordFile = config.age.secrets."passwords/gendry.jakehillion-terminals.ts.hillion.co.uk/jake".path;
packages = with pkgs; [
prismlauncher
];
};
## Networking
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
};
}


@@ -28,7 +28,10 @@
options = [ "subvol=nix" ];
};
boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
boot.initrd.luks.devices."root" = {
device = "/dev/disk/by-uuid/af328e8d-d929-43f1-8d04-1c96b5147e5e";
allowDiscards = true;
};
fileSystems."/data" =
{


@@ -1,13 +0,0 @@
{ pkgs, config, agenix, ... }:

{
  config.services.nix-daemon.enable = true;

  config.environment.systemPackages = with pkgs; [
    git
    htop
    mosh
    nix
    vim
  ];
}


@@ -1 +0,0 @@
aarch64-darwin


@@ -0,0 +1,50 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
    ../../modules/rpi/rpi4.nix
  ];

  config = {
    system.stateVersion = "23.11";

    networking.hostName = "li";
    networking.domain = "pop.ts.hillion.co.uk";

    custom.defaults = true;

    ## Custom Services
    custom.locations.autoServe = true;

    # Networking
    ## Tailscale
    age.secrets."tailscale/li.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/li.pop.ts.hillion.co.uk.age;
    services.tailscale = {
      enable = true;
      authKeyFile = config.age.secrets."tailscale/li.pop.ts.hillion.co.uk".path;
      useRoutingFeatures = "server";
      extraUpFlags = [ "--advertise-routes" "192.168.1.0/24" ];
    };

    ## Enable ZRAM to make up for 2GB of RAM
    zramSwap = {
      enable = true;
      memoryPercent = 200;
      algorithm = "zstd";
    };

    ## Run a persistent iperf3 server
    services.iperf3.enable = true;
    services.iperf3.openFirewall = true;

    networking.firewall.interfaces = {
      "end0" = {
        allowedTCPPorts = [
          7654 # Tang
        ];
      };
    };
  };
}


@@ -3,7 +3,6 @@
{
imports = [
./hardware-configuration.nix
../../modules/common/default.nix
../../modules/rpi/rpi4.nix
];
@@ -13,14 +12,23 @@
networking.hostName = "microserver";
networking.domain = "home.ts.hillion.co.uk";
custom.defaults = true;
## Custom Services
custom.locations.autoServe = true;
# Networking
## Tailscale
age.secrets."tailscale/microserver.home.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.home.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
advertiseRoutes = [ "10.64.50.0/24" "10.239.19.0/24" ];
advertiseExitNode = true;
authKeyFile = config.age.secrets."tailscale/microserver.home.ts.hillion.co.uk".path;
useRoutingFeatures = "server";
extraUpFlags = [
"--advertise-routes"
"10.64.50.0/24,10.239.19.0/24"
"--advertise-exit-node"
];
};
## Enable IoT VLAN
@@ -31,6 +39,10 @@
};
};
hardware = {
bluetooth.enable = true;
};
## Enable IP forwarding for Tailscale
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = true;
@@ -40,9 +52,19 @@
services.iperf3.enable = true;
services.iperf3.openFirewall = true;
networking.firewall.interfaces."tailscale0".allowedTCPPorts = [
1883 # MQTT server
];
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall.interfaces = {
"eth0" = {
allowedUDPPorts = [
5353 # HomeKit
];
allowedTCPPorts = [
1400 # HA Sonos
7654 # Tang
21063 # HomeKit
];
};
};
};
}


@@ -1,35 +0,0 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
    ../../modules/common/default.nix
    ../../modules/rpi/rpi4.nix
  ];

  config = {
    system.stateVersion = "22.05";

    networking.hostName = "microserver";
    networking.domain = "parents.ts.hillion.co.uk";

    # Networking
    ## Tailscale
    age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".file = ../../secrets/tailscale/microserver.parents.ts.hillion.co.uk.age;
    custom.tailscale = {
      enable = true;
      preAuthKeyFile = config.age.secrets."tailscale/microserver.parents.ts.hillion.co.uk".path;
      advertiseRoutes = [ "192.168.1.0/24" ];
    };

    ## Enable IP forwarding for Tailscale
    boot.kernel.sysctl = {
      "net.ipv4.ip_forward" = true;
    };

    ## Run a persistent iperf3 server
    services.iperf3.enable = true;
    services.iperf3.openFirewall = true;
  };
}


@@ -2,7 +2,6 @@
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
@@ -19,6 +18,11 @@
"net.ipv4.conf.all.forwarding" = true;
};
custom.defaults = true;
## Interactive password
custom.users.jake.password = true;
## Impermanence
custom.impermanence.enable = true;
@@ -28,6 +32,14 @@
nat.enable = lib.mkForce false;
useDHCP = false;
vlans = {
cameras = {
id = 3;
interface = "eth2";
};
};
interfaces = {
enp1s0 = {
name = "eth0";
@@ -52,6 +64,14 @@
}
];
};
cameras /* cameras@eth2 */ = {
ipv4.addresses = [
{
address = "10.133.145.1";
prefixLength = 24;
}
];
};
enp4s0 = { name = "eth3"; };
enp5s0 = { name = "eth4"; };
enp6s0 = { name = "eth5"; };
@@ -78,8 +98,8 @@
ip protocol icmp counter accept comment "accept all ICMP types"
iifname "eth0" ct state { established, related } counter accept
iifname "eth0" drop
iifname { "eth0", "cameras" } ct state { established, related } counter accept
iifname { "eth0", "cameras" } drop
}
chain forward {
@@ -102,14 +122,8 @@
ip daddr 10.64.50.20 tcp dport 32400 counter accept comment "Plex"
ip daddr 10.64.50.20 tcp dport 8444 counter accept comment "Chia"
ip daddr 10.64.50.20 tcp dport 28967 counter accept comment "zfs.tywin.storj"
ip daddr 10.64.50.20 udp dport 28967 counter accept comment "zfs.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28968 counter accept comment "d0.tywin.storj"
ip daddr 10.64.50.20 udp dport 28968 counter accept comment "d0.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28969 counter accept comment "d1.tywin.storj"
ip daddr 10.64.50.20 udp dport 28969 counter accept comment "d1.tywin.storj"
ip daddr 10.64.50.20 tcp dport 28970 counter accept comment "d2.tywin.storj"
ip daddr 10.64.50.20 udp dport 28970 counter accept comment "d2.tywin.storj"
ip daddr 10.64.50.21 tcp dport 7654 counter accept comment "Tang"
}
}
@@ -120,14 +134,8 @@
iifname eth0 tcp dport 32400 counter dnat to 10.64.50.20
iifname eth0 tcp dport 8444 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28967 counter dnat to 10.64.50.20
iifname eth0 udp dport 28967 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28968 counter dnat to 10.64.50.20
iifname eth0 udp dport 28968 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28969 counter dnat to 10.64.50.20
iifname eth0 udp dport 28969 counter dnat to 10.64.50.20
iifname eth0 tcp dport 28970 counter dnat to 10.64.50.20
iifname eth0 udp dport 28970 counter dnat to 10.64.50.20
iifname eth0 tcp dport 7654 counter dnat to 10.64.50.21
}
chain postrouting {
@@ -140,54 +148,181 @@
};
services = {
dhcpd4 = {
kea = {
dhcp4 = {
enable = true;
settings = {
interfaces-config = {
interfaces = [ "eth1" "eth2" "cameras" ];
};
lease-database = {
type = "memfile";
persist = true;
name = "/var/lib/kea/dhcp4.leases";
};
option-def = [
{
name = "cookie";
space = "vendor-encapsulated-options-space";
code = 1;
type = "string";
array = false;
}
];
client-classes = [
{
name = "APC";
test = "option[vendor-class-identifier].text == 'APC'";
option-data = [
{
always-send = true;
name = "vendor-encapsulated-options";
}
{
name = "cookie";
space = "vendor-encapsulated-options-space";
code = 1;
data = "1APC";
}
];
}
];
subnet4 = [
{
subnet = "10.64.50.0/24";
interface = "eth1";
pools = [{
pool = "10.64.50.64 - 10.64.50.254";
}];
option-data = [
{
name = "routers";
data = "10.64.50.1";
}
{
name = "broadcast-address";
data = "10.64.50.255";
}
{
name = "domain-name-servers";
data = "10.64.50.1, 1.1.1.1, 8.8.8.8";
}
];
reservations = lib.lists.imap0
(i: el: {
ip-address = "10.64.50.${toString (20 + i)}";
inherit (el) hw-address hostname;
}) [
{ hostname = "tywin"; hw-address = "c8:7f:54:6d:e1:03"; }
{ hostname = "microserver"; hw-address = "e4:5f:01:b4:58:95"; }
{ hostname = "theon"; hw-address = "00:1e:06:49:06:1e"; }
{ hostname = "server-switch"; hw-address = "84:d8:1b:9d:0d:85"; }
{ hostname = "apc-ap7921"; hw-address = "00:c0:b7:6b:f4:34"; }
{ hostname = "sodium"; hw-address = "d8:3a:dd:c3:d6:2b"; }
];
}
{
subnet = "10.239.19.0/24";
interface = "eth2";
pools = [{
pool = "10.239.19.64 - 10.239.19.254";
}];
option-data = [
{
name = "routers";
data = "10.239.19.1";
}
{
name = "broadcast-address";
data = "10.239.19.255";
}
{
name = "domain-name-servers";
data = "10.239.19.1, 1.1.1.1, 8.8.8.8";
}
];
reservations = [
{
# bedroom-everything-presence-one
hw-address = "40:22:d8:e0:1d:50";
ip-address = "10.239.19.2";
hostname = "bedroom-everything-presence-one";
}
{
# living-room-everything-presence-one
hw-address = "40:22:d8:e0:0f:78";
ip-address = "10.239.19.3";
hostname = "living-room-everything-presence-one";
}
];
}
{
subnet = "10.133.145.0/24";
interface = "cameras";
pools = [{
pool = "10.133.145.64 - 10.133.145.254";
}];
option-data = [
{
name = "routers";
data = "10.133.145.1";
}
{
name = "broadcast-address";
data = "10.133.145.255";
}
{
name = "domain-name-servers";
data = "1.1.1.1, 8.8.8.8";
}
];
reservations = [
];
}
];
};
};
};
unbound = {
enable = true;
interfaces = [ "eth1" "eth2" ];
extraConfig = ''
subnet 10.64.50.0 netmask 255.255.255.0 {
interface eth1;
settings = {
server = {
interface = [
"127.0.0.1"
"10.64.50.1"
"10.239.19.1"
];
access-control = [
"10.64.50.0/24 allow"
"10.239.19.0/24 allow"
];
};
option broadcast-address 10.64.50.255;
option routers 10.64.50.1;
range 10.64.50.64 10.64.50.254;
option domain-name-servers 1.1.1.1, 8.8.8.8;
}
subnet 10.239.19.0 netmask 255.255.255.0 {
interface eth2;
option broadcast-address 10.239.19.255;
option routers 10.239.19.1;
range 10.239.19.64 10.239.19.254;
option domain-name-servers 1.1.1.1, 8.8.8.8;
}
'';
machines = [
{
# tywin.storage.ts.hillion.co.uk
ethernetAddress = "c8:7f:54:6d:e1:03";
ipAddress = "10.64.50.20";
hostName = "tywin";
}
{
# syncbox
ethernetAddress = "00:1e:06:49:06:1e";
ipAddress = "10.64.50.22";
hostName = "syncbox";
}
];
forward-zone = [
{
name = ".";
forward-tls-upstream = "yes";
forward-addr = [
"1.1.1.1#cloudflare-dns.com"
"1.0.0.1#cloudflare-dns.com"
"8.8.8.8#dns.google"
"8.8.4.4#dns.google"
];
}
];
};
};
};
## Tailscale
age.secrets."tailscale/router.home.ts.hillion.co.uk".file = ../../secrets/tailscale/router.home.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
ipv4Addr = "100.105.71.48";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
authKeyFile = config.age.secrets."tailscale/router.home.ts.hillion.co.uk".path;
};
## Enable btrfs compression
@@ -199,5 +334,34 @@
## Zigbee2Mqtt
custom.services.zigbee2mqtt.enable = true;
## Netdata
services.netdata = {
enable = true;
config = {
web = {
"bind to" = "unix:/run/netdata/netdata.sock";
};
};
};
services.caddy = {
enable = true;
virtualHosts."http://graphs.router.home.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = "reverse_proxy unix///run/netdata/netdata.sock";
};
};
users.users.caddy.extraGroups = [ "netdata" ];
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### starts too late. The upstream NixOS Caddy unit does restart on failure,
### but restarting is suppressed on exit code 1. Treat that exit code as 0
### (non-failure) to override this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
};
}
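The sequential-reservation pattern in the Kea config above is worth isolating: `lib.lists.imap0` passes a zero-based index alongside each host, so addresses count up from a base. A minimal sketch (the `mkReservations` name and the standalone `let` wrapper are illustrative, not part of the config):

```nix
# Illustrative only: reproduces the reservation helper from the router
# config. imap0 supplies a zero-based index, so hosts get consecutive
# addresses starting at the base (here .20, .21, ...).
let
  lib = (import <nixpkgs> { }).lib;
  mkReservations = base: hosts:
    lib.lists.imap0
      (i: el: {
        ip-address = "10.64.50.${toString (base + i)}";
        inherit (el) hw-address hostname;
      })
      hosts;
in
mkReservations 20 [
  { hostname = "tywin"; hw-address = "c8:7f:54:6d:e1:03"; }
  { hostname = "microserver"; hw-address = "e4:5f:01:b4:58:95"; }
]
```

One consequence of this pattern: inserting a host mid-list shifts every later address, so new reservations belong at the end of the list.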

View File

@@ -12,6 +12,7 @@
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.kernelParams = [ "console=ttyS0,115200n8" ];
boot.extraModulePackages = [ ];
fileSystems."/" =

View File

@@ -0,0 +1,87 @@
{ config, pkgs, lib, nixos-hardware, ... }:
{
imports = [
"${nixos-hardware}/raspberry-pi/5/default.nix"
./hardware-configuration.nix
];
config = {
system.stateVersion = "24.05";
networking.hostName = "sodium";
networking.domain = "pop.ts.hillion.co.uk";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
custom.defaults = true;
## Enable btrfs compression
fileSystems."/data".options = [ "compress=zstd" ];
fileSystems."/nix".options = [ "compress=zstd" ];
## Impermanence
custom.impermanence = {
enable = true;
cache.enable = true;
};
boot.initrd.postDeviceCommands = lib.mkAfter ''
btrfs subvolume delete /cache/tmp
btrfs subvolume snapshot /cache/empty_snapshot /cache/tmp
chmod 1777 /cache/tmp
'';
## CA server
custom.ca.service.enable = true;
### nix only supports build-dir from 2.22. bind mount /tmp to something persistent instead.
fileSystems."/tmp" = {
device = "/cache/tmp";
options = [ "bind" ];
};
# nix = {
# settings = {
# build-dir = "/cache/tmp/";
# };
# };
## Custom Services
custom.locations.autoServe = true;
# Networking
networking = {
useDHCP = false;
interfaces = {
end0 = {
name = "eth0";
useDHCP = true;
};
};
};
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
eth0 = {
allowedTCPPorts = lib.mkForce [
7654 # Tang
];
allowedUDPPorts = lib.mkForce [
];
};
};
};
## Tailscale
age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".file = ../../secrets/tailscale/sodium.pop.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/sodium.pop.ts.hillion.co.uk".path;
};
};
}
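Once the host runs Nix ≥ 2.22, the bind mount above could be dropped in favour of the commented-out setting (a sketch of that alternative):

```nix
{
  # Point build scratch space at persistent storage instead of the
  # tmpfs root; requires the build-dir setting from Nix 2.22.
  nix.settings.build-dir = "/cache/tmp/";
}
```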

View File

@@ -0,0 +1,63 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "usbhid" "usb_storage" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "tmpfs";
fsType = "tmpfs";
options = [ "mode=0755" ];
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/417B-1063";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
fileSystems."/nix" =
{
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
fileSystems."/data" =
{
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=data" ];
};
fileSystems."/cache" =
{
device = "/dev/disk/by-uuid/48ae82bd-4d7f-4be6-a9c9-4fcc29d4aac0";
fsType = "btrfs";
options = [ "subvol=cache" ];
};
swapDevices = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.enu1u4.useDHCP = lib.mkDefault true;
# networking.interfaces.wlan0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}

View File

@@ -0,0 +1 @@
aarch64-linux

View File

@@ -0,0 +1,56 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
];
config = {
system.stateVersion = "23.11";
networking.hostName = "theon";
networking.domain = "storage.ts.hillion.co.uk";
boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true;
custom.defaults = true;
## Custom Services
custom = {
locations.autoServe = true;
};
## Networking
networking.useNetworkd = true;
systemd.network.enable = true;
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall = {
trustedInterfaces = [ "tailscale0" ];
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
end0 = {
allowedTCPPorts = lib.mkForce [ ];
allowedUDPPorts = lib.mkForce [ ];
};
};
};
## Tailscale
age.secrets."tailscale/theon.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/theon.storage.ts.hillion.co.uk.age;
services.tailscale = {
enable = true;
authKeyFile = config.age.secrets."tailscale/theon.storage.ts.hillion.co.uk".path;
};
## Packages
environment.systemPackages = with pkgs; [
scrub
smartmontools
];
};
}

View File

@@ -6,26 +6,20 @@
{
imports =
[
(modulesPath + "/profiles/qemu-guest.nix")
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [ "ata_piix" "uhci_hcd" "virtio_pci" "virtio_scsi" "sd_mod" "sr_mod" ];
boot.initrd.availableKernelModules = [ "ahci" "usbhid" "usb_storage" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{
device = "/dev/disk/by-uuid/6d59bd4b-439d-4480-897c-4480ea6fbe56";
device = "/dev/disk/by-uuid/44444444-4444-4444-8888-888888888888";
fsType = "ext4";
};
fileSystems."/data" =
{
device = "/dev/disk/by-uuid/01a351b8-cf66-4a31-9804-0b4145e69153";
fsType = "btrfs";
};
swapDevices = [ ];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
@@ -33,8 +27,7 @@
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.ens18.useDHCP = lib.mkDefault true;
# networking.interfaces.tailscale0.useDHCP = lib.mkDefault true;
# networking.interfaces.end0.useDHCP = lib.mkDefault true;
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}

View File

@@ -0,0 +1 @@
aarch64-linux

View File

@@ -0,0 +1,7 @@
# tywin.storage.ts.hillion.co.uk
Additional installation step for Clevis/Tang (the sss config's `"t": 1` threshold means either Tang server alone can unseal the key):
$ echo -n $DISK_ENCRYPTION_PASSWORD | clevis encrypt sss "$(cat /etc/nixos/hosts/tywin.storage.ts.hillion.co.uk/clevis_config.json)" >/mnt/disk_encryption.jwe
$ sudo chown root:root /mnt/disk_encryption.jwe
$ sudo chmod 0400 /mnt/disk_encryption.jwe
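The consuming side is the NixOS Clevis initrd module, as configured on tywin later in this diff; a sketch (the device name "root" and the in-initrd secret path are host-specific):

```nix
{
  boot.initrd = {
    # Tang unsealing needs networking up inside the initrd.
    network.enable = true;
    clevis = {
      enable = true;
      useTang = true;
      devices."root".secretFile = "/disk_encryption.jwe";
    };
  };
}
```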

View File

@@ -0,0 +1,14 @@
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://10.64.50.21:7654"
},
{
"url": "http://10.64.50.25:7654"
}
]
}
}

View File

@@ -2,7 +2,6 @@
{
imports = [
../../modules/common/default.nix
./hardware-configuration.nix
];
@@ -16,15 +15,35 @@
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
boot.kernelParams = [
"ip=dhcp"
"zfs.zfs_arc_max=25769803776"
];
boot.initrd = {
availableKernelModules = [ "r8169" ];
network.enable = true;
clevis = {
enable = true;
useTang = true;
devices."root".secretFile = "/disk_encryption.jwe";
};
};
custom.locations.autoServe = true;
custom.defaults = true;
# zram swap: used in the hope it will give the ZFS ARC more room to back off
zramSwap = {
enable = true;
memoryPercent = 200;
algorithm = "zstd";
};
## Tailscale
age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".file = ../../secrets/tailscale/tywin.storage.ts.hillion.co.uk.age;
custom.tailscale = {
services.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".path;
ipv4Addr = "100.115.31.91";
ipv6Addr = "fd7a:115c:a1e0:ab12:4843:cd96:6273:1f5b";
authKeyFile = config.age.secrets."tailscale/tywin.storage.ts.hillion.co.uk".path;
};
## Filesystems
@@ -35,17 +54,18 @@
forceImportRoot = false;
extraPools = [ "data" ];
};
boot.kernelParams = [ "zfs.zfs_arc_max=25769803776" ];
services.zfs.autoScrub = {
services.btrfs.autoScrub = {
enable = true;
interval = "Tue, 02:00";
# Scrubbing all filesystems would include the BTRFS portions of every
# hard drive; that would take forever and is redundant, as they get
# fully read regularly anyway.
fileSystems = [ "/" ];
};
services.zfs.autoScrub = {
enable = true;
interval = "Wed, 02:00";
};
services.sanoid.enable = true;
fileSystems."/mnt/d0".options = [ "x-systemd.mount-timeout=3m" ];
fileSystems."/mnt/d1".options = [ "x-systemd.mount-timeout=3m" ];
fileSystems."/mnt/d2".options = [ "x-systemd.mount-timeout=3m" ];
## Backups
### Git
@@ -135,11 +155,21 @@
services.caddy = {
enable = true;
virtualHosts."http://restic.tywin.storage.ts.hillion.co.uk".extraConfig = ''
bind ${config.custom.tailscale.ipv4Addr} ${config.custom.tailscale.ipv6Addr}
bind ${config.custom.dns.tailscale.ipv4} ${config.custom.dns.tailscale.ipv6}
reverse_proxy http://localhost:8000
'';
};
systemd.services.caddy.requires = [ "tailscaled.service" ];
### HACK: Allow Caddy to restart if it fails. This happens because Tailscale
### starts too late. The upstream NixOS Caddy unit does restart on failure,
### but restarting is suppressed on exit code 1. Treat that exit code as 0
### (non-failure) to override this.
systemd.services.caddy = {
requires = [ "tailscaled.service" ];
after = [ "tailscaled.service" ];
serviceConfig = {
RestartPreventExitStatus = lib.mkForce 0;
};
};
services.restic.backups."prune-128G" = {
repository = "/data/backups/restic/128G";
@@ -187,62 +217,9 @@
custom.chia = {
enable = true;
openFirewall = true;
path = "/data/chia";
keyFile = config.age.secrets."chia/farmer.key".path;
targetAddress = "xch1tl87mjd9zpugs7qy2ysc3j4qlftqlyjn037jywq6v2y4kp22g74qahn6sw";
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 3;
plotDirectories = builtins.genList (i: "/mnt/d${toString i}/plots/contract-k32") 8;
};
services.sanoid.datasets."data/chia" = {
autosnap = true;
autoprune = true;
hourly = 0;
daily = 7;
weekly = 12;
monthly = 6;
};
## Storj
age.secrets."storj/auth" = {
file = ../../secrets/storj/auth.age;
owner = "storj";
group = "storj";
};
custom.storj = {
enable = true;
openFirewall = true;
email = "jake+storj@hillion.co.uk";
wallet = "0x03cebe2608945D51f0bcE6c5ef70b4948fCEcfEe";
};
custom.storj.instances =
let
mkStorj = index: {
name = "d${toString index}";
value = {
configDir = "/mnt/d${toString index}/storj/config";
identityDir = "/mnt/d${toString index}/storj/identity";
authorizationTokenFile = config.age.secrets."storj/auth".path;
serverPort = 28967 + 1 + index;
externalAddress = "d${toString index}.tywin.storj.hillion.co.uk:${toString (28967 + 1 + index)}";
consoleAddress = "100.115.31.91:${toString (14002 + 1 + index)}";
storage = "1000GB";
};
};
instances = builtins.genList (x: x) 3;
in
builtins.listToAttrs (builtins.map mkStorj instances) // {
zfs = {
configDir = "/data/storj/config";
identityDir = "/data/storj/identity";
storage = "500GB";
consoleAddress = "100.115.31.91:14002";
serverPort = 28967;
externalAddress = "zfs.tywin.storj.hillion.co.uk:28967";
};
};
## Downloads
custom.services.downloads = {
@@ -259,13 +236,10 @@
openFirewall = true;
};
## Firewall
## Networking
networking.nameservers = lib.mkForce [ ]; # Trust the DHCP nameservers
networking.firewall.interfaces."tailscale0".allowedTCPPorts = [
80 # Caddy (restic.tywin.storage.ts.)
14002 # Storj Dashboard (zfs.)
14003 # Storj Dashboard (d0.)
14004 # Storj Dashboard (d1.)
14005 # Storj Dashboard (d2.)
];
};
}

View File

@@ -20,6 +20,11 @@
fsType = "btrfs";
};
boot.initrd.luks.devices."root" = {
device = "/dev/disk/by-uuid/32837730-5e15-4917-9939-cbb58bb0aabf";
allowDiscards = true;
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/BC57-0AF6";
@@ -28,19 +33,49 @@
fileSystems."/mnt/d0" =
{
device = "/dev/disk/by-uuid/b424c997-4be6-42f3-965a-f5b3573a9cb3";
device = "/dev/disk/by-uuid/9136434d-d883-4118-bd01-903f720e5ce1";
fsType = "btrfs";
};
fileSystems."/mnt/d1" =
{
device = "/dev/disk/by-uuid/9136434d-d883-4118-bd01-903f720e5ce1";
device = "/dev/disk/by-uuid/a55d164e-b48e-4a4e-b073-d0768662d3d0";
fsType = "btrfs";
};
fileSystems."/mnt/d2" =
{
device = "/dev/disk/by-uuid/a55d164e-b48e-4a4e-b073-d0768662d3d0";
device = "/dev/disk/by-uuid/82b82c66-e6e6-4b76-a5ef-8adea33dbe18";
fsType = "btrfs";
};
fileSystems."/mnt/d3" =
{
device = "/dev/disk/by-uuid/6566588a-9399-4b35-a18c-060de0ee8431";
fsType = "btrfs";
};
fileSystems."/mnt/d4" =
{
device = "/dev/disk/by-uuid/850ce5db-4245-428a-a66d-2647abf62a4c";
fsType = "btrfs";
};
fileSystems."/mnt/d5" =
{
device = "/dev/disk/by-uuid/78bc5c57-d554-43c5-9a84-14e3dc52b1b3";
fsType = "btrfs";
};
fileSystems."/mnt/d6" =
{
device = "/dev/disk/by-uuid/b461e07d-39ab-46b4-b1d1-14c2e0791915";
fsType = "btrfs";
};
fileSystems."/mnt/d7" =
{
device = "/dev/disk/by-uuid/eb8d32d0-e506-449b-8dbc-585ba05c4252";
fsType = "btrfs";
};

View File

@@ -1,88 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports = [
../../modules/common/default.nix
../../modules/drone/server.nix
./hardware-configuration.nix
];
config = {
system.stateVersion = "22.05";
networking.hostName = "vm";
networking.domain = "strangervm.ts.hillion.co.uk";
boot.loader.grub = {
enable = true;
device = "/dev/sda";
};
## Custom Services
custom = {
locations.autoServe = true;
www.global.enable = true;
services.matrix.enable = true;
services.version_tracker.enable = true;
};
## Networking
networking.interfaces.ens18.ipv4.addresses = [{
address = "10.72.164.3";
prefixLength = 24;
}];
networking.defaultGateway = "10.72.164.1";
networking.firewall = {
allowedTCPPorts = lib.mkForce [
22 # SSH
];
allowedUDPPorts = lib.mkForce [ ];
interfaces = {
ens18 = {
allowedTCPPorts = lib.mkForce [
80 # HTTP 1-2
443 # HTTPS 1-2
];
allowedUDPPorts = lib.mkForce [
443 # HTTP 3
];
};
};
};
## Tailscale
age.secrets."tailscale/vm.strangervm.ts.hillion.co.uk".file = ../../secrets/tailscale/vm.strangervm.ts.hillion.co.uk.age;
custom.tailscale = {
enable = true;
preAuthKeyFile = config.age.secrets."tailscale/vm.strangervm.ts.hillion.co.uk".path;
};
## Resilio Sync (Encrypted)
custom.resilio.enable = true;
services.resilio.deviceName = "vm.strangervm";
services.resilio.directoryRoot = "/data/sync";
services.resilio.storagePath = "/data/sync/.sync";
custom.resilio.folders =
let
folderNames = [
"dad"
"projects"
"resources"
"sync"
];
mkFolder = name: {
name = name;
secret = {
name = "resilio/encrypted/${name}";
file = ../../secrets/resilio/encrypted/${name}.age;
};
};
in
builtins.map (mkFolder) folderNames;
## Backups
services.postgresqlBackup.location = "/data/backup/postgres";
};
}

View File

@@ -3,6 +3,7 @@
{
imports = [
./git.nix
./homeassistant.nix
./matrix.nix
];
}

View File

@@ -0,0 +1,34 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.backups.homeassistant;
in
{
options.custom.backups.homeassistant = {
enable = lib.mkEnableOption "homeassistant";
};
config = lib.mkIf cfg.enable {
age.secrets."backups/homeassistant/restic/128G" = {
file = ../../secrets/restic/128G.age;
owner = "hass";
group = "hass";
};
services = {
restic.backups."homeassistant" = {
user = "hass";
timerConfig = {
OnCalendar = "03:00";
RandomizedDelaySec = "60m";
};
repository = "rest:http://restic.tywin.storage.ts.hillion.co.uk/128G";
passwordFile = config.age.secrets."backups/homeassistant/restic/128G".path;
paths = [
config.services.home-assistant.configDir
];
};
};
};
}
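A host opts in to this backup with a single option (hypothetical host config; assumes Home Assistant is already running on that machine):

```nix
{
  # Everything else (secret ownership, timer, repository) comes from
  # the custom.backups.homeassistant module above.
  custom.backups.homeassistant.enable = true;
}
```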

modules/ca/README.md Normal file
View File

@@ -0,0 +1,11 @@
# ca
Getting the certificates into the right place is a manual process (for now, at least). This keeps tight control over the root certificate's key and allows manual rotation. The manual commands should be run on a trusted machine.
Creating a 10 year root certificate:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME' cert.pem key.pem --kty=EC --curve=P-521 --profile=root-ca --not-after=87600h
Creating a 1 year intermediate certificate and key:
nix run nixpkgs#step-cli -- certificate create 'Hillion ACME (sodium.pop.ts.hillion.co.uk)' intermediate_cert.pem intermediate_key.pem --kty=EC --curve=P-521 --profile=intermediate-ca --not-after=8760h --ca=$NIXOS_ROOT/modules/ca/cert.pem --ca-key=DOWNLOADED_KEY.pem
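Hosts trust the resulting root out of band; a sketch of the consumer side (this mirrors what `custom.ca.consumer.enable` does in consumer.nix):

```nix
{
  # Install the checked-in root certificate into the system-wide
  # trust store on every consuming host.
  security.pki.certificates = [ (builtins.readFile ./cert.pem) ];
}
```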

modules/ca/cert.pem Normal file
View File

@@ -0,0 +1,13 @@
-----BEGIN CERTIFICATE-----
MIIB+TCCAVqgAwIBAgIQIZdaIUsuJdjnu7DQP1N8oTAKBggqhkjOPQQDBDAXMRUw
EwYDVQQDEwxIaWxsaW9uIEFDTUUwHhcNMjQwODAxMjIyMjEwWhcNMzQwNzMwMjIy
MjEwWjAXMRUwEwYDVQQDEwxIaWxsaW9uIEFDTUUwgZswEAYHKoZIzj0CAQYFK4EE
ACMDgYYABAAJI3z1PrV97EFc1xaENcr6ML1z6xdXTy+ReHtf42nWsw+c3WDKzJ45
+xHJ/p2BTOR5+NQ7RGQQ68zmFJnEYTYDogAw6U9YzxxDGlG1HlgnZ9PPmXoF+PFl
Zy2WZCiDPx5KDJcjTPzLV3ITt4fl3PMA12BREVeonvrvRLcpVrMfS2b7wKNFMEMw
DgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQEwHQYDVR0OBBYEFFBT
fMT0uUbS+lVUbGKK8/SZHPISMAoGCCqGSM49BAMEA4GMADCBiAJCAPNIwrQztPrN
MaHB3J0lNVODIGwQWblt99vnjqIWOKJhgckBxaElyInsyt8dlnmTCpOCJdY4BA+K
Nr87AfwIWdAaAkIBV5i4zXPXVKblGKnmM0FomFSbq2cYE3pmi5BO1StakH1kEHlf
vbkdwFgkw2MlARp0Ka3zbWivBG9zjPoZtsL/8tk=
-----END CERTIFICATE-----

modules/ca/consumer.nix Normal file
View File

@@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.consumer;
in
{
options.custom.ca.consumer = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
security.pki.certificates = [ (builtins.readFile ./cert.pem) ];
};
}

modules/ca/default.nix Normal file
View File

@@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./consumer.nix
./service.nix
];
}

modules/ca/service.nix Normal file
View File

@@ -0,0 +1,45 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.ca.service;
in
{
options.custom.ca.service = {
enable = lib.mkEnableOption "ca.service";
};
config = lib.mkIf cfg.enable {
services.step-ca = {
enable = true;
address = config.custom.dns.tailscale.ipv4;
port = 8443;
intermediatePasswordFile = "/data/system/ca/intermediate.psk";
settings = {
root = ./cert.pem;
crt = "/data/system/ca/intermediate.crt";
key = "/data/system/ca/intermediate.pem";
dnsNames = [ "ca.ts.hillion.co.uk" ];
logger = { format = "text"; };
db = {
type = "badgerv2";
dataSource = "/var/lib/step-ca/db";
};
authority = {
provisioners = [
{
type = "ACME";
name = "acme";
}
];
};
};
};
};
}
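Clients can then point the standard NixOS ACME machinery at this CA. A sketch; the directory URL assumes step-ca's `/acme/<provisioner>/directory` layout with the `acme` provisioner defined above, and the email is a placeholder:

```nix
{
  security.acme = {
    acceptTerms = true;
    defaults = {
      email = "acme@example.invalid"; # placeholder
      server = "https://ca.ts.hillion.co.uk:8443/acme/acme/directory";
    };
  };
}
```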

View File

@@ -1,17 +1,12 @@
{ config, pkgs, lib, nixpkgs-chia, ... }:
{ config, pkgs, lib, ... }:
let
cfg = config.custom.chia;
chia = nixpkgs-chia.legacyPackages.x86_64-linux.chia;
ctl = pkgs.writeScriptBin "chiactl" ''
#! ${pkgs.runtimeShell}
sudo=exec
if [[ "$USER" != chia ]]; then
sudo='exec /run/wrappers/bin/sudo -u chia'
fi
$sudo ${chia}/bin/chia "$@"
set -e
sudo ${pkgs.podman}/bin/podman exec chia chia "$@"
'';
in
{
@@ -26,14 +21,6 @@ in
type = with lib.types; nullOr str;
default = null;
};
keyLabel = lib.mkOption {
type = lib.types.str;
default = "default";
};
targetAddress = lib.mkOption {
type = with lib.types; nullOr str;
default = null;
};
plotDirectories = lib.mkOption {
type = with lib.types; nullOr (listOf str);
default = null;
@@ -47,52 +34,31 @@ in
config = lib.mkIf cfg.enable {
environment.systemPackages = [ ctl ];
users.groups.chia = { };
users.groups.chia = {
gid = config.ids.gids.chia;
};
users.users.chia = {
home = cfg.path;
createHome = true;
isSystemUser = true;
group = "chia";
uid = config.ids.uids.chia;
};
systemd.services.chia = {
description = "Chia daemon.";
wantedBy = [ "multi-user.target" ];
preStart = lib.strings.concatStringsSep "\n" ([ "${chia}/bin/chia init" ]
++ (if cfg.keyFile == null then [ ] else [ "${chia}/bin/chia keys add -f ${cfg.keyFile} -l '${cfg.keyLabel}'" ])
++ (if cfg.targetAddress == null then [ ] else [
''
${pkgs.yq-go}/bin/yq e \
'.farmer.xch_target_address = "${cfg.targetAddress}" | .pool.xch_target_address = "${cfg.targetAddress}"' \
-i ${cfg.path}/.chia/mainnet/config/config.yaml
''
]) ++ (if cfg.plotDirectories == null then [ ] else [
''
${pkgs.yq-go}/bin/yq e \
'.harvester.plot_directories = [${lib.strings.concatMapStringsSep "," (x: "\"" + x + "\"") cfg.plotDirectories}]' \
-i ${cfg.path}/.chia/mainnet/config/config.yaml
''
]));
script = "${chia}/bin/chia start farmer";
preStop = "${chia}/bin/chia stop -d farmer";
serviceConfig = {
Type = "forking";
User = "chia";
Group = "chia";
WorkingDirectory = cfg.path;
Restart = "always";
RestartSec = 10;
TimeoutStopSec = 120;
OOMScoreAdjust = 1000;
Nice = 2;
IOSchedulingClass = "best-effort";
IOSchedulingPriority = 7;
virtualisation.oci-containers.containers.chia = {
image = "ghcr.io/chia-network/chia:2.4.1";
ports = [ "8444" ];
extraOptions = [
"--uidmap=0:${toString config.users.users.chia.uid}:1"
"--gidmap=0:${toString config.users.groups.chia.gid}:1"
];
volumes = [
"${cfg.keyFile}:/run/keyfile"
"${cfg.path}/.chia:/root/.chia"
] ++ lib.lists.imap0 (i: v: "${v}:/plots${toString i}") cfg.plotDirectories;
environment = {
keys = "/run/keyfile";
plots_dir = lib.strings.concatImapStringsSep ":" (i: v: "/plots${toString i}") cfg.plotDirectories;
};
};

View File

@@ -1,6 +0,0 @@
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo

View File

@@ -1,57 +0,0 @@
{ pkgs, lib, config, agenix, ... }:
{
imports = [
../home/default.nix
./shell.nix
./ssh.nix
];
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users."jake" = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
git
htop
nix
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
custom.hostinfo.enable = true;
}

View File

@@ -1,36 +0,0 @@
# Global Internet hosts
server.stranger.proxmox.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9d5u/VaeRTQUQfu5JzCRa+zij/DtrPNWOfr+jM4iDp
ssh.gitea.hillion.co.uk ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==
# Tailscale hosts
alpha.proxmox.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ267QJXv82cee9pIly66hFGlNd9QPK4A6CNXatNnJRx
archnas.storage.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIISWIJMYD2I9+tdJCmtR3JlnymzfCN76uKbkHL3hzfDi
caddy.caddy.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKOqe2UPPs+xGJHjC2M3GTiL5wYlOjgu/H1C9cNGRi2
caddyhome.caddy.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICFo4MiQwjvd0d3J3T9uuIrdmfQw8IUpbtCc4C6qicvu
dancefloor.dancefloor.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY
gendry.jakehillion.terminals.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c
gitea.gitea.ts.hillion.co.uk ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCb73Wbp87HwLOVdEvlUv739e974rm9OPJ1NuB2et5D1h8ak7fSOgbhs7Kl8F7smkuiFFQUOfJEmroEbiiCj1So=
gitea.gitea.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPiJtFPP10yoi3Ij685hfck7r5rwUV4d7QIBjG5Jtih/
headscale.headscale.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOQNLDoIt1Rvu900sgnRncdDbMs5bCjvbZWu8+tk7Ega
homeassistant.homeassistant.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux
microserver.home.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw
microserver.home.ts.hillion.co.uk ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCzSf/Kmgfp3MQJzEeHNGhCDqg7YQRtJf14ydZb966XEkq0a0LGaiVSRh4gRXxJdKYD1Lr29Xw0EYdqFgOONPd8MLuH8rAlnr5HL5aiERyvYfrQXSudedpPRQ0wLL1HFAs3wmafbdv0EVYMLidm6YNNvF0pPGHVQ55A2FHcyGsix5OsK45SMr+yeIuHXKzp/0kzHb5LnnzlWO/0SHMuhw1V3Lb5zzPUzz3BgC1tz2cwsC88rz2Z/Mywfl4wRkYqxFf3KIYMT+Cn5SPE6jl6sAO4hTG/aHoIs/d/tGui5E2xOsF2kWy1oWB7Xfy0eYDX/TN5Y9iBOszGgQgW3bR+Mf379NnqVyZcN0KWM406c/LmbJXWKxfJQ19kF1xlfJHQ+SbsOg/28HUTOt09oj4+z8j6RFgKNKkOtj4qPc4nxTojJDPBa+qemxxSCHrmoZ1q9qKkuiY6bUzufuT/rgtZ17Mv8Yu9+jz5wX6AeLA/RJsRTnURrcfTcu9ShXlRg4CN+y6ArV8JdTv7ASaA2DcB4P2wfaHj3oWloU0CnSyzdy7OFkr8vZGgoqr81lbyctZvHbi7AJX9nCnTgBM/M0Z5qpI/L3aBA0Pq7oJGo44qOGM6tvTbnK0wck4VxlY1IpNNvH0FeS5RvfFOFFI0hhQtbdQHwpLYIOfs/EMvO0aNKDtkKw==
microserver.parents.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0cjjNQPnJwpu4wcYmvfjB1jlIfZwMxT+3nBusoYQFr
pbs.proxmox.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGY6ky2sQjg/bLRUWOUERmAOqboAjy+9PkE8sU+angx
plex.mediaserver.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM1sk3FOsuf4ZPrhGBYprQF/oVk7jITaAaVmBO6xwbdg
router.alpha.proxmox.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGL5Asl7OhF7R2a/YJNNv+fIE/VPw8ZCr+ABI7wlAdJI
router.home.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu
router.stranger.proxmox.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHq9tITN59FJfGoyOPNgP1QyJ0ohbVQS8OZtRO960Uxk
stranger.proxmox.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9d5u/VaeRTQUQfu5JzCRa+zij/DtrPNWOfr+jM4iDp
tywin.storage.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGATsjWO0qZNFp2BhfgDuWi+e/ScMkFxp79N2OZoed1k
unifi.unifi.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOeayV2pu0IpZS0OT17c4DqkILCZVRl1Y3s2fu087QkO
vm.strangervm.ts.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb9mgyD/G3Rt6lvO4c0hoaVOlLE8e3+DUfAoB1RI5cy
# Deprecated (Internal) hosts
containers.internal.hillion.co.uk ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIe1LLMeXRFDsmtt1dPhYm414oTcARJD7fGQXJwGXLPXJtCtoqFhVNq8+qYikdx+eNtiokI+Wz3xOi6ULt5gg2g=
containers.internal.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICyUD7/6/bYmjPy+Fd8hBQMSVvUcs0cnSi5ZtlUICiVD
containers.internal.hillion.co.uk ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgXYzClYOipN7H0ueoe0oUkGDvxAIlkO15/wIDGe13FHlHutDqvFr3CSfj521vXKGeh7udkL1zJ380dbvploJ4CtK3gp0sVG/5miodaH/elUBy3/hCl1dIdcaf0abDYitJGD9vPsOIKAML6b6z/uGTK9S4SQ86YZVriVt1tKNCrAjTnm9kd3nE1BXYZQiDNi/+/u05SStnvmF9uEnsADTYH6PKhQ2ms9SKRjTZRrQ4QE5LHcRHyxCI8oVEsBABx+7t0G9sQgbZZoU8qhJ4OEH5o82eQkjIqr+Qgef2SUpI6skv/Cv7nLVWAx+QFbIdi9hPEdtyz+v683/DOFTgm5/1p+tkOimPf9xJe9fZudlPDg5XtkBHoPXIT4LGlpCEWBBjrhRYcDwFkYs1o7Z8pNwxGhJhVZPIegs2HWnwlUA3gbLyBjTx6oa2WZYuPoIAjAZlaPcvqRHDU2zmHakX1cJrQNRd+AG6zbFJyg6bNJBCfheFyebZRjP9N5hU9VteNoM=
downloads.internal.hillion.co.uk ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL12K8fx9awUowFzw68AxrNzjyxKG00IVQKwQDdCIQ/yxUjL+86p+H3O99vkcGrLoWxDbXIIO0phRzfRf7//sv8=
downloads.internal.hillion.co.uk ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMaZqzMevw/+T0O6tAICn1iuu8+Uf8Inb39dlLwr0rGZ
downloads.internal.hillion.co.uk ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrQwi4uRCsoEaswrvNPSXpHM5CpzY4OXaRMApTtgHaMGSKnhC6QaDKP+P8nqcfYKLMKOlyOUkBUE28uftjLcs4oT/exfKuq0jm6PGxCdzZlQRDW0RemsRmBIY0sca0NS+Jwe6YxuC37wq7FRLkE3AH07FJxlfIqaA/xtq6s5JNYDPzKqsMww/sFu7fZJ3S8rh8ft+tf1oC8T4kM9AANIIgbvG+PIqOd0C3Az5cbsV6+Ejk3Afm/c5sBVjbiqAjmgsjXhObnmvreojBhJpcUAwYmRP7NJc/bfhWnb0Eo20xsOBZKt3RFTOpdDhp5KyTL+yUr0rcMMPH2Pbydk+hhdcD


@@ -1,25 +0,0 @@
{ pkgs, lib, config, ... }:
{
users.users."jake".openssh.authorizedKeys.keyFiles = [ ./authorized_keys ];
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHostsFiles = [
./known_hosts
(pkgs.writeText "github.keys" ''
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
'')
];
}


@@ -3,19 +3,21 @@
{
imports = [
./backups/default.nix
./ca/default.nix
./chia.nix
./common/hostinfo.nix
./defaults.nix
./desktop/awesome/default.nix
./dns.nix
./home/default.nix
./hostinfo.nix
./ids.nix
./impermanence.nix
./locations.nix
./resilio.nix
./services/downloads.nix
./services/mastodon/default.nix
./services/matrix.nix
./services/version_tracker.nix
./services/zigbee2mqtt.nix
./services/default.nix
./shell/default.nix
./ssh/default.nix
./storj.nix
./tailscale.nix
./users.nix
./www/global.nix
./www/www-repo.nix

64
modules/defaults.nix Normal file

@@ -0,0 +1,64 @@
{ pkgs, lib, config, agenix, ... }:
{
options.custom.defaults = lib.mkEnableOption "defaults";
config = lib.mkIf config.custom.defaults {
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
settings = {
auto-optimise-store = true;
};
gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 90d";
};
};
nixpkgs.config.allowUnfree = true;
time.timeZone = "Europe/London";
i18n.defaultLocale = "en_GB.UTF-8";
users = {
mutableUsers = false;
users.${config.custom.user} = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # enable sudo
uid = config.ids.uids.${config.custom.user};
};
};
security.sudo.wheelNeedsPassword = false;
environment = {
systemPackages = with pkgs; [
agenix.packages."${system}".default
gh
git
htop
nix
sapling
vim
];
variables.EDITOR = "vim";
shellAliases = {
ls = "ls -p --color=auto";
};
};
networking = rec {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
networkmanager.dns = "none";
};
networking.firewall.enable = true;
# Delegation
custom.ca.consumer.enable = true;
custom.dns.enable = true;
custom.home.defaults = true;
custom.hostinfo.enable = true;
custom.shell.enable = true;
custom.ssh.enable = true;
};
}


@@ -274,6 +274,8 @@ globalkeys = gears.table.join(
-- Standard program
awful.key({ modkey, }, "Return", function () awful.spawn(terminal .. " -e " .. tmux) end,
{description = "open a terminal with tmux", group = "launcher"}),
awful.key({ modkey, "Shift" }, "Return", function () awful.spawn(terminal) end,
{description = "open a terminal", group = "launcher"}),
awful.key({ modkey, "Control" }, "r", awesome.restart,
{description = "reload awesome", group = "awesome"}),

112
modules/dns.nix Normal file

@@ -0,0 +1,112 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.dns;
in
{
options.custom.dns = {
enable = lib.mkEnableOption "dns";
authoritative = {
ipv4 = lib.mkOption {
description = "authoritative ipv4 mappings";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "authoritative ipv6 mappings";
readOnly = true;
};
};
tailscale =
{
ipv4 = lib.mkOption {
description = "tailscale ipv4 address";
readOnly = true;
};
ipv6 = lib.mkOption {
description = "tailscale ipv6 address";
readOnly = true;
};
};
};
config = lib.mkIf cfg.enable {
custom.dns.authoritative = {
ipv4 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "100.113.188.46";
};
home = {
microserver = "100.105.131.47";
router = "100.105.71.48";
};
jakehillion-terminals = { gendry = "100.70.100.77"; };
lt = { be = "100.105.166.79"; };
pop = {
li = "100.106.87.35";
sodium = "100.87.188.4";
};
storage = {
theon = "100.104.142.22";
tywin = "100.115.31.91";
};
};
};
};
};
};
ipv6 = {
uk = {
co = {
hillion = {
ts = {
cx = {
boron = "fd7a:115c:a1e0::2a01:bc2f";
};
home = {
microserver = "fd7a:115c:a1e0:ab12:4843:cd96:6269:832f";
router = "fd7a:115c:a1e0:ab12:4843:cd96:6269:4730";
};
jakehillion-terminals = { gendry = "fd7a:115c:a1e0:ab12:4843:cd96:6246:644d"; };
lt = { be = "fd7a:115c:a1e0::9001:a64f"; };
pop = {
li = "fd7a:115c:a1e0::e701:5723";
sodium = "fd7a:115c:a1e0::3701:bc04";
};
storage = {
theon = "fd7a:115c:a1e0::4aa8:8e16";
tywin = "fd7a:115c:a1e0:ab12:4843:cd96:6273:1f5b";
};
};
};
};
};
};
};
custom.dns.tailscale =
let
lookupFqdn = lib.attrsets.attrByPath (lib.reverseList (lib.splitString "." config.networking.fqdn)) null;
in
{
ipv4 = lookupFqdn cfg.authoritative.ipv4;
ipv6 = lookupFqdn cfg.authoritative.ipv6;
};
networking.hosts =
let
mkHosts = hosts:
(lib.collect (x: (builtins.hasAttr "name" x && builtins.hasAttr "value" x))
(lib.mapAttrsRecursive
(path: value:
lib.nameValuePair value [ (lib.concatStringsSep "." (lib.reverseList path)) ])
hosts));
in
builtins.listToAttrs (mkHosts cfg.authoritative.ipv4 ++ mkHosts cfg.authoritative.ipv6);
};
}


@@ -1,25 +0,0 @@
{ config, pkgs, lib, ... }:
{
config.age.secrets."drone/gitea_client_secret".file = ../../secrets/drone/gitea_client_secret.age;
config.age.secrets."drone/rpc_secret".file = ../../secrets/drone/rpc_secret.age;
config.virtualisation.oci-containers.containers."drone" = {
image = "drone/drone:2.16.0";
volumes = [ "/data/drone:/data" ];
ports = [ "18733:80" ];
environment = {
DRONE_AGENTS_ENABLED = "true";
DRONE_GITEA_SERVER = "https://gitea.hillion.co.uk";
DRONE_GITEA_CLIENT_ID = "687ee331-ad9e-44fd-9e02-7f1c652754bb";
DRONE_SERVER_HOST = "drone.hillion.co.uk";
DRONE_SERVER_PROTO = "https";
DRONE_LOGS_DEBUG = "true";
DRONE_USER_CREATE = "username:JakeHillion,admin:true";
};
environmentFiles = [
config.age.secrets."drone/gitea_client_secret".path
config.age.secrets."drone/rpc_secret".path
];
};
}


@@ -6,7 +6,9 @@
./tmux/default.nix
];
config = {
options.custom.home.defaults = lib.mkEnableOption "home";
config = lib.mkIf config.custom.home.defaults {
home-manager = {
users.root.home = {
stateVersion = "22.11";
@@ -22,5 +24,9 @@
file.".zshrc".text = "";
};
};
# Delegation
custom.home.git.enable = true;
custom.home.tmux.enable = true;
};
}


@@ -1,21 +1,30 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.git;
in
{
home-manager.users.jake.programs.git = {
enable = true;
extraConfig = {
user = {
email = "jake@hillion.co.uk";
name = "Jake Hillion";
};
pull = {
rebase = true;
};
merge = {
conflictstyle = "diff3";
};
init = {
defaultBranch = "main";
options.custom.home.git = {
enable = lib.mkEnableOption "git";
};
config = lib.mkIf cfg.enable {
home-manager.users.jake.programs.git = lib.mkIf (config.custom.user == "jake") {
enable = true;
extraConfig = {
user = {
email = "jake@hillion.co.uk";
name = "Jake Hillion";
};
pull = {
rebase = true;
};
merge = {
conflictstyle = "diff3";
};
init = {
defaultBranch = "main";
};
};
};
};


@@ -8,3 +8,11 @@ bind -n C-k clear-history
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
bind c new-window -c "#{pane_current_path}"
# Start indices at 1 to match keyboard
set -g base-index 1
setw -g pane-base-index 1
# Open a new session on attach if one isn't already open
# Must come after base-index settings
new-session


@@ -1,8 +1,17 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.home.tmux;
in
{
home-manager.users.jake.programs.tmux = {
enable = true;
extraConfig = lib.readFile ./.tmux.conf;
options.custom.home.tmux = {
enable = lib.mkEnableOption "tmux";
};
config = lib.mkIf cfg.enable {
home-manager.users.jake.programs.tmux = {
enable = true;
extraConfig = lib.readFile ./.tmux.conf;
};
};
}


@@ -17,7 +17,7 @@ in
script = "${pkgs.writers.writePerl "hostinfo" {
libraries = with pkgs; [
perl536Packages.HTTPDaemon
perlPackages.HTTPDaemon
];
} ''
use v5.10;

25
modules/ids.nix Normal file

@@ -0,0 +1,25 @@
{ config, pkgs, lib, ... }:
{
config = {
ids.uids = {
## Defined System Users (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
unifi = 183;
chia = 185;
gitea = 186;
## Consistent People
jake = 1000;
joseph = 1001;
};
ids.gids = {
## Defined System Groups (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
unifi = 183;
chia = 185;
gitea = 186;
## Consistent Groups
mediaaccess = 1200;
};
};
}


@@ -2,7 +2,6 @@
let
cfg = config.custom.impermanence;
listIf = (enable: x: if enable then x else [ ]);
in
{
options.custom.impermanence = {
@@ -12,6 +11,13 @@ in
type = lib.types.str;
default = "/data";
};
cache = {
enable = lib.mkEnableOption "impermanence.cache";
path = lib.mkOption {
type = lib.types.str;
default = "/cache";
};
};
users = lib.mkOption {
type = with lib.types; listOf str;
@@ -31,41 +37,69 @@ in
config = lib.mkIf cfg.enable {
fileSystems.${cfg.base}.neededForBoot = true;
services.openssh.hostKeys = [
{ path = "/data/system/etc/ssh/ssh_host_ed25519_key"; type = "ed25519"; }
{ path = "/data/system/etc/ssh/ssh_host_rsa_key"; type = "rsa"; bits = 4096; }
];
environment.persistence."${cfg.base}/system" = {
hideMounts = true;
directories = [
"/etc/nixos"
] ++ (listIf config.custom.tailscale.enable [ "/var/lib/tailscale" ]) ++
(listIf config.services.zigbee2mqtt.enable [ config.services.zigbee2mqtt.dataDir ]) ++
(listIf config.hardware.bluetooth.enable [ "/var/lib/bluetooth" ]);
services = {
openssh.hostKeys = [
{ path = "/data/system/etc/ssh/ssh_host_ed25519_key"; type = "ed25519"; }
{ path = "/data/system/etc/ssh/ssh_host_rsa_key"; type = "rsa"; bits = 4096; }
];
matrix-synapse.dataDir = "${cfg.base}/system/var/lib/matrix-synapse";
gitea.stateDir = "${cfg.base}/system/var/lib/gitea";
};
environment.persistence = lib.mkMerge [
{
"${cfg.base}/system" = {
hideMounts = true;
directories = [
"/etc/nixos"
] ++ (lib.lists.optional config.services.tailscale.enable "/var/lib/tailscale") ++
(lib.lists.optional config.services.zigbee2mqtt.enable config.services.zigbee2mqtt.dataDir) ++
(lib.lists.optional config.services.postgresql.enable config.services.postgresql.dataDir) ++
(lib.lists.optional config.hardware.bluetooth.enable "/var/lib/bluetooth") ++
(lib.lists.optional config.custom.services.unifi.enable "/var/lib/unifi") ++
(lib.lists.optional (config.virtualisation.oci-containers.containers != { }) "/var/lib/containers") ++
(lib.lists.optional config.services.tang.enable "/var/lib/private/tang") ++
(lib.lists.optional config.services.caddy.enable "/var/lib/caddy") ++
(lib.lists.optional config.services.step-ca.enable "/var/lib/step-ca/db");
};
}
(lib.mkIf cfg.cache.enable {
"${cfg.cache.path}/system" = {
hideMounts = true;
directories = (lib.lists.optional config.services.postgresqlBackup.enable config.services.postgresqlBackup.location);
};
})
];
home-manager.users =
let
mkUser = (x: {
name = x;
value = {
home.persistence."/data/users/${x}" = {
files = [
".zsh_history"
] ++ cfg.userExtraFiles.${x} or [ ];
home = {
persistence."/data/users/${x}" = {
allowOther = false;
directories = cfg.userExtraDirs.${x} or [ ];
files = cfg.userExtraFiles.${x} or [ ];
directories = cfg.userExtraDirs.${x} or [ ];
};
file.".zshrc".text = lib.mkForce ''
HISTFILE=/data/users/${x}/.zsh_history
'';
};
};
});
in
builtins.listToAttrs (builtins.map mkUser cfg.users);
systemd.tmpfiles.rules = builtins.map
systemd.tmpfiles.rules = lib.lists.flatten (builtins.map
(user:
let details = config.users.users.${user}; in "L ${details.home}/local - ${user} ${details.group} - /data/users/${user}")
cfg.users;
let details = config.users.users.${user}; in [
"d /data/users/${user} 0700 ${user} ${details.group} - -"
"L ${details.home}/local - ${user} ${details.group} - /data/users/${user}"
])
cfg.users);
};
}


@@ -11,19 +11,41 @@ in
};
locations = lib.mkOption {
default = {
services = {
downloads = "tywin.storage.ts.hillion.co.uk";
mastodon = "vm.strangervm.ts.hillion.co.uk";
matrix = "vm.strangervm.ts.hillion.co.uk";
};
};
readOnly = true;
};
};
config = lib.mkIf cfg.autoServe {
custom.services.downloads.enable = cfg.locations.services.downloads == config.networking.fqdn;
custom.services.mastodon.enable = cfg.locations.services.mastodon == config.networking.fqdn;
custom.services.matrix.enable = cfg.locations.services.matrix == config.networking.fqdn;
};
config = lib.mkMerge [
{
custom.locations.locations = {
services = {
authoritative_dns = [ "boron.cx.ts.hillion.co.uk" ];
downloads = "tywin.storage.ts.hillion.co.uk";
gitea = "boron.cx.ts.hillion.co.uk";
homeassistant = "microserver.home.ts.hillion.co.uk";
mastodon = "";
matrix = "boron.cx.ts.hillion.co.uk";
tang = [
"li.pop.ts.hillion.co.uk"
"microserver.home.ts.hillion.co.uk"
"sodium.pop.ts.hillion.co.uk"
];
unifi = "boron.cx.ts.hillion.co.uk";
version_tracker = [ "boron.cx.ts.hillion.co.uk" ];
};
};
}
(lib.mkIf cfg.autoServe
{
custom.services = lib.mapAttrsRecursive
(path: value: {
enable =
if builtins.isList value
then builtins.elem config.networking.fqdn value
else config.networking.fqdn == value;
})
cfg.locations.services;
})
];
}


@@ -1,12 +1,9 @@
{ pkgs, lib, config, nixpkgs-unstable, ... }:
{ pkgs, lib, config, ... }:
let
cfg = config.custom.resilio;
in
{
imports = [ "${nixpkgs-unstable}/nixos/modules/services/networking/resilio.nix" ];
disabledModules = [ "services/networking/resilio.nix" ];
options.custom.resilio = {
enable = lib.mkEnableOption "resilio";
@@ -64,5 +61,7 @@ in
in
builtins.map (folder: mkFolder folder.name folder.secret) cfg.folders;
};
systemd.services.resilio.unitConfig.RequiresMountsFor = builtins.map (folder: "${config.services.resilio.directoryRoot}/${folder.name}") cfg.folders;
};
}


@@ -0,0 +1,50 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.services.authoritative_dns;
in
{
options.custom.services.authoritative_dns = {
enable = lib.mkEnableOption "authoritative_dns";
};
config = lib.mkIf cfg.enable {
services.nsd = {
enable = true;
zones = {
"ts.hillion.co.uk" = {
data =
let
makeRecords = type: s: (lib.concatStringsSep "\n" (lib.collect builtins.isString (lib.mapAttrsRecursive (path: value: "${lib.concatStringsSep "." (lib.reverseList path)} 86400 ${type} ${value}") s)));
in
''
$ORIGIN ts.hillion.co.uk.
$TTL 86400
ts.hillion.co.uk. IN SOA ns1.hillion.co.uk. hostmaster.hillion.co.uk. (
1 ;Serial
7200 ;Refresh
3600 ;Retry
1209600 ;Expire
3600 ;Negative response caching TTL
)
86400 NS ns1.hillion.co.uk.
ca 21600 CNAME sodium.pop.ts.hillion.co.uk.
deluge.downloads 21600 CNAME tywin.storage.ts.hillion.co.uk.
graphs.router.home 21600 CNAME router.home.ts.hillion.co.uk.
prowlarr.downloads 21600 CNAME tywin.storage.ts.hillion.co.uk.
radarr.downloads 21600 CNAME tywin.storage.ts.hillion.co.uk.
restic.tywin.storage 21600 CNAME tywin.storage.ts.hillion.co.uk.
sonarr.downloads 21600 CNAME tywin.storage.ts.hillion.co.uk.
zigbee2mqtt.home 21600 CNAME router.home.ts.hillion.co.uk.
'' + (makeRecords "A" config.custom.dns.authoritative.ipv4.uk.co.hillion.ts) + "\n\n" + (makeRecords "AAAA" config.custom.dns.authoritative.ipv6.uk.co.hillion.ts);
};
};
};
};
}


@@ -0,0 +1,16 @@
{ config, lib, ... }:
{
imports = [
./authoritative_dns.nix
./downloads.nix
./gitea/default.nix
./homeassistant.nix
./mastodon/default.nix
./matrix.nix
./tang.nix
./unifi.nix
./version_tracker.nix
./zigbee2mqtt.nix
];
}


@@ -29,10 +29,16 @@ in
virtualHosts = builtins.listToAttrs (builtins.map
(x: {
name = "http://${x}.downloads.ts.hillion.co.uk";
name = "${x}.downloads.ts.hillion.co.uk";
value = {
listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
extraConfig = "reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock";
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = ''
reverse_proxy unix//${cfg.metadataPath}/caddy/caddy.sock
tls {
ca https://ca.ts.hillion.co.uk:8443/acme/acme/directory
}
'';
};
}) [ "prowlarr" "sonarr" "radarr" "deluge" ]);
};
@@ -94,6 +100,8 @@ in
containers."downloads" = {
autoStart = true;
ephemeral = true;
additionalCapabilities = [ "CAP_NET_ADMIN" ];
extraFlags = [ "--network-namespace-path=/run/netns/downloads" ];
bindMounts = {
@@ -123,13 +131,17 @@ in
systemd.services.setup-loopback = {
description = "Setup container loopback adapter.";
after = [ "network-pre.target" ];
before = [ "network.target" ];
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
script = with pkgs; "${iproute2}/bin/ip link set up lo";
};
networking.hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
networking = {
nameservers = [ "1.1.1.1" "8.8.8.8" ];
hosts = { "127.0.0.1" = builtins.map (x: "${x}.downloads.ts.hillion.co.uk") [ "prowlarr" "sonarr" "radarr" "deluge" ]; };
};
services = {
prowlarr.enable = true;
@@ -146,6 +158,7 @@ in
deluge = {
enable = true;
web.enable = true;
group = "mediaaccess";
dataDir = "/var/lib/deluge";
authFile = "/run/agenix/deluge/auth";
@@ -154,11 +167,18 @@ in
config = {
download_location = "/media/downloads";
max_connections_global = 1024;
max_upload_speed = 12500;
max_download_speed = 25000;
max_active_seeding = 192;
max_active_downloading = 64;
max_active_limit = 256;
dont_count_slow_torrents = true;
stop_seed_at_ratio = true;
stop_seed_ratio = 2;
share_ratio_limit = 2;
enabled_plugins = [ "Label" ];
};
};


@@ -0,0 +1,105 @@
{ config, lib, pkgs, ... }:
let
cfg = config.custom.services.gitea.actions;
in
{
options.custom.services.gitea.actions = {
enable = lib.mkEnableOption "gitea-actions";
labels = lib.mkOption {
type = with lib.types; listOf str;
default = [
"ubuntu-latest:docker://node:16-bullseye"
"ubuntu-20.04:docker://node:16-bullseye"
];
};
tokenSecret = lib.mkOption {
type = lib.types.path;
};
};
config = lib.mkIf cfg.enable {
age.secrets."gitea/actions/token".file = cfg.tokenSecret;
# Run gitea-actions in a container and firewall it such that it can only
# access the Internet (not private networks).
containers."gitea-actions" = {
autoStart = true;
ephemeral = true;
privateNetwork = true; # all traffic goes through ve-gitea-actions on the host
hostAddress = "10.108.27.1";
localAddress = "10.108.27.2";
extraFlags = [
# Extra system calls required to nest Docker, taken from https://wiki.archlinux.org/title/systemd-nspawn
"--system-call-filter=add_key"
"--system-call-filter=keyctl"
"--system-call-filter=bpf"
];
bindMounts = let tokenPath = config.age.secrets."gitea/actions/token".path; in {
"${tokenPath}".hostPath = tokenPath;
};
timeoutStartSec = "5min";
config = (hostConfig: ({ config, pkgs, ... }: {
config = let cfg = hostConfig.custom.services.gitea.actions; in {
system.stateVersion = "23.11";
virtualisation.docker.enable = true;
services.gitea-actions-runner.instances.container = {
enable = true;
url = "https://gitea.hillion.co.uk";
tokenFile = hostConfig.age.secrets."gitea/actions/token".path;
name = "${hostConfig.networking.hostName}";
labels = cfg.labels;
settings = {
runner = {
capacity = 3;
};
cache = {
enabled = true;
host = "10.108.27.2";
port = 41919;
};
};
};
# Drop any packets to private networks
networking = {
firewall.enable = lib.mkForce false;
nftables = {
enable = true;
ruleset = ''
table inet filter {
chain output {
type filter hook output priority 100; policy accept;
ct state { established, related } counter accept
ip daddr 10.0.0.0/8 drop
ip daddr 100.64.0.0/10 drop
ip daddr 172.16.0.0/12 drop
ip daddr 192.168.0.0/16 drop
}
}
'';
};
};
};
})) config;
};
networking.nat = {
enable = true;
externalInterface = "eth0";
internalIPs = [ "10.108.27.2" ];
};
};
}


@@ -0,0 +1,8 @@
{ ... }:
{
imports = [
./actions.nix
./gitea.nix
];
}


@@ -0,0 +1,113 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.gitea;
in
{
options.custom.services.gitea = {
enable = lib.mkEnableOption "gitea";
httpPort = lib.mkOption {
type = lib.types.port;
default = 3000;
};
sshPort = lib.mkOption {
type = lib.types.port;
default = 3022;
};
};
config = lib.mkIf cfg.enable {
age.secrets = {
"gitea/mailer_password" = {
file = ../../../secrets/gitea/mailer_password.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
};
"gitea/oauth_jwt_secret" = {
file = ../../../secrets/gitea/oauth_jwt_secret.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/oauth2_jwt_secret";
};
"gitea/lfs_jwt_secret" = {
file = ../../../secrets/gitea/lfs_jwt_secret.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/lfs_jwt_secret";
};
"gitea/security_secret_key" = {
file = ../../../secrets/gitea/security_secret_key.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/secret_key";
};
"gitea/security_internal_token" = {
file = ../../../secrets/gitea/security_internal_token.age;
owner = config.services.gitea.user;
group = config.services.gitea.group;
path = "${config.services.gitea.customDir}/conf/internal_token";
};
};
users.users.gitea.uid = config.ids.uids.gitea;
users.groups.gitea.gid = config.ids.gids.gitea;
services.gitea = {
enable = true;
package = pkgs.unstable.gitea;
mailerPasswordFile = config.age.secrets."gitea/mailer_password".path;
appName = "Hillion Gitea";
database = {
type = "sqlite3";
name = "gitea";
path = "${config.services.gitea.stateDir}/data/gitea.db";
};
lfs.enable = true;
settings = {
server = {
DOMAIN = "gitea.hillion.co.uk";
HTTP_PORT = cfg.httpPort;
ROOT_URL = "https://gitea.hillion.co.uk/";
OFFLINE_MODE = false;
START_SSH_SERVER = true;
SSH_LISTEN_PORT = cfg.sshPort;
BUILTIN_SSH_SERVER_USER = "git";
SSH_DOMAIN = "ssh.gitea.hillion.co.uk";
SSH_PORT = 22;
};
mailer = {
ENABLED = true;
SMTP_ADDR = "smtp.mailgun.org:587";
FROM = "gitea@mg.hillion.co.uk";
USER = "gitea@mg.hillion.co.uk";
};
security = {
INSTALL_LOCK = true;
};
service = {
REGISTER_EMAIL_CONFIRM = true;
ENABLE_NOTIFY_MAIL = true;
EMAIL_DOMAIN_ALLOWLIST = "hillion.co.uk,cam.ac.uk,cl.cam.ac.uk";
};
session = {
PROVIDER = "file";
};
};
};
networking.firewall.extraCommands = ''
# proxy all traffic on public interface to the gitea SSH server
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A PREROUTING -t nat -i eth0 -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
# proxy locally originating outgoing packets
iptables -A OUTPUT -d 138.201.252.214 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
ip6tables -A OUTPUT -d 2a01:4f8:173:23d2::2 -t nat -p tcp --dport 22 -j REDIRECT --to-port ${builtins.toString cfg.sshPort}
'';
};
}


@@ -0,0 +1,164 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.homeassistant;
in
{
options.custom.services.homeassistant = {
enable = lib.mkEnableOption "homeassistant";
backup = lib.mkOption {
default = true;
type = lib.types.bool;
};
};
config = lib.mkIf cfg.enable {
custom = {
backups.homeassistant.enable = cfg.backup;
};
age.secrets."homeassistant/secrets.yaml" = {
file = ../../secrets/homeassistant/secrets.yaml.age;
path = "${config.services.home-assistant.configDir}/secrets.yaml";
owner = "hass";
group = "hass";
};
services = {
postgresql = {
enable = true;
initialScript = pkgs.writeText "homeassistant-init.sql" ''
CREATE ROLE "hass" WITH LOGIN;
CREATE DATABASE "homeassistant" WITH OWNER "hass" ENCODING "utf8";
'';
};
home-assistant = {
enable = true;
extraPackages = python3Packages: with python3Packages; [
psycopg2 # postgresql support
];
extraComponents = [
"bluetooth"
"default_config"
"esphome"
"google_assistant"
"homekit"
"met"
"mobile_app"
"mqtt"
"otp"
"smartthings"
"sonos"
"sun"
"switchbot"
];
customComponents = with pkgs.home-assistant-custom-components; [
adaptive_lighting
];
config = {
default_config = { };
recorder = {
db_url = "postgresql://@/homeassistant";
};
http = {
use_x_forwarded_for = true;
trusted_proxies = with config.custom.dns.authoritative; [
ipv4.uk.co.hillion.ts.cx.boron
ipv6.uk.co.hillion.ts.cx.boron
];
};
google_assistant = {
project_id = "homeassistant-8de41";
service_account = {
client_email = "!secret google_assistant_service_account_client_email";
private_key = "!secret google_assistant_service_account_private_key";
};
report_state = true;
expose_by_default = true;
exposed_domains = [ "light" ];
entity_config = {
"input_boolean.sleep_mode" = { };
};
};
homekit = [{
filter = {
include_domains = [ "light" ];
};
}];
bluetooth = { };
adaptive_lighting = {
lights = [
"light.bedroom_lamp"
"light.bedroom_light"
"light.cubby_light"
"light.desk_lamp"
"light.hallway_light"
"light.living_room_lamp"
"light.living_room_light"
"light.wardrobe_light"
];
min_sunset_time = "21:00";
};
light = [
{
platform = "template";
lights = {
bathroom_light = {
unique_id = "87a4cbb5-e5a7-44fd-9f28-fec2d6a62538";
value_template = "{{ false if state_attr('script.bathroom_light_switch_if_on', 'last_triggered') > states.sensor.bathroom_motion_sensor_illuminance_lux.last_reported else states('sensor.bathroom_motion_sensor_illuminance_lux') | int > 500 }}";
turn_on = { service = "script.noop"; };
turn_off = { service = "script.bathroom_light_switch_if_on"; };
};
};
}
];
sensor = [
{
# Time/Date (for automations)
platform = "time_date";
display_options = [
"date"
"date_time_iso"
];
}
{
# Living Room Temperature
platform = "statistics";
name = "Living Room temperature (rolling average)";
entity_id = "sensor.living_room_environment_sensor_temperature";
state_characteristic = "average_linear";
unique_id = "e86198a8-88f4-4822-95cb-3ec7b2662395";
max_age = {
minutes = 5;
};
}
];
input_boolean = {
sleep_mode = {
name = "Set house to sleep mode";
icon = "mdi:sleep";
};
};
# UI managed expansions
automation = "!include automations.yaml";
script = "!include scripts.yaml";
scene = "!include scenes.yaml";
};
};
};
};
}


@@ -32,26 +32,72 @@ in
};
};
services.mastodon = {
enable = true;
localDomain = "social.hillion.co.uk";
users.users.caddy.extraGroups = [ "mastodon" ];
vapidPublicKeyFile = builtins.path { path = ./vapid_public_key; };
otpSecretFile = config.age.secrets."mastodon/otp_secret_file".path;
secretKeyBaseFile = config.age.secrets."mastodon/secret_key_base".path;
vapidPrivateKeyFile = config.age.secrets."mastodon/vapid_private_key".path;
services = {
mastodon = {
enable = true;
localDomain = "social.hillion.co.uk";
smtp = {
user = "mastodon@social.hillion.co.uk";
port = 587;
passwordFile = config.age.secrets."mastodon/mastodon_at_social.hillion.co.uk".path;
host = "smtp.eu.mailgun.org";
fromAddress = "mastodon@social.hillion.co.uk";
authenticate = true;
vapidPublicKeyFile = builtins.path { path = ./vapid_public_key; };
otpSecretFile = config.age.secrets."mastodon/otp_secret_file".path;
secretKeyBaseFile = config.age.secrets."mastodon/secret_key_base".path;
vapidPrivateKeyFile = config.age.secrets."mastodon/vapid_private_key".path;
smtp = {
user = "mastodon@social.hillion.co.uk";
port = 587;
passwordFile = config.age.secrets."mastodon/mastodon_at_social.hillion.co.uk".path;
host = "smtp.eu.mailgun.org";
fromAddress = "mastodon@social.hillion.co.uk";
authenticate = true;
};
extraConfig = {
EMAIL_DOMAIN_WHITELIST = "hillion.co.uk";
};
streamingProcesses = 9;
};
extraConfig = {
EMAIL_DOMAIN_WHITELIST = "hillion.co.uk";
caddy = {
enable = true;
virtualHosts."social.hillion.co.uk".extraConfig = ''
handle_path /system/* {
file_server * {
root /var/lib/mastodon/public-system
}
}
handle /api/v1/streaming/* {
reverse_proxy unix//run/mastodon-streaming/streaming.socket
}
route * {
file_server * {
root ${pkgs.mastodon}/public
pass_thru
}
reverse_proxy * unix//run/mastodon-web/web.socket
}
handle_errors {
root * ${pkgs.mastodon}/public
rewrite 500.html
file_server
}
encode gzip
header /* {
Strict-Transport-Security "max-age=31536000;"
}
header /emoji/* Cache-Control "public, max-age=31536000, immutable"
header /packs/* Cache-Control "public, max-age=31536000, immutable"
header /system/accounts/avatars/* Cache-Control "public, max-age=31536000, immutable"
header /system/media_attachments/files/* Cache-Control "public, max-age=31536000, immutable"
'';
};
};
};


@@ -35,6 +35,16 @@ in
owner = "matrix-synapse";
group = "matrix-synapse";
};
"matrix/matrix.hillion.co.uk/registration_shared_secret" = {
file = ../../secrets/matrix/matrix.hillion.co.uk/registration_shared_secret.age;
owner = "matrix-synapse";
group = "matrix-synapse";
};
"matrix/matrix.hillion.co.uk/syncv3_secret" = {
file = ../../secrets/matrix/matrix.hillion.co.uk/syncv3_secret.age;
};
};
services = {
@@ -58,6 +68,8 @@ in
];
settings = {
registration_shared_secret_path = config.age.secrets."matrix/matrix.hillion.co.uk/registration_shared_secret".path;
server_name = "hillion.co.uk";
public_baseurl = "https://matrix.hillion.co.uk/";
listeners = [
@@ -66,7 +78,11 @@ in
tls = false;
type = "http";
x_forwarded = true;
bind_addresses = [ "::1" ];
bind_addresses = [
"::1"
config.custom.dns.tailscale.ipv4
config.custom.dns.tailscale.ipv6
];
resources = [
{
names = [ "client" "federation" ];
@@ -102,6 +118,15 @@ in
};
};
matrix-sliding-sync = {
enable = true;
environmentFile = config.age.secrets."matrix/matrix.hillion.co.uk/syncv3_secret".path;
settings = {
SYNCV3_SERVER = "https://matrix.hillion.co.uk";
SYNCV3_BINDADDR = "[::]:8009";
};
};
heisenbridge = lib.mkIf cfg.heisenbridge {
enable = true;
owner = "@jake:hillion.co.uk";
@@ -109,10 +134,12 @@ in
};
};
systemd.services.heisenbridge = lib.mkIf cfg.heisenbridge {
serviceConfig = {
Restart = "on-failure";
RestartSec = 15;
systemd.services = {
heisenbridge = lib.mkIf cfg.heisenbridge {
serviceConfig = {
Restart = "on-failure";
RestartSec = 15;
};
};
};
};

20
modules/services/tang.nix Normal file

@@ -0,0 +1,20 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.tang;
in
{
options.custom.services.tang = {
enable = lib.mkEnableOption "tang";
};
config = lib.mkIf cfg.enable {
services.tang = {
enable = true;
ipAddressAllow = [
"138.201.252.214/32"
"10.64.50.20/32"
];
};
};
}


@@ -0,0 +1,41 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.services.unifi;
in
{
options.custom.services.unifi = {
enable = lib.mkEnableOption "unifi";
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/unifi";
readOnly = true; # NixOS module only supports this directory
};
};
config = lib.mkIf cfg.enable {
# Fix dynamically allocated user and group ids
users.users.unifi.uid = config.ids.uids.unifi;
users.groups.unifi.gid = config.ids.gids.unifi;
services.caddy = {
enable = true;
virtualHosts = {
"unifi.hillion.co.uk".extraConfig = ''
reverse_proxy https://localhost:8443 {
transport http {
tls_insecure_skip_verify
}
}
'';
};
};
services.unifi = {
enable = true;
unifiPackage = pkgs.unifi8;
};
};
}


@@ -35,12 +35,12 @@ in
fi
cd repo
${git}/bin/git fetch
${git}/bin/git switch --detach origin/main
code=0
for path in hosts/*
do
hostname=''${path##*/}
if test -f "hosts/$hostname/darwin"; then continue; fi
if rev=$(${curl}/bin/curl -s --connect-timeout 15 http://$hostname:30653/current/nixos/system/configurationRevision); then
echo "$hostname: $rev (current)"

@@ -23,7 +23,7 @@ in
enable = true;
virtualHosts."http://zigbee2mqtt.home.ts.hillion.co.uk" = {
listenAddresses = [ config.custom.tailscale.ipv4Addr config.custom.tailscale.ipv6Addr ];
listenAddresses = [ config.custom.dns.tailscale.ipv4 config.custom.dns.tailscale.ipv6 ];
extraConfig = "reverse_proxy http://127.0.0.1:15606";
};
};

@@ -1,7 +1,20 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.shell;
in
{
config = {
imports = [
./update_scripts.nix
];
options.custom.shell = {
enable = lib.mkEnableOption "shell";
};
config = lib.mkIf cfg.enable {
custom.shell.update_scripts.enable = true;
users.defaultUserShell = pkgs.zsh;
environment.systemPackages = with pkgs; [ direnv ];

@@ -0,0 +1,64 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.shell.update_scripts;
update = pkgs.writeScriptBin "update" ''
#! ${pkgs.runtimeShell}
set -e
if [[ $EUID -ne 0 ]]; then
exec sudo ${pkgs.runtimeShell} "$0" "$@"
fi
if [ -n "$1" ]; then
BRANCH=$1
else
BRANCH=main
fi
cd /etc/nixos
if [ "$BRANCH" = "main" ]; then
${pkgs.git}/bin/git switch $BRANCH
${pkgs.git}/bin/git pull
else
${pkgs.git}/bin/git fetch
${pkgs.git}/bin/git switch --detach origin/$BRANCH
fi
if ! ${pkgs.nixos-rebuild}/bin/nixos-rebuild --flake "/etc/nixos#${config.networking.fqdn}" test; then
echo "WARNING: \`nixos-rebuild test' failed!"
fi
while true; do
read -p "Do you want to boot this configuration? " yn
case $yn in
[Yy]* ) break;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
${pkgs.nixos-rebuild}/bin/nixos-rebuild --flake "/etc/nixos#${config.networking.fqdn}" boot
while true; do
read -p "Would you like to reboot now? " yn
case $yn in
[Yy]* ) reboot;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
'';
in
{
options.custom.shell.update_scripts = {
enable = lib.mkEnableOption "update_scripts";
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [
update
];
};
}

@@ -1,25 +0,0 @@
{ config, pkgs, lib, ... }:
{
config.age.secrets."spotify/11132032266" = {
file = ../../secrets/spotify/11132032266.age;
owner = "jake";
};
config.hardware.pulseaudio.enable = true;
config.users.users.jake.extraGroups = [ "audio" ];
config.users.users.jake.packages = with pkgs; [ spotify-tui ];
config.home-manager.users.jake.services.spotifyd = {
enable = true;
settings = {
global = {
username = "11132032266";
password_cmd = "cat ${config.age.secrets."spotify/11132032266".path}";
backend = "pulseaudio";
};
};
};
}

modules/ssh/default.nix (new file, 55 lines)

@@ -0,0 +1,55 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.ssh;
in
{
options.custom.ssh = {
enable = lib.mkEnableOption "ssh";
};
config = lib.mkIf cfg.enable {
users.users =
if config.custom.user == "jake" then {
"jake".openssh.authorizedKeys.keys = [
"sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBBwJH4udKNvi9TjOBgkxpBBy7hzWqmP0lT5zE9neusCpQLIiDhr6KXYMPXWXdZDc18wH1OLi2+639dXOvp8V/wgAAAAEc3NoOg== jake@beryllium-keys"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOt74U+rL+BMtAEjfu/Optg1D7Ly7U+TupRxd5u9kfN7oJnW4dJA25WRSr4dgQNq7MiMveoduBY/ky2s0c9gvIA= jake@jake-gentoo"
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0uKIvvvkzrOcS7AcamsQRFId+bqPwUC9IiUIsiH5oWX1ReiITOuEo+TL9YMII5RyyfJFeu2ZP9moNuZYlE7Bs= jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAyFsYYjLZ/wyw8XUbcmkk6OKt2IqLOnWpRE5gEvm3X0V4IeTOL9F4IL79h7FTsPvi2t9zGBL1hxeTMZHSGfrdWaMJkQp94gA1W30MKXvJ47nEVt0HUIOufGqgTTaAn4BHxlFUBUuS7UxaA4igFpFVoPJed7ZMhMqxg+RWUmBAkcgTWDMgzUx44TiNpzkYlG8cYuqcIzpV2dhGn79qsfUzBMpGJgkxjkGdDEHRk66JXgD/EtVasZvqp5/KLNnOpisKjR88UJKJ6/buV7FLVra4/0hA9JtH9e1ecCfxMPbOeluaxlieEuSXV2oJMbQoPP87+/QriNdi/6QuCHkMDEhyGw== jake@jake-mbp"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCw4lgH20nfuchDqvVf0YciqN0GnBw5hfh8KIun5z0P7wlNgVYnCyvPvdIlGf2Nt1z5EGfsMzMLhKDOZkcTMlhupd+j2Er/ZB764uVBGe1n3CoPeasmbIlnamZ12EusYDvQGm2hVJTGQPPp9nKaRxr6ljvTMTNl0KWlWvKP4kec74d28MGgULOPLT3HlAyvUymSULK4lSxFK0l97IVXLa8YwuL5TNFGHUmjoSsi/Q7/CKaqvNh+ib1BYHzHYsuEzaaApnCnfjDBNexHm/AfbI7s+g3XZDcZOORZn6r44dOBNFfwvppsWj3CszwJQYIFeJFuMRtzlC8+kyYxci0+FXHn jake@jake-gentoo"
];
} else { };
programs.mosh.enable = true;
services.openssh = {
enable = true;
openFirewall = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
programs.ssh.knownHosts = {
# Global Internet hosts
"ssh.gitea.hillion.co.uk".publicKey = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxQpywsy+WGeaEkEL67xOBL1NIE++pcojxro5xAPO6VQe2N79388NRFMLlX6HtnebkIpVrvnqdLOs0BPMAokjaWCC4Ay7T/3ko1kXSOlqHY5Ye9jtjRK+wPHMZgzf74a3jlvxjrXJMA70rPQ3X+8UGpA04eB3JyyLTLuVvc6znMe53QiZ0x+hSz+4pYshnCO2UazJ148vV3htN6wRK+uqjNdjjQXkNJ7llNBSrvmfrLidlf0LRphEk43maSQCBcLEZgf4pxXBA7rFuZABZTz1twbnxP2ziyBaSOs7rcII+jVhF2cqJlElutBfIgRNJ3DjNiTcdhNaZzkwJ59huR0LUFQlHI+SALvPzE9ZXWVOX/SqQG+oIB8VebR52icii0aJH7jatkogwNk0121xmhpvvR7gwbJ9YjYRTpKs4lew3bq/W/OM8GF/FEuCsCuNIXRXKqIjJVAtIpuuhxPymFHeqJH3wK3f6jTJfcAz/z33Rwpow2VOdDyqrRfAW8ti73CCnRlN+VJi0V/zvYGs9CHldY3YvMr7rSd0+fdGyJHSTSRBF0vcyRVA/SqSfcIo/5o0ssYoBnQCg6gOkc3nNQ0C0/qh1ww17rw4hqBRxFJ2t3aBUMK+UHPxrELLVmG6ZUmfg9uVkOoafjRsoML6DVDB4JAk5JsmcZhybOarI9PJfEQ==";
# Tailscale hosts
"boron.cx.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtcJ7HY/vjtheMV8EN2wlTw1hU53CJebGIeRJcSkzt5";
"be.lt.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILV3OSUT+cqFqrFHZGfn7/xi5FW3n1qjUFy8zBbYs2Sm";
"dancefloor.dancefloor.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEXkGueVYKr2wp/VHo2QLis0kmKtc/Upg3pGoHr6RkzY";
"gendry.jakehillion.terminals.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXM5aDvNv4MTITXAvJWSS2yvr/mbxJE31tgwJtcl38c";
"homeassistant.homeassistant.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM2ytacl/zYXhgvosvhudsl0zW5eQRHXm9aMqG9adux";
"li.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQWgcDFL9UZBDKHPiEGepT1Qsc4gz3Pee0/XVHJ6V6u";
"microserver.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPPOCPqXm5a+vGB6PsJFvjKNgjLhM5MxrwCy6iHGRjXw";
"router.home.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlCj/i2xprN6h0Ik2tthOJQy6Qwq3Ony73+yfbHYTFu";
"sodium.pop.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQmG7v/XrinPmkTU2eIoISuU3+hoV4h60Bmbwd+xDjr";
"theon.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN59psLVu3/sQORA4x3p8H3ei8MCQlcwX5T+k3kBeBMf";
"tywin.storage.ts.hillion.co.uk".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGATsjWO0qZNFp2BhfgDuWi+e/ScMkFxp79N2OZoed1k";
};
programs.ssh.knownHostsFiles = [ ./github_known_hosts ];
};
}

@@ -0,0 +1,3 @@
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==

@@ -1,65 +0,0 @@
{ pkgs, lib, config, ... }:
let
cfg = config.custom.tailscale;
in
{
options.custom.tailscale = {
enable = lib.mkEnableOption "tailscale";
preAuthKeyFile = lib.mkOption {
type = lib.types.str;
};
advertiseRoutes = lib.mkOption {
type = with lib.types; listOf str;
default = [ ];
};
advertiseExitNode = lib.mkOption {
type = lib.types.bool;
default = false;
};
ipv4Addr = lib.mkOption { type = lib.types.str; };
ipv6Addr = lib.mkOption { type = lib.types.str; };
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ pkgs.tailscale ];
services.tailscale.enable = true;
networking.firewall.checkReversePath = lib.mkIf cfg.advertiseExitNode "loose";
systemd.services.tailscale-autoconnect = {
description = "Automatic connection to Tailscale";
# make sure tailscale is running before trying to connect to tailscale
after = [ "network-pre.target" "tailscale.service" ];
wants = [ "network-pre.target" "tailscale.service" ];
wantedBy = [ "multi-user.target" ];
# set this service as a oneshot job
serviceConfig.Type = "oneshot";
# have the job run this shell script
script = with pkgs; ''
# wait for tailscaled to settle
sleep 2
# check if we are already authenticated to tailscale
status="$(${tailscale}/bin/tailscale status -json | ${jq}/bin/jq -r .BackendState)"
if [ $status = "Running" ]; then # if so, then do nothing
exit 0
fi
# otherwise authenticate with tailscale
${tailscale}/bin/tailscale up \
--authkey "$(<${cfg.preAuthKeyFile})" \
--advertise-routes "${lib.concatStringsSep "," cfg.advertiseRoutes}" \
--advertise-exit-node=${if cfg.advertiseExitNode then "true" else "false"}
'';
};
};
}

@@ -1,19 +1,21 @@
{ config, pkgs, lib, ... }:
let
cfg = config.custom.users;
in
{
config = {
ids.uids = {
## Defined System Users (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
## Consistent People
jake = 1000;
joseph = 1001;
};
ids.gids = {
## Defined System Groups (see https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/misc/ids.nix)
## Consistent Groups
mediaaccess = 1200;
options.custom.users = {
jake = {
password = lib.mkOption {
description = "Enable an interactive password.";
type = lib.types.bool;
default = false;
};
};
};
config = lib.mkIf cfg.jake.password {
age.secrets."passwords/jake".file = ../secrets/passwords/jake.age;
users.users.jake.hashedPasswordFile = config.age.secrets."passwords/jake".path;
};
}

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDGTCCAsCgAwIBAgIUMOkPfgLpbA08ovrPt+deXQPpA9kwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ0MDBaFw0zOTA0MTAyMTQ0MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABNweW8IgrXj7Q64RxyK8s9XpbxJ8TbYVv7NALbWUahlT
QPlGX/5XoM3Z5AtISBi1irLEy5o6mx7ebNK4NmwzNlCjggEkMIIBIDAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFMy3oz9l3bwpjgtx6IqL9IH90PXcMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAdBgNV
HREEFjAUghJibG9nLmhpbGxpb24uY28udWswPAYDVR0fBDUwMzAxoC+gLYYraHR0
cDovL2NybC5jbG91ZGZsYXJlLmNvbS9vcmlnaW5fZWNjX2NhLmNybDAKBggqhkjO
PQQDAgNHADBEAiAgVRgo5V09uyMbz1Mevmxe6d2K5xvZuBElVYja/Rf99AIgZkm1
wHEq9wqVYP0oWTiEYQZ6dzKoSwxviOEZI+ttQRA=
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDHDCCAsGgAwIBAgIUMHdmb+Ef9YvVmCtliDhg1gDGt8cwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ1MDBaFw0zOTA0MTAyMTQ1MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABGn2vImTE+gpWx/0ELXue7cL0eGb+I2c9VbUYcy3TBJi
G7S+wl79MBM5+5G0wKhTpBgVpXu1/NHunfM97LGZb5ejggElMIIBITAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFI6dxFPItIKnNN7/xczMOtlTytuvMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAeBgNV
HREEFzAVghNnaXRlYS5oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0
dHA6Ly9jcmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZI
zj0EAwIDSQAwRgIhAKfRSEKCGNY5x4zUNzOy6vfxgDYPfkP6iW5Ha4gNmE+QAiEA
nTsGKr2EoqEdPtnB+wVrYMblWF7/or3JpRYGs6zD2FU=
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDFDCCArugAwIBAgIUedwIJx096VH/KGDgpAKK/Q8jGWUwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTIzMDBaFw0zOTA0MTAyMTIzMDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABIdc0hnQQP7tLADaCGXxZ+1BGbZ8aow/TtHl+aXDbN3t
2vVV2iLmsMbiPcJZ5e9Q2M27L8fZ0uPJP19dDvvN97SjggEfMIIBGzAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFJilRKL8wXskL/LmgH8BnIvLIpkEMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAYBgNV
HREEETAPgg1oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0dHA6Ly9j
cmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZIzj0EAwID
RwAwRAIgbexSqkt3pzCpnpqYXwC5Gmt+nG5OEqETQ6690kpIS74CIFQI3zXlx8zk
GB0BlaZdrraAQP7AuI8CcMd5vbQdnldY
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDJDCCAsmgAwIBAgIUaSXrL4UHFHxDvvnW1720aZkkBCkwCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTUzMDBaFw0zOTA0MTAyMTUzMDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABOz/ljJJjKawHtILlD09YMwmAdhzxTfPPi61qw7R670T
Oe4/KA4zClCKfzqnVEZ4YonfgK8U6VqhLPI4crxUQk+jggEtMIIBKTAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFO7S2TbvL1kel0QH+sYfjD6v2L7oMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAmBgNV
HREEHzAdghtob21lYXNzaXN0YW50LmhpbGxpb24uY28udWswPAYDVR0fBDUwMzAx
oC+gLYYraHR0cDovL2NybC5jbG91ZGZsYXJlLmNvbS9vcmlnaW5fZWNjX2NhLmNy
bDAKBggqhkjOPQQDAgNJADBGAiEAgaiFVCBLVYKjTJV67qKOg1R1GBVszNF+9PCi
ZejJcjwCIQDtl9S3zCl/h8/7uYfk8dHg0Y6kwd5GVuu6HE67GWJ2Yg==
-----END CERTIFICATE-----

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDGzCCAsGgAwIBAgIUFUDTvq6L7SR3qKxaNh77g3XkJk8wCgYIKoZIzj0EAwIw
gY8xCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMTgwNgYDVQQL
Ey9DbG91ZEZsYXJlIE9yaWdpbiBTU0wgRUNDIENlcnRpZmljYXRlIEF1dGhvcml0
eTAeFw0yNDA0MTMyMTQ2MDBaFw0zOTA0MTAyMTQ2MDBaMGIxGTAXBgNVBAoTEENs
b3VkRmxhcmUsIEluYy4xHTAbBgNVBAsTFENsb3VkRmxhcmUgT3JpZ2luIENBMSYw
JAYDVQQDEx1DbG91ZEZsYXJlIE9yaWdpbiBDZXJ0aWZpY2F0ZTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABGpSYrOqMuzCfE6qdpXqFze8RxWDcDSUFRYmotnp4cyK
i6ISovoK7YDKarrHRIvIrsNBaqk+0hjZpOhN/XpU16SjggElMIIBITAOBgNVHQ8B
Af8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwGA1UdEwEB
/wQCMAAwHQYDVR0OBBYEFLoqUdEVGspJs/SGcV7pf2bCzqTrMB8GA1UdIwQYMBaA
FIUwXTsqcNTt1ZJnB/3rObQaDjinMEQGCCsGAQUFBwEBBDgwNjA0BggrBgEFBQcw
AYYoaHR0cDovL29jc3AuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYTAeBgNV
HREEFzAVghNsaW5rcy5oaWxsaW9uLmNvLnVrMDwGA1UdHwQ1MDMwMaAvoC2GK2h0
dHA6Ly9jcmwuY2xvdWRmbGFyZS5jb20vb3JpZ2luX2VjY19jYS5jcmwwCgYIKoZI
zj0EAwIDSAAwRQIhANh3Ds0ZSZp3rEZ46z4sBp+WNQejnDhTCXt2OIRiCrecAiAB
oe21Oz1Pmqv0htFxNf1YbkgJMCoGfENlViuR0cUAJg==
-----END CERTIFICATE-----

@@ -10,82 +10,78 @@ in
};
config = lib.mkIf cfg.enable {
users.users.caddy.extraGroups = [ "mastodon" ];
age.secrets =
let
mkSecret = domain: {
name = "caddy/${domain}.pem";
value = {
file = ../../secrets/certs/${domain}.pem.age;
owner = config.services.caddy.user;
group = config.services.caddy.group;
};
};
in
builtins.listToAttrs (builtins.map mkSecret [
"hillion.co.uk"
"blog.hillion.co.uk"
"gitea.hillion.co.uk"
"homeassistant.hillion.co.uk"
"links.hillion.co.uk"
]);
custom.www.www-repo.enable = true;
services.caddy = {
enable = true;
package = pkgs.unstable.caddy;
virtualHosts."hillion.co.uk".extraConfig = ''
handle /.well-known/* {
respond /.well-known/matrix/server "{\"m.server\": \"matrix.hillion.co.uk:443\"}" 200
respond 404
}
globalConfig = ''
email acme@hillion.co.uk
'';
handle {
redir https://blog.hillion.co.uk{uri}
}
'';
virtualHosts."blog.hillion.co.uk".extraConfig = ''
root * /var/www/blog.hillion.co.uk
file_server
'';
virtualHosts."gitea.hillion.co.uk".extraConfig = ''
reverse_proxy http://gitea.gitea.ts.hillion.co.uk:3000
'';
virtualHosts."homeassistant.hillion.co.uk".extraConfig = ''
reverse_proxy http://homeassistant.homeassistant.ts.hillion.co.uk:8123
'';
virtualHosts."emby.hillion.co.uk".extraConfig = ''
reverse_proxy http://plex.mediaserver.ts.hillion.co.uk:8096
'';
virtualHosts."matrix.hillion.co.uk".extraConfig = ''
reverse_proxy http://${locations.services.matrix}:8008
'';
virtualHosts."unifi.hillion.co.uk".extraConfig = ''
reverse_proxy https://unifi.unifi.ts.hillion.co.uk:8443 {
transport http {
tls_insecure_skip_verify
virtualHosts = {
"hillion.co.uk".extraConfig = ''
tls ${./certs/hillion.co.uk.pem} ${config.age.secrets."caddy/hillion.co.uk.pem".path}
handle /.well-known/* {
header /.well-known/matrix/* Content-Type application/json
header /.well-known/matrix/* Access-Control-Allow-Origin *
respond /.well-known/matrix/server "{\"m.server\": \"matrix.hillion.co.uk:443\"}" 200
respond /.well-known/matrix/client `${builtins.toJSON {
"m.homeserver" = { "base_url" = "https://matrix.hillion.co.uk"; };
"org.matrix.msc3575.proxy" = { "url" = "https://matrix.hillion.co.uk"; };
}}` 200
respond 404
}
}
'';
virtualHosts."drone.hillion.co.uk".extraConfig = ''
reverse_proxy http://vm.strangervm.ts.hillion.co.uk:18733
'';
virtualHosts."social.hillion.co.uk".extraConfig = ''
handle_path /system/* {
file_server * {
root /var/lib/mastodon/public-system
}
}
handle /api/v1/streaming/* {
reverse_proxy unix//run/mastodon-streaming/streaming.socket
}
route * {
file_server * {
root ${pkgs.mastodon}/public
pass_thru
handle {
redir https://blog.hillion.co.uk{uri}
}
reverse_proxy * unix//run/mastodon-web/web.socket
}
handle_errors {
root * ${pkgs.mastodon}/public
rewrite 500.html
'';
"blog.hillion.co.uk".extraConfig = ''
tls ${./certs/blog.hillion.co.uk.pem} ${config.age.secrets."caddy/blog.hillion.co.uk.pem".path}
root * /var/www/blog.hillion.co.uk
file_server
}
encode gzip
header /* {
Strict-Transport-Security "max-age=31536000;"
}
header /emoji/* Cache-Control "public, max-age=31536000, immutable"
header /packs/* Cache-Control "public, max-age=31536000, immutable"
header /system/accounts/avatars/* Cache-Control "public, max-age=31536000, immutable"
header /system/media_attachments/files/* Cache-Control "public, max-age=31536000, immutable"
'';
'';
"homeassistant.hillion.co.uk".extraConfig = ''
tls ${./certs/homeassistant.hillion.co.uk.pem} ${config.age.secrets."caddy/homeassistant.hillion.co.uk.pem".path}
reverse_proxy http://${locations.services.homeassistant}:8123
'';
"gitea.hillion.co.uk".extraConfig = ''
tls ${./certs/gitea.hillion.co.uk.pem} ${config.age.secrets."caddy/gitea.hillion.co.uk.pem".path}
reverse_proxy http://${locations.services.gitea}:3000
'';
"matrix.hillion.co.uk".extraConfig = ''
reverse_proxy /_matrix/client/unstable/org.matrix.msc3575/sync http://${locations.services.matrix}:8009
reverse_proxy /_matrix/* http://${locations.services.matrix}:8008
reverse_proxy /_synapse/client/* http://${locations.services.matrix}:8008
'';
"links.hillion.co.uk".extraConfig = ''
tls ${./certs/links.hillion.co.uk.pem} ${config.age.secrets."caddy/links.hillion.co.uk.pem".path}
redir https://matrix.to/#/@jake:hillion.co.uk
'';
};
};
};
}

@@ -53,10 +53,10 @@ in
};
script = ''
if [ ! -d "${cfg.path}/.git" ] ; then
${pkgs.git}/bin/git clone ${cfg.remote} ${cfg.path}
if [ ! -d "${cfg.location}/.git" ] ; then
${pkgs.git}/bin/git clone ${cfg.remote} ${cfg.location}
else
cd ${cfg.path}
cd ${cfg.location}
${pkgs.git}/bin/git remote set-url origin ${cfg.remote}
${pkgs.git}/bin/git fetch
${pkgs.git}/bin/git reset --hard origin/${cfg.branch}

@@ -1,13 +1,13 @@
{ stdenv, lib, fetchFromGitea, buildGoModule, ... }:
let
version = "1.82.1";
version = "1.84.2";
src = fetchFromGitea {
domain = "gitea.hillion.co.uk";
owner = "JakeHillion";
repo = "storj";
rev = "f75ec5ba34b2ccce005ebdb6fae697e0224998d9";
hash = "sha256-zUpzkdiAbE10fq1KDXEarPURqByD8JV0NkQ9iNxPlWI=";
rev = "5546e07191f01be3269d5ea2dbf5ebb908852288";
hash = "sha256-OpLxi84oS2sCUaZEuKTvbaygkxkRiXlAlRVQDV8VWHg=";
};
meta = with lib; {
description = "Storj is building a distributed cloud storage network.";
@@ -25,7 +25,7 @@ in
buildGoModule rec {
pname = "storagenode";
inherit version src meta;
vendorHash = "sha256-Q9+uwFmPrffvQGT9dHxf0ilCcDeVhUxrJETsngwZUXA=";
vendorHash = "sha256-eSm1Bp+nycd1W9Tx5hvh/Ta3w9u1zsXZ4D77zAnViOA=";
subPackages = [
"cmd/storagenode"
"cmd/identity"

renovate.json (new file, 24 lines)

@@ -0,0 +1,24 @@
{
"nix": {
"enabled": true
},
"lockFileMaintenance": {
"enabled": true,
"schedule": ["* 2-5 * * *"]
},
"rebaseWhen": "behind-base-branch",
"packageRules": [
{
"matchManagers": ["github-actions"],
"automerge": true,
"schedule": [
"after 11pm on Monday",
"after 11pm on Thursday"
]
}
],
"extends": [
"config:recommended",
"helpers:pinGitHubActionDigests"
]
}

scripts/update_nixpkgs.sh (new executable file, 5 lines)

@@ -0,0 +1,5 @@
#!/bin/sh
set -xe
VERSION=$(curl https://gitea.hillion.co.uk/JakeHillion/nixos/raw/branch/main/flake.lock | nix run nixpkgs#jq -- -r '.nodes."nixpkgs-unstable".locked.rev')
nix registry add nixpkgs "github:NixOS/nixpkgs/${VERSION}"

Binary files not shown (4 files).

@@ -0,0 +1,20 @@
age-encryption.org/v1
-> ssh-rsa GxPFJQ
AaDBHnrzzyVgSLJfjuzNYVqOfGVoNYvtwOZ/8JWdzTiKjFjvFThTwRxXm04x4zV2
yDGl2YQn/SdA//tPt2aTt/HEr2vvfvTupf+p8dO81JsdQ2QNEwJq6GFPYvzwpUMt
2aY99IWgfZMnNdm1dD40UPRXthRy4neY64fLmrpNH9hM/Tj/O9L9aHo/Z8rROien
k/qN0uVWDlrxoCooZNmzuWe8VNE9PtEj07YBjUKY9frVpP38iL9hWZ435x59bRru
HTz/I1NEeKyCzUKDz562cmmPl1ihJkelSOLIS/SUL4CfbePt6lGeGgJ1UB1lLBlo
nOE79ekfh92wbYJWrogvFg
-> ssh-rsa K9mW1w
W34jfSpkxpJKesV7ZDl92bQRMtWWB7ht93n7+APJNL18VvLHmqDztkPQovd9FuKY
YLEt5qevncV/O2/f6QW87I0ySFT2tpFPflXOITk1INYH8Z/NPfGfHBgUpnl+vM4w
xVujFTnFXaFYBpoyOOl5VBLvFTlYvzXL/e2lYyZT/HGb2V43OHQe3dsMWhPKzKLL
2eg21XK/LzLFkYdpMwmt5bpVVgXB9kaB9fpmV9ZDtEYDO18/uQQI7Wdn+6XAzB+3
iJwIQaqL7YjaOwiF8u6NYOd4Qo7WNEd1WnO9/qIGMp9E4x5V3vToS9fePynE2PtH
Chcu3y8jT+Qpby+6joyLwQ
-> ssh-ed25519 iWiFbA fxhC7p0ywGIGmpio8x7yFktdB/JnKiiXJF3kvP2X2wk
Q2HJ78QRc3nZyyWgB/MjhcEHiKoXou/4421SvoTj9fM
--- OYElVzl6Gk/4ma/OgiU1Xtvg5+9Rtq/CIieG85QDOBI

@@ -1,22 +0,0 @@
age-encryption.org/v1
-> ssh-rsa GxPFJQ
B1tLU+ypxVOlO9jSZUvUwb69QrNk/rqJoYjdNSOJxSWk7+iX0jli0TrU8AePnfSn
NjemcJJCoaSf5q7RQJK9Gfvq6BE1z4EoablA3Sx9un/qqUJuIy3SiQhR5+y5bPD0
+8FzLznorSQR5tc1mQo82S1lv0ec8hqw2q13Kqm/09NIiZNKSLoHkp031q3VZbjC
XL2naNUqX4lNADqDxESbY5au3CsnBJGN2gX0syj0d1iRx0At2HJSR7gANCEYpWKI
nBF+5mlX7lbpb61CoDUiQSW4JiXCULz1kiR7WWJQBrlFryn4CJ2PAUJTKUfzKO8t
sgi02DX7frP4jMOt/Z5VMA
-> ssh-rsa K9mW1w
sPxe/ErVYvJmogtrhHikq7lz2c5jSYxb/mhDHdSAIQIV3b7zWOreEbLOxDnzr/K7
pSHTLTIWXxiCEWxunrhUccGHiBEoP40MkcYJnxyuG49Fu42I9K8Gsq4GI05zF9Kl
HKwVOwD7gF3QMgkDCxFuqCsmQLB11Evwc7NnbR0+Z3Y9o4FfP4SCXc87Ye+C9zn8
9dRxjpRo1Qz9WtW2VG+qdaWldwo0BLtQILDQoR08GW8D1CZvsqXuHsoGLCMoPcgk
H2TEwawh1V/bY/j0Y509sVQWn3FF27taqeEQYZOQOwUWNf10cAsDTDUjdYyc9fjJ
Hx62FPHP9wmGsViNhn4gbg
-> ssh-ed25519 O0LMHg 9CZI1FtkDLXaIdP9Qlx8O0hUfbdzfrJdK67ifPVDjQM
Zqd5xtiaDBi7IFnab2bZQVzzEX0YQDrPYfvur9N6JT8
-> I-grease V-d-})j
mxJ1WhR/NqdMpIbCaM2jxxmffNAy/t9vByK+c9FNIzb3t87VUcALJxuSmswdNVqs
1iL6
--- ZJ3HoKn0QVAGVsB57bTaCEU19BuLas9SLRjczmomG1g

Some files were not shown because too many files have changed in this diff.