Compare commits

140 Commits

Author SHA1 Message Date
fd6e7d7a86 Update flake.lock 2025-10-30 16:22:07 +01:00
b23536ecc7 chore: adds discord and gitnuro flatpaks 2025-10-30 16:22:03 +01:00
14e9c8d51c chore: remove old stuff 2025-10-30 16:21:17 +01:00
c1c98fa007 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-28 08:36:44 +01:00
9c6e6fdf47 Add Vicinae installation and assets Ansible task
Include Vicinae setup in workstation playbook for non-WSL2 systems

Update flake.lock to newer nixpkgs revision
2025-10-28 08:36:26 +01:00
a11376fe96 Add monitoring countries to allowed_countries_codes list 2025-10-26 00:24:17 +00:00
e14dd1d224 Add EU and trusted country lists for Caddy access control
Define separate lists for EU and trusted countries in group vars. Update
Caddyfile template to support EU, trusted, and combined allow lists.
Switch Sathub domains to use combined country allow list.
2025-10-26 00:21:27 +00:00
5353981555 Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles 2025-10-26 00:09:31 +00:00
f9ce652dfc flake lock
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-26 00:09:15 +00:00
fe9dbca2db Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles 2025-10-26 02:08:31 +02:00
987166420a Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles 2025-10-26 00:06:13 +00:00
8ba47c2ebf Fix indentation in server.yml and add necesse service
Add become: true to JuiceFS stop/start tasks in redis.yml
2025-10-26 00:04:51 +00:00
8bfd8395f5 Add Discord environment variables and update data volumes paths 2025-10-26 00:04:41 +00:00
f0b15f77a1 Update nixpkgs input to latest commit 2025-10-26 00:04:19 +00:00
461d251356 Add Ansible role to deploy Necesse server with Docker 2025-10-26 00:04:14 +00:00
e57e9ee67c chore: update country allow list and add European allow option 2025-10-26 02:02:46 +02:00
f67b16f593 update flake locvk 2025-10-26 02:02:28 +02:00
5edd7c413e Update bash.nix to improve WSL Windows alias handling 2025-10-26 02:02:21 +02:00
cfc1188b5f Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-23 13:43:38 +02:00
e2701dcdf4 Set executable permission for equibop.desktop and update bash.nix
Add BUN_INSTALL env var and include Bun bin in PATH
2025-10-23 13:43:26 +02:00
11af7f16e5 Set formatter to prettier and update format_on_save option 2025-10-23 13:38:16 +02:00
310fb92ec9 Add WSL aliases for Windows SSH and Zed 2025-10-23 04:20:15 +02:00
fb1661386b chore: add Bun install path and prepend to PATH 2025-10-22 17:57:12 +02:00
e1b07a6edf Add WSL support and fix config formatting 2025-10-22 16:18:08 +02:00
f6a3f6d379 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-21 10:06:20 +02:00
77424506d6 Update Nextcloud config and flake.lock dependencies 2025-10-20 11:27:21 +02:00
1856b2fb9e adds fastmail app as flatpak 2025-10-20 11:27:00 +02:00
2173e37c0a refactor: update configuration for mennos-server and adjust related tasks 2025-10-16 14:53:32 +02:00
ba2faf114d chore: update sathub config
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 15:04:46 +02:00
22b308803c fixes
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
2dfde555dd sathub fixes
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
436deb267e Add smart alias configuration for rtlsdr 2025-10-08 13:01:37 +02:00
e490405dc5 Update mennos-rtlsdr-pc home configuration to enable service 2025-10-08 12:54:34 +02:00
1485f6c430 Add home configuration for mennos-rtlsdr-pc 2025-10-08 12:38:12 +02:00
4c83707a03 Update Ansible inventory and playbook for new workstation; modify Git configuration for rebase settings 2025-10-08 12:37:59 +02:00
f9f37f5819 Update flatpaks.yml 2025-09-30 12:02:26 +02:00
44c4521cbe Remove unnecessary blank line before sathub.nl configuration in Caddyfile 2025-09-29 02:53:35 +02:00
6c37372bc0 Remove unused obj.sathub.de configuration and caddy_network from MinIO service in Docker Compose 2025-09-29 02:40:25 +02:00
3a22417315 Add CORS configuration to SatHub service for improved API access 2025-09-29 01:29:55 +02:00
95bc4540db Add SatHub service deployment with Docker Compose and configuration 2025-09-29 01:21:41 +02:00
902d797480 Refactor Cloudreve restart logic and update configs
- Refactor Cloudreve tasks to use conditional restart
- Remove unused displayData from Dashy config
- Add NVM and Japanese input setup to bash.nix
2025-09-25 22:33:57 +02:00
e494369d11 Refactor formatting in update.py for improved readability 2025-09-24 18:40:25 +02:00
78f3133a1d Fix formatting in Python workflow and update .gitignore to include Ansible files 2025-09-24 18:35:53 +02:00
d28c0fce66 Refactor shell aliases to move folder navigation aliases to the utility section 2025-09-24 18:32:05 +02:00
c6449affcc Rename zed.jsonc.j2 to zed.jsonc and fix trailing commas 2025-09-24 16:12:34 +02:00
d33f367c5f Move Zed config to Ansible template with 1Password secrets 2025-09-24 16:10:44 +02:00
e5723e0964 Update zed.jsonc 2025-09-24 16:04:45 +02:00
0bc609760c change zed settings to use jsonc 2025-09-24 13:36:10 +02:00
edd8e90fec Add JetBrains Toolbox autostart and update Zed config 2025-09-24 13:24:43 +02:00
ee0c73f6de chore: add ssh config 2025-09-24 11:55:46 +02:00
60dd31fd1c Add --system flag to update system packages in update.py 2025-09-23 17:26:44 +02:00
cc917eb375 Refactor bash config and env vars, set Zed as git editor
- Move environment variable exports from sessionVariables to bashrc
- Add more robust sourcing of .profile and .bashrc.local
- Improve SSH_AUTH_SOCK override logic for 1Password
- Remove redundant path and pyenv logic from profileExtra
- Set git core.editor to "zed" instead of "nvim"
- Add DOTFILES_PATH to global session variables
2025-09-23 17:13:24 +02:00
df0775f3b2 Update symlinks.yml 2025-09-23 16:39:31 +02:00
5f312d3128 wtf 2025-09-23 16:36:08 +02:00
497fca49d9 linting 2025-09-23 14:29:47 +00:00
e3ea18c9da updated file 2025-09-23 16:20:57 +02:00
6fcabcd1f3 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-09-23 16:16:09 +02:00
3e25210f4c remove stash, add bazarr, add cloudreve 2025-09-23 16:13:09 +02:00
5ff84a4c0d Remove GNOME extension management from workstation setup 2025-09-23 14:09:30 +00:00
29a439d095 Add isServer option and conditionally enable Git signing 2025-09-23 14:07:10 +00:00
cfb80bd819 linting 2025-09-23 14:06:26 +00:00
8971d087a3 Remove secrets and auto-start actions and update imports 2025-09-23 13:59:48 +00:00
40063cfe6b Refactor for consistent string quoting and formatting 2025-09-23 13:53:29 +00:00
2e5a06e9d5 Remove mennos-vm from inventory and playbook tasks 2025-09-23 13:51:42 +00:00
80ea4cd51b Remove VSCode config and update Zed symlink and settings
- Delete VSCode settings and argv files
- Rename Zed settings file and update symlink destination
- Add new Zed context servers and projects
- Change icon and theme settings for Zed
- Add .gitkeep to autostart directory
2025-09-23 13:39:09 +00:00
c659c599f4 fixed formatting 2025-09-23 13:35:37 +00:00
54fc080ef2 Remove debug tasks from global.yml and update git signing config 2025-09-23 13:32:48 +00:00
3d5ae84a25 Add SSH insteadOf rule for git.mvl.sh 2025-09-23 13:21:16 +00:00
dd3753fab4 refactor 2025-09-23 13:20:00 +00:00
a04a4abef6 chore: replace prusaslicer for bambulab slicer since it supports the same printers and works better 2025-09-10 12:15:06 +02:00
fd5cb7f163 feat: add 3D printing applications to desired Flatpaks 2025-09-10 12:02:41 +02:00
2e5d7d39ef chore: move scrcpy package to Home Manager 2025-09-09 15:51:49 +02:00
422509eecc Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-09-09 10:41:37 +02:00
c79142e117 Update editor settings and add new Zed projects 2025-09-09 10:41:03 +02:00
2834c1c34e Change VSCode theme to Catppuccin Latte and add new commands 2025-09-04 14:10:21 +02:00
fe73569e0b Add Tdarr and Weather sections to Dashy config 2025-09-04 14:10:00 +02:00
08d233cae5 Add object storage volume for slow TV shows in Plex config 2025-09-04 14:09:43 +02:00
91c11b0283 Update flake.lock for home-manager and nixpkgs revisions 2025-09-04 14:09:33 +02:00
50b0844db8 Move Sabnzbd to its own network and expose port 7788
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-09-02 11:07:54 +02:00
ad8cb0702d fix: increase memory limit to 2G for arr-stack services 2025-08-31 01:43:00 +02:00
216d215663 fix: set dashy default to sametab and add extra hosts for status
resolving of local services and add comfyui to dashy
2025-08-31 01:42:22 +02:00
707a3c0cb7 Add News section and update DiscountOffice config in Dashy
Remove config change trigger from Dashy restart tasks
2025-08-29 19:21:30 +02:00
d82a7247cd fix: adds dashy config as managed config in ansible 2025-08-29 17:06:51 +02:00
0b7e727fc9 Switch default model provider to copilot_chat 2025-08-29 17:04:13 +02:00
a15d382c8e feat: adds dashy as docker service 2025-08-29 16:56:41 +02:00
79425af4b0 fix: add /usr/bin/brave-browser as chrome exec for flutter 2025-08-27 16:26:23 +02:00
5ebb22182d re-enable Zen browser 2025-08-27 14:54:38 +02:00
00cff8ba6a Add dev tools in Ansible and update Zed model config
- Split Ubuntu/Debian package installation with apt conditional
- Include clang, cmake, ninja-build and other development packages
- Switch Zed default AI model to Google Gemini 2.5 via OpenRouter
2025-08-27 14:09:13 +02:00
34bbe5fcf6 remove rustc from nix, we should install this using curl --proto
'=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
2025-08-27 14:08:37 +02:00
7ada9c7fc4 remove useless comments vscode settings 2025-08-27 14:08:15 +02:00
d52671ede7 disable zed telemetry 2025-08-27 14:08:01 +02:00
d9cbe590c5 Add execute permission to List.desktop 2025-08-27 13:55:45 +02:00
46a9f3e99b adds mennos-laptop as host, adds nextcloud, adds nil and nixd for zed
language servers to work. updates setup to support 25.04
2025-08-27 13:55:31 +02:00
2caea9b483 Update flake.lock dependencies 2025-08-27 11:31:14 +02:00
7211afd592 Configure PHP and inlay hints in Zed editor
Enable inlay hints and add PHP language server config with license key
placeholder
2025-08-27 11:31:03 +02:00
716f6e4e0a fix: update icon theme in Zed settings to Catppuccin Macchiato 2025-08-24 03:02:25 +02:00
df62070722 fix: correct zed alias assignment in .bashrc 2025-08-24 02:54:08 +02:00
f1ca2ad1ba refactor: remove GPU related environment variable settings for mennos-desktop 2025-08-24 02:50:39 +02:00
37174d7ccc refactor: update inventory and configuration for desktop systems, replacing 'mennos-cachyos-desktop' with 'mennos-desktop' 2025-08-24 02:44:45 +02:00
134eeb03cb fix: update permissions for io.github.mrvladus.List.desktop and remove Remmina Applet autostart entry 2025-08-23 04:47:11 +02:00
8545837b50 fix: update flake.lock with latest revisions and hashes for home-manager and nixpkgs 2025-08-23 04:39:23 +02:00
b5227230c0 feat: add memory limits to Docker services in various configurations 2025-08-23 04:39:17 +02:00
9c85d2eea6 feat: update Nextcloud client version and add Remmina Applet autostart entry 2025-08-23 04:30:06 +02:00
34999bdb19 fix: update reference path for Work VPN configuration in secrets.nix 2025-08-15 16:00:04 +02:00
c95b6520a5 feat: update SillyTavern startup command to use direct execution 2025-08-15 15:59:56 +02:00
d42efd6a66 feat: add Equibop Flatpak to desired system Flatpaks list 2025-08-15 15:59:48 +02:00
5edee32509 feat: add llm command for managing sillytavern and koboldcpp
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-08-13 13:26:27 +02:00
bbaa297c6b Update nixpkgs versions and add Kilo Code settings
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-08-13 13:25:51 +02:00
e274ae7ae1 Update smart-ssh config and improve logging output 2025-07-31 13:42:13 +02:00
6eb58e2c87 feat: add PKG_CONFIG_PATH for pkg-config and new Music folder sync in Nextcloud config 2025-07-31 03:05:07 +02:00
423af55031 fix: updated avorion to latest (2.5.9) 2025-07-30 11:28:44 +02:00
cfe96bc3f9 Add Qdrant service configuration 2025-07-29 16:11:33 +02:00
72aa7a4647 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-07-29 16:10:23 +02:00
58326c7f07 Add Qdrant service deployment to server config 2025-07-29 16:10:14 +02:00
cbbe7b21d8 Add Nextcloud-compatible task management apps
Install List and Iotas Flatpaks with autostart config for List
2025-07-29 16:10:03 +02:00
7d01d476b1 Add running state indicator for systemd timers 2025-07-28 23:16:54 +02:00
76c2586a21 Add Borg local sync system service and configuration 2025-07-28 23:15:49 +02:00
63bd5ace82 Add Telegram notifications for Borg backup status 2025-07-28 22:53:56 +02:00
4018399fd4 feat: adds borg, timers and systemd service support 2025-07-27 02:13:33 +02:00
47221e5803 Add Avorion game server configuration 2025-07-27 01:33:41 +02:00
564e45e099 feat: added a ssh utility that supports smart-aliases and background ssh
tunnels
2025-07-25 15:37:55 +02:00
f0bf6bc8aa wip
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-25 14:54:29 +02:00
b72f42ec5d Install Borg backup package on servers 2025-07-25 13:45:00 +02:00
21ea904169 Add Nextcloud config and ZapZap autostart 2025-07-25 11:25:30 +02:00
4d0ff87ece Add Opera to 1Password allowed browsers 2025-07-23 16:28:49 +02:00
ef48cd2691 Port inuse function from bash to Go 2025-07-23 16:28:38 +02:00
5bb3f5eee7 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-07-23 14:44:09 +02:00
37743d3512 Add Zed config and clean up aliases 2025-07-23 14:43:56 +02:00
2b1c714375 updated utils.yml to work with latest ansible
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:43:05 +02:00
d31d07e0a0 fix: clean & reformat gitconfig files
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:30:18 +02:00
dd1b961af0 fix: set default ssh sock based on what is available instead of forcing 1password locally
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:29:49 +02:00
c8444de0d5 fix: move ~/services to ~/.services
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:23:03 +02:00
d6600630bc Remove cloud server configuration files and references and add dynmamic
dns Shit
2025-07-22 23:26:31 +02:00
43cc186134 Fix incorrect Finland country code and updated home assitant domain 2025-07-22 22:09:07 +02:00
4242e037b0 Remove redundant X-Forwarded headers and redirect domains 2025-07-22 21:53:22 +02:00
506e568021 Add SG, AT and CH to allowed countries list 2025-07-22 21:53:07 +02:00
97d616b7ed Cleanup 2025-07-22 21:33:47 +02:00
9de6098001 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-07-22 19:23:41 +02:00
faebace545 refactor: migrate arr-stack to mennos-cachyos-desktop 2025-07-22 19:23:40 +02:00
03fd20cdac feat: update allowed countries 2025-07-22 19:23:25 +02:00
214 changed files with 8535 additions and 3599 deletions

.bashrc
@@ -1,480 +0,0 @@
# HISTFILE Configuration (Bash equivalent)
HISTFILE=~/.bash_history
HISTSIZE=1000
HISTFILESIZE=2000 # Adjusted to match both histfile and size criteria
# GPU Related shenanigans
if [ "$(hostname)" = "mennos-desktop" ]; then
export DRI_PRIME=1
export MESA_VK_DEVICE_SELECT=1002:744c
fi
if [ -f /etc/os-release ]; then
distro=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')
if [[ "$distro" == *"Pop!_OS"* ]]; then
export CGO_CFLAGS="-I/usr/include"
fi
fi
# For microsoft-standard-WSL2 in uname -a
if [[ "$(uname -a)" == *"microsoft-standard-WSL2"* ]]; then
source $HOME/.agent-bridge.sh
alias winget='winget.exe'
fi
# Docker Compose Alias (Mostly for old shell scripts)
alias docker-compose='docker compose'
# Modern tools aliases
alias l="eza --header --long --git --group-directories-first --group --icons --color=always --sort=name --hyperlink -o --no-permissions"
alias ll='l'
alias la='l -a'
alias cat='bat'
alias du='dust'
alias df='duf'
alias augp='sudo apt update && sudo apt upgrade -y && sudo apt autopurge -y && sudo apt autoclean'
# Docker Aliases
alias d='docker'
alias dc='docker compose'
alias dce='docker compose exec'
alias dcl='docker compose logs'
alias dcd='docker compose down'
alias dcu='docker compose up'
alias dcp='docker compose ps'
alias dcps='docker compose ps'
alias dcpr='dcp && dcd && dcu -d && dcl -f'
alias dcr='dcd && dcu -d && dcl -f'
alias ddpul='docker compose down && docker compose pull && docker compose up -d && docker compose logs -f'
alias docker-nuke='docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker system prune --all --volumes --force && docker volume prune --force'
# Git aliases
alias g='git'
alias gg='git pull'
alias gl='git log --stat'
alias gp='git push'
alias gs='git status -s'
alias gst='git status'
alias ga='git add'
alias gc='git commit'
alias gcm='git commit -m'
alias gco='git checkout'
alias gcb='git checkout -b'
# Kubernetes aliases (Minikube)
alias kubectl="minikube kubectl --"
# netstat port in use check
alias port='netstat -atupn | grep LISTEN'
# Check if a specific port is in use with detailed process information
inuse() {
# Color definitions
local RED='\033[0;31m'
local GREEN='\033[0;32m'
local YELLOW='\033[1;33m'
local BLUE='\033[0;34m'
local CYAN='\033[0;36m'
local BOLD='\033[1m'
local NC='\033[0m' # No Color
# Input validation
if [ $# -eq 0 ]; then
echo -e "${RED}Usage:${NC} inuse <port_number>"
echo -e "${YELLOW} inuse --list${NC}"
echo -e "${YELLOW} inuse --help${NC}"
echo -e "${YELLOW}Example:${NC} inuse 80"
echo -e "${YELLOW} inuse --list${NC}"
return 1
fi
# Handle --help option
if [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
echo -e "${CYAN}${BOLD}inuse - Check if a port is in use${NC}"
echo
echo -e "${BOLD}USAGE:${NC}"
echo -e " inuse <port_number> Check if a specific port is in use"
echo -e " inuse --list, -l List all Docker services with listening ports"
echo -e " inuse --help, -h Show this help message"
echo
echo -e "${BOLD}EXAMPLES:${NC}"
echo -e " ${GREEN}inuse 80${NC} Check if port 80 is in use"
echo -e " ${GREEN}inuse 3000${NC} Check if port 3000 is in use"
echo -e " ${GREEN}inuse --list${NC} Show all Docker services with ports"
echo
echo -e "${BOLD}DESCRIPTION:${NC}"
echo -e " The inuse function checks if a specific port is in use and identifies"
echo -e " the process using it. It can detect regular processes, Docker containers"
echo -e " with published ports, and containers using host networking."
echo
echo -e "${BOLD}OUTPUT:${NC}"
echo -e " ${GREEN}${NC} Port is in use - shows process name, PID, and Docker info if applicable"
echo -e " ${RED}${NC} Port is free"
echo -e " ${YELLOW}${NC} Port is in use but process cannot be identified"
echo
return 0
fi
# Handle --list option
if [ "$1" = "--list" ] || [ "$1" = "-l" ]; then
if ! command -v docker >/dev/null 2>&1; then
echo -e "${RED}Error:${NC} Docker is not available"
return 1
fi
echo -e "${CYAN}${BOLD}Docker Services with Listening Ports:${NC}"
echo
# Get all running containers
local containers=$(docker ps --format "{{.Names}}" 2>/dev/null)
if [ -z "$containers" ]; then
echo -e "${YELLOW}No running Docker containers found${NC}"
return 0
fi
local found_services=false
while IFS= read -r container; do
# Get port mappings for this container
local ports=$(docker port "$container" 2>/dev/null)
if [ -n "$ports" ]; then
# Get container image name (clean it up)
local image=$(docker inspect "$container" 2>/dev/null | grep -o '"Image": *"[^"]*"' | cut -d'"' -f4 | head -1)
local clean_image=$(echo "$image" | sed 's/sha256:[a-f0-9]*/[image-hash]/' | sed 's/^.*\///')
echo -e "${GREEN}📦 ${BOLD}$container${NC} ${CYAN}($clean_image)${NC}"
# Parse and display ports nicely
echo "$ports" | while IFS= read -r port_line; do
if [[ "$port_line" =~ ([0-9]+)/(tcp|udp).*0\.0\.0\.0:([0-9]+) ]]; then
local container_port="${BASH_REMATCH[1]}"
local protocol="${BASH_REMATCH[2]}"
local host_port="${BASH_REMATCH[3]}"
echo -e "${CYAN} ├─ Port ${BOLD}$host_port${NC}${CYAN}$container_port ($protocol)${NC}"
elif [[ "$port_line" =~ ([0-9]+)/(tcp|udp).*\[::\]:([0-9]+) ]]; then
local container_port="${BASH_REMATCH[1]}"
local protocol="${BASH_REMATCH[2]}"
local host_port="${BASH_REMATCH[3]}"
echo -e "${CYAN} ├─ Port ${BOLD}$host_port${NC}${CYAN}$container_port ($protocol) [IPv6]${NC}"
fi
done
echo
found_services=true
fi
done <<< "$containers"
# Also check for host networking containers
local host_containers=$(docker ps --format "{{.Names}}" --filter "network=host" 2>/dev/null)
if [ -n "$host_containers" ]; then
echo -e "${YELLOW}${BOLD}Host Networking Containers:${NC}"
while IFS= read -r container; do
local image=$(docker inspect "$container" 2>/dev/null | grep -o '"Image": *"[^"]*"' | cut -d'"' -f4 | head -1)
local clean_image=$(echo "$image" | sed 's/sha256:[a-f0-9]*/[image-hash]/' | sed 's/^.*\///')
echo -e "${YELLOW}🌐 ${BOLD}$container${NC} ${CYAN}($clean_image)${NC} ${YELLOW}- uses host networking${NC}"
done <<< "$host_containers"
echo
found_services=true
fi
if [ "$found_services" = false ]; then
echo -e "${YELLOW}No Docker services with exposed ports found${NC}"
fi
return 0
fi
local port="$1"
# Validate port number
if ! [[ "$port" =~ ^[0-9]+$ ]] || [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
echo -e "${RED}Error:${NC} Invalid port number. Must be between 1 and 65535."
return 1
fi
# Check if port is in use first
local port_in_use=false
if command -v ss >/dev/null 2>&1; then
if ss -tulpn 2>/dev/null | grep -q ":$port "; then
port_in_use=true
fi
elif command -v netstat >/dev/null 2>&1; then
if netstat -tulpn 2>/dev/null | grep -q ":$port "; then
port_in_use=true
fi
fi
if [ "$port_in_use" = false ]; then
echo -e "${RED}✗ Port $port is FREE${NC}"
return 1
fi
# Port is in use, now find what's using it
local found_process=false
# Method 1: Try netstat first (most reliable for PID info)
if command -v netstat >/dev/null 2>&1; then
local netstat_result=$(netstat -tulpn 2>/dev/null | grep ":$port ")
if [ -n "$netstat_result" ]; then
while IFS= read -r line; do
local pid=$(echo "$line" | awk '{print $7}' | cut -d'/' -f1)
local process_name=$(echo "$line" | awk '{print $7}' | cut -d'/' -f2)
local protocol=$(echo "$line" | awk '{print $1}')
if [[ "$pid" =~ ^[0-9]+$ ]] && [ -n "$process_name" ]; then
# Check if it's a Docker container
local docker_info=""
if command -v docker >/dev/null 2>&1; then
# Check for docker-proxy
if [ "$process_name" = "docker-proxy" ]; then
local container_name=$(docker ps --format "{{.Names}}" --filter "publish=$port" 2>/dev/null | head -1)
if [ -n "$container_name" ]; then
docker_info=" ${CYAN}(Docker: $container_name)${NC}"
else
docker_info=" ${CYAN}(Docker proxy)${NC}"
fi
else
# Check if process is in a container by examining cgroup
if [ -f "/proc/$pid/cgroup" ] && grep -q docker "/proc/$pid/cgroup" 2>/dev/null; then
local container_id=$(cat "/proc/$pid/cgroup" 2>/dev/null | grep docker | grep -o '[a-f0-9]\{64\}' | head -1)
if [ -n "$container_id" ]; then
local container_name=$(docker inspect "$container_id" 2>/dev/null | grep -o '"Name": *"[^"]*"' | cut -d'"' -f4 | sed 's/^\/*//' | head -1)
if [ -n "$container_name" ]; then
docker_info=" ${CYAN}(Docker: $container_name)${NC}"
else
docker_info=" ${CYAN}(Docker: ${container_id:0:12})${NC}"
fi
fi
fi
fi
fi
echo -e "${GREEN}✓ Port $port ($protocol) in use by ${BOLD}$process_name${NC} ${GREEN}as PID ${BOLD}$pid${NC}$docker_info"
found_process=true
fi
done <<< "$netstat_result"
fi
fi
# Method 2: Try ss if netstat didn't work
if [ "$found_process" = false ] && command -v ss >/dev/null 2>&1; then
local ss_result=$(ss -tulpn 2>/dev/null | grep ":$port ")
if [ -n "$ss_result" ]; then
while IFS= read -r line; do
local pid=$(echo "$line" | grep -o 'pid=[0-9]*' | cut -d'=' -f2)
local protocol=$(echo "$line" | awk '{print $1}')
if [[ "$pid" =~ ^[0-9]+$ ]]; then
local process_name=$(ps -p "$pid" -o comm= 2>/dev/null)
if [ -n "$process_name" ]; then
# Check for Docker container
local docker_info=""
if command -v docker >/dev/null 2>&1; then
if [ "$process_name" = "docker-proxy" ]; then
local container_name=$(docker ps --format "{{.Names}}" --filter "publish=$port" 2>/dev/null | head -1)
if [ -n "$container_name" ]; then
docker_info=" ${CYAN}(Docker: $container_name)${NC}"
else
docker_info=" ${CYAN}(Docker proxy)${NC}"
fi
elif [ -f "/proc/$pid/cgroup" ] && grep -q docker "/proc/$pid/cgroup" 2>/dev/null; then
local container_id=$(cat "/proc/$pid/cgroup" 2>/dev/null | grep docker | grep -o '[a-f0-9]\{64\}' | head -1)
if [ -n "$container_id" ]; then
local container_name=$(docker inspect "$container_id" 2>/dev/null | grep -o '"Name": *"[^"]*"' | cut -d'"' -f4 | sed 's/^\/*//' | head -1)
if [ -n "$container_name" ]; then
docker_info=" ${CYAN}(Docker: $container_name)${NC}"
else
docker_info=" ${CYAN}(Docker: ${container_id:0:12})${NC}"
fi
fi
fi
fi
echo -e "${GREEN}✓ Port $port ($protocol) in use by ${BOLD}$process_name${NC} ${GREEN}as PID ${BOLD}$pid${NC}$docker_info"
found_process=true
fi
fi
done <<< "$ss_result"
fi
fi
# Method 3: Try fuser as last resort
if [ "$found_process" = false ] && command -v fuser >/dev/null 2>&1; then
local fuser_pids=$(fuser "$port/tcp" 2>/dev/null)
if [ -n "$fuser_pids" ]; then
for pid in $fuser_pids; do
if [[ "$pid" =~ ^[0-9]+$ ]]; then
local process_name=$(ps -p "$pid" -o comm= 2>/dev/null)
if [ -n "$process_name" ]; then
echo -e "${GREEN}✓ Port $port (tcp) in use by ${BOLD}$process_name${NC} ${GREEN}as PID ${BOLD}$pid${NC}"
found_process=true
break
fi
fi
done
fi
fi
# Method 4: Check for Docker containers more accurately
if [ "$found_process" = false ] && command -v docker >/dev/null 2>&1; then
# First, try to find containers with published ports matching our port
local container_with_port=$(docker ps --format "{{.Names}}" --filter "publish=$port" 2>/dev/null | head -1)
if [ -n "$container_with_port" ]; then
local image=$(docker inspect "$container_with_port" 2>/dev/null | grep -o '"Image": *"[^"]*"' | cut -d'"' -f4 | head -1)
echo -e "${GREEN}✓ Port $port in use by Docker container ${BOLD}$container_with_port${NC} ${CYAN}(published port, image: $image)${NC}"
found_process=true
else
# Only check host networking containers if we haven't found anything else
local host_containers=$(docker ps --format "{{.Names}}" --filter "network=host" 2>/dev/null)
if [ -n "$host_containers" ]; then
local host_container_count=$(echo "$host_containers" | wc -l)
if [ "$host_container_count" -eq 1 ]; then
# Only one host networking container, likely candidate
local image=$(docker inspect "$host_containers" 2>/dev/null | grep -o '"Image": *"[^"]*"' | cut -d'"' -f4 | head -1)
echo -e "${YELLOW}⚠ Port $port possibly in use by Docker container ${BOLD}$host_containers${NC} ${CYAN}(host networking, image: $image)${NC}"
found_process=true
else
# Multiple host networking containers, can't determine which one
echo -e "${YELLOW}⚠ Port $port is in use, multiple Docker containers using host networking:${NC}"
while IFS= read -r container; do
local image=$(docker inspect "$container" 2>/dev/null | grep -o '"Image": *"[^"]*"' | cut -d'"' -f4 | head -1)
echo -e "${CYAN} - $container (image: $image)${NC}"
done <<< "$host_containers"
found_process=true
fi
fi
fi
fi
# If we still haven't found the process, show a generic message
if [ "$found_process" = false ]; then
echo -e "${YELLOW}⚠ Port $port is in use but unable to identify the process${NC}"
echo -e "${CYAN} This might be due to insufficient permissions or the process being in a different namespace${NC}"
fi
return 0
}
# random string (Syntax: random <length>)
alias random='openssl rand -base64'
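# Hypothetical usage note (not part of the original file): the argument is a
# byte count, so `random 32` prints the base64 encoding of 32 random bytes,
# roughly 44 characters of output.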
# Alias for ls to l but only if it's an interactive shell because we don't want to override ls in scripts which could blow up in our face
if [ -t 1 ]; then
alias ls='l'
fi
# PATH Manipulation
export DOTFILES_PATH=$HOME/.dotfiles
export PATH=$PATH:$HOME/.local/bin
export PATH=$PATH:$HOME/.cargo/bin
export PATH=$PATH:$DOTFILES_PATH/bin
export PATH="/usr/bin:$PATH"
# Include spicetify if it exists
if [ -d "$HOME/.spicetify" ]; then
export PATH=$PATH:$HOME/.spicetify
fi
# Include pyenv if it exists
if [ -d "$HOME/.pyenv" ]; then
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"
fi
# Include pnpm if it exists
if [ -d "$HOME/.local/share/pnpm" ]; then
export PATH=$PATH:$HOME/.local/share/pnpm
fi
# Miniconda
export PATH="$HOME/miniconda3/bin:$PATH"
# In case $HOME/.flutter/flutter/bin is found, we can add it to the PATH
if [ -d "$HOME/.flutter/flutter/bin" ]; then
export PATH=$PATH:$HOME/.flutter/flutter/bin
export PATH="$PATH":"$HOME/.pub-cache/bin"
# Flutter linux fixes:
export CPPFLAGS="-I/usr/include"
export LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lbz2"
export PKG_CONFIG_PATH=/usr/lib/x86_64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH
fi
# Add flatpak to XDG_DATA_DIRS
export XDG_DATA_DIRS=$XDG_DATA_DIRS:/usr/share:/var/lib/flatpak/exports/share:$HOME/.local/share/flatpak/exports/share
# Allow unfree nixos
export NIXPKGS_ALLOW_UNFREE=1
# Allow insecure nixpkgs
export NIXPKGS_ALLOW_INSECURE=1
# Tradaware / DiscountOffice Configuration
if [ -d "/home/menno/Projects/Work" ]; then
export TRADAWARE_DEVOPS=true
fi
# 1Password Source Plugin (Assuming bash compatibility)
if [ -f /home/menno/.config/op/plugins.sh ]; then
source /home/menno/.config/op/plugins.sh
fi
# Initialize starship if available
if ! command -v starship &> /dev/null; then
echo "FYI, starship not found"
else
export STARSHIP_ENABLE_RIGHT_PROMPT=true
export STARSHIP_ENABLE_BASH_CONTINUATION=true
eval "$(starship init bash)"
fi
# Read .op_sat
if [ -f ~/.op_sat ]; then
export OP_SERVICE_ACCOUNT_TOKEN=$(cat ~/.op_sat)
# Ensure .op_sat is 0600 and only readable by the owner
if [ "$(stat -c %a ~/.op_sat)" != "600" ]; then
echo "WARNING: ~/.op_sat is not 0600, please fix this!"
fi
if [ "$(stat -c %U ~/.op_sat)" != "$(whoami)" ]; then
echo "WARNING: ~/.op_sat is not owned by the current user, please fix this!"
fi
fi
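# A minimal remediation sketch for the two warnings above (an assumption, not
# part of the original file): restore owner-only permissions on the token file.
#   chmod 600 ~/.op_sat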
# Source nix home-manager
if [ -f "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh"
fi
# Source ble.sh if it exists
if [[ -f "${HOME}/.nix-profile/share/blesh/ble.sh" ]]; then
source "${HOME}/.nix-profile/share/blesh/ble.sh"
# Custom function for fzf history search
function fzf_history_search() {
local selected
selected=$(history | fzf --tac --height=40% --layout=reverse --border --info=inline \
--query="$READLINE_LINE" \
--color 'fg:#ebdbb2,bg:#282828,hl:#fabd2f,fg+:#ebdbb2,bg+:#3c3836,hl+:#fabd2f' \
--color 'info:#83a598,prompt:#bdae93,spinner:#fabd2f,pointer:#83a598,marker:#fe8019,header:#665c54' \
| sed 's/^ *[0-9]* *//')
if [[ -n "$selected" ]]; then
READLINE_LINE="$selected"
READLINE_POINT=${#selected}
fi
ble-redraw-prompt
}
# Bind Ctrl+R to our custom function
bind -x '"\C-r": fzf_history_search'
fi
# In case a .bashrc.local exists, source it
if [ -f $HOME/.bashrc.local ]; then
source $HOME/.bashrc.local
fi
# Display a welcome message for interactive shells
if [ -t 1 ]; then
helloworld
fi


@@ -3,7 +3,7 @@ name: Python Lint Check
 on:
   pull_request:
   push:
-    branches: [ master ]
+    branches: [master]
 jobs:
   check-python:
@@ -29,7 +29,7 @@ jobs:
           exit 0
         fi
-        pylint $python_files
+        pylint --exit-zero $python_files
       - name: Check Black formatting
         run: |

.gitignore vendored

@@ -1,2 +1,4 @@
 logs/*
 **/__pycache__/
+.ansible/
+.ansible/.lock


@@ -1,16 +1,13 @@
 # Setup
 This dotfiles is intended to be used with either Fedora 40>, Ubuntu 20.04> or Arch Linux.
-Please install a clean version of either distro with GNOME and then follow the steps below.
+Please install a clean version of either distro and then follow the steps below.
 ## Installation
 ### 0. Install distro
 Download the latest ISO from your desired distro and write it to a USB stick.
-I'd recommend getting the GNOME version as it's easier to setup unless you're planning on setting up a server, in that case I recommend getting the server ISO for the specific distro.
-#### Note: If you intend on using a desktop environment you should select the GNOME version as this dotfiles repository expects the GNOME desktop environment for various configurations
 ### 1. Clone dotfiles to home directory
@@ -44,15 +41,6 @@ Run the `dotf update` command, although the setup script did most of the work so
 dotf update
 ```
-### 5. Decrypt secrets
-Either using 1Password or by manualling providing the decryption key you should decrypt the secrets.
-Various configurations depend on the secrets to be decrypted such as the SSH keys, yubikey pam configuration and more.
-```bash
-dotf secrets decrypt
-```
 ### 6. Profit
 You should now have a fully setup system with all the configurations applied.
@@ -65,12 +53,13 @@ Here are some paths that contain files named after the hostname of the system.
 If you add a new system you should add the relevant files to these paths.
 - `config/ssh/authorized_keys`: Contains the public keys per hostname that will be symlinked to the `~/.ssh/authorized_keys` file.
-- `config/home-manager/flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
+- `flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
 ### Server reboots
 In case you reboot a server, it's likely that this runs JuiceFS.
 To be sure that every service is properly accessing JuiceFS mounted files you should probably restart the services once when the server comes online.
 ```bash
 dotf service stop --all
 df # confirm JuiceFS is mounted
@@ -81,16 +70,19 @@ dotf service start --all
 In case you need to adjust anything regarding the /mnt/object_storage JuiceFS.
 Ensure to shut down all services:
 ```bash
 dotf service stop --all
 ```
 Unmount the volume:
 ```bash
 sudo systemctl stop juicefs
 ```
 And optionally if you're going to do something with metadata you might need to stop redis too.
 ```bash
 cd ~/services/juicefs-redis/
 docker compose down --remove-orphans
@@ -103,6 +95,7 @@ To add a new system you should follow these steps:
 1. Add the relevant files shown in the section above.
 2. Ensure you've either updated or added the `$HOME/.hostname` file with the hostname of the system.
 3. Run `dotf update` to ensure the symlinks are properly updated/created.
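Taken together, steps 2 and 3 of the list above boil down to something like the following sketch (the hostname `mennos-newhost` is a made-up placeholder):

```bash
echo "mennos-newhost" > "$HOME/.hostname"
dotf update
```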
 ---
 ## Using 1Password SSH Agent with WSL2 (Windows 11)
@@ -132,5 +125,6 @@ This setup allows you to use your 1Password-managed SSH keys inside WSL2. The WS
 - If your 1Password keys are listed, the setup is complete.
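A quick way to perform that last check from inside WSL2 is to list the agent's keys (a sketch; `ssh-add -l` is standard OpenSSH and assumes `SSH_AUTH_SOCK` already points at the bridge):

```bash
ssh-add -l
```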
 #### References
 - [Using 1Password's SSH Agent with WSL2](https://dev.to/d4vsanchez/use-1password-ssh-agent-in-wsl-2j6m)
 - [How to change the PATH environment variable in Windows](https://www.wikihow.com/Change-the-PATH-Environment-Variable-on-Windows)


@@ -0,0 +1,82 @@
---
flatpaks: false
install_ui_apps: false
# European countries for EU-specific access control
eu_countries_codes:
- AL # Albania
- AD # Andorra
- AM # Armenia
- AT # Austria
- AZ # Azerbaijan
# - BY # Belarus (disabled due to geopolitical reasons)
- BE # Belgium
- BA # Bosnia and Herzegovina
- BG # Bulgaria
- HR # Croatia
- CY # Cyprus
- CZ # Czech Republic
- DK # Denmark
- EE # Estonia
- FI # Finland
- FR # France
- GE # Georgia
- DE # Germany
- GR # Greece
- HU # Hungary
- IS # Iceland
- IE # Ireland
- IT # Italy
- XK # Kosovo
- LV # Latvia
- LI # Liechtenstein
- LT # Lithuania
- LU # Luxembourg
- MK # North Macedonia
- MT # Malta
- MD # Moldova
- MC # Monaco
- ME # Montenegro
- NL # Netherlands
- NO # Norway
- PL # Poland
- PT # Portugal
- RO # Romania
# - RU # Russia (disabled due to geopolitical reasons)
- SM # San Marino
- RS # Serbia
- SK # Slovakia
- SI # Slovenia
- ES # Spain
- SE # Sweden
- CH # Switzerland
- TR # Turkey
- UA # Ukraine
- GB # United Kingdom
- VA # Vatican City
# Trusted non-EU countries for extended access control
trusted_countries_codes:
- US # United States
- AU # Australia
- NZ # New Zealand
- JP # Japan
# Countries that are allowed to access the server Caddy reverse proxy
allowed_countries_codes:
- US # United States
- GB # United Kingdom
- DE # Germany
- FR # France
- IT # Italy
- NL # Netherlands
- JP # Japan
- KR # South Korea
- CH # Switzerland
- AU # Australia (Added for UpDown.io to monitor server uptime)
- CA # Canada (Added for UpDown.io to monitor server uptime)
- FI # Finland (Added for UpDown.io to monitor server uptime)
- SG # Singapore (Added for UpDown.io to monitor server uptime)
# Enable/disable country blocking globally
enable_country_blocking: true

ansible/handlers/main.yml Normal file

@@ -0,0 +1,30 @@
---
- name: Systemctl daemon-reload
become: true
ansible.builtin.systemd:
daemon_reload: true
- name: Restart SSH service
become: true
ansible.builtin.service:
name: ssh
state: restarted
enabled: true
- name: reload systemd
become: true
ansible.builtin.systemd:
daemon_reload: true
- name: restart borg-local-sync
become: true
ansible.builtin.systemd:
name: borg-local-sync.service
enabled: true
- name: restart borg-local-sync-timer
become: true
ansible.builtin.systemd:
name: borg-local-sync.timer
state: restarted
enabled: true

ansible/inventory.ini Normal file

@@ -0,0 +1,11 @@
[workstations]
mennos-laptop ansible_connection=local
mennos-desktop ansible_connection=local
[servers]
mennos-vps ansible_connection=local
mennos-server ansible_connection=local
mennos-rtlsdr-pc ansible_connection=local
[wsl]
mennos-desktopw ansible_connection=local

ansible/playbook.yml Normal file

@@ -0,0 +1,19 @@
---
- name: Configure all hosts
hosts: all
handlers:
- name: Import handler tasks
ansible.builtin.import_tasks: handlers/main.yml
gather_facts: true
tasks:
- name: Include global tasks
ansible.builtin.import_tasks: tasks/global/global.yml
- name: Include workstation tasks
ansible.builtin.import_tasks: tasks/workstations/workstation.yml
when: inventory_hostname in ['mennos-laptop', 'mennos-desktop']
- name: Include server tasks
ansible.builtin.import_tasks: tasks/servers/server.yml
when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc', 'mennos-desktopw']


@@ -1,21 +1,9 @@
---
- name: Include global symlinks tasks
ansible.builtin.import_tasks: tasks/global/symlinks.yml
- name: Gather package facts
ansible.builtin.package_facts:
manager: auto
become: true
- name: Debug ansible_facts for troubleshooting
ansible.builtin.debug:
msg: |
OS Family: {{ ansible_facts['os_family'] }}
Distribution: {{ ansible_facts['distribution'] }}
Package Manager: {{ ansible_pkg_mgr }}
Kernel: {{ ansible_kernel }}
tags: debug
- name: Include Tailscale tasks
ansible.builtin.import_tasks: tasks/global/tailscale.yml
become: true
@@ -131,7 +119,7 @@
ansible.builtin.replace:
path: /etc/sudoers
regexp: '^Defaults\s+env_reset(?!.*pwfeedback)'
replace: "Defaults env_reset,pwfeedback"
validate: "visudo -cf %s"
become: true
tags: sudoers


@@ -0,0 +1,62 @@
---
- name: Process utils files
block:
- name: Load DOTFILES_PATH environment variable
ansible.builtin.set_fact:
dotfiles_path: "{{ lookup('env', 'DOTFILES_PATH') }}"
become: false
- name: Ensure ~/.local/bin exists
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/.local/bin"
state: directory
mode: "0755"
become: false
- name: Scan utils folder for files
ansible.builtin.find:
paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: file
register: utils_files
become: false
- name: Scan utils folder for Go projects (directories with go.mod)
ansible.builtin.find:
paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: directory
recurse: true
register: utils_dirs
become: false
- name: Filter directories that contain go.mod files
ansible.builtin.stat:
path: "{{ item.path }}/go.mod"
loop: "{{ utils_dirs.files }}"
register: go_mod_check
become: false
- name: Create symlinks for utils scripts
ansible.builtin.file:
src: "{{ item.path }}"
dest: "{{ ansible_env.HOME }}/.local/bin/{{ item.path | basename }}"
state: link
loop: "{{ utils_files.files }}"
when: not item.path.endswith('.go')
become: false
- name: Compile standalone Go files and place binaries in ~/.local/bin
ansible.builtin.command:
cmd: go build -o "{{ ansible_env.HOME }}/.local/bin/{{ item.path | basename | regex_replace('\.go$', '') }}" "{{ item.path }}"
loop: "{{ utils_files.files }}"
when: item.path.endswith('.go')
become: false
- name: Compile Go projects and place binaries in ~/.local/bin
ansible.builtin.command:
cmd: go build -o "{{ ansible_env.HOME }}/.local/bin/{{ item.item.path | basename }}" .
chdir: "{{ item.item.path }}"
loop: "{{ go_mod_check.results }}"
when: item.stat.exists
become: false
tags:
- utils


@@ -0,0 +1,124 @@
# Dynamic DNS OnePassword Setup
This document explains how to set up the required OnePassword entries for the Dynamic DNS automation.
## Overview
The Dynamic DNS task automatically retrieves credentials from OnePassword using the Ansible OnePassword lookup plugin. This eliminates the need for vault files and provides better security.
## Required OnePassword Entries
### 1. CloudFlare API Token
**Location:** `CloudFlare API Token` in `Dotfiles` vault, field `password`
**Setup Steps:**
1. Go to [CloudFlare API Tokens](https://dash.cloudflare.com/profile/api-tokens)
2. Click "Create Token"
3. Use the "Edit zone DNS" template
4. Configure permissions:
- Zone: DNS: Edit
- Zone Resources: Include all zones (or specific zones for your domains)
5. Add IP address filtering if desired (optional but recommended)
6. Click "Continue to summary" and "Create Token"
7. Copy the token and save it in OnePassword:
- Title: `CloudFlare API Token`
- Vault: `Dotfiles`
- Field: `password` (this should be the main password field)
### 2. Telegram Bot Credentials
**Location:** `Telegram DynDNS Bot` in `Dotfiles` vault, fields `password` and `chat_id`
**Setup Steps:**
#### Create Telegram Bot:
1. Message [@BotFather](https://t.me/BotFather) on Telegram
2. Send `/start` then `/newbot`
3. Follow the prompts to create your bot
4. Save the bot token (format: `123456789:ABCdefGHijklMNopQRstUVwxyz`)
#### Get Chat ID:
1. Send any message to your new bot
2. Visit: `https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates`
3. Look for `"chat":{"id":YOUR_CHAT_ID}` in the response
4. Save the chat ID (format: `987654321` or `-987654321` for groups)
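If you have `jq` installed, the chat ID can also be extracted in one step (illustrative command, assuming you've already sent at least one message to the bot):
```bash
curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates" | jq '.result[].message.chat.id'
```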
#### Save in OnePassword:
- Title: `Telegram DynDNS Bot`
- Vault: `Dotfiles`
- Fields:
- `password`: Your bot token (123456789:ABCdefGHijklMNopQRstUVwxyz)
- `chat_id`: Your chat ID (987654321)
## Verification
You can test that the OnePassword lookups work by running:
```bash
# Test CloudFlare token lookup
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'CloudFlare API Token', vault='Dotfiles', field='password') }}"
# Test Telegram bot token
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'Telegram DynDNS Bot', vault='Dotfiles', field='password') }}"
# Test Telegram chat ID
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'Telegram DynDNS Bot', vault='Dotfiles', field='chat_id') }}"
```
## Security Notes
- Credentials are never stored in version control
- Environment file (`~/.local/bin/dynamic-dns.env`) has 600 permissions (a sketch of its format follows below)
- OnePassword CLI must be authenticated before running Ansible
- Make sure to run `op signin` before executing the playbook
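Based on the environment variables the Go tool reads (`CLOUDFLARE_API_TOKEN`, `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID`), the environment file presumably looks like this sketch (values are placeholders):
```bash
CLOUDFLARE_API_TOKEN=your_cloudflare_token_here
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHijklMNopQRstUVwxyz
TELEGRAM_CHAT_ID=987654321
```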
## Troubleshooting
### OnePassword CLI Not Authenticated
```bash
op signin
```
### Missing Fields in OnePassword
Ensure the exact field names match:
- CloudFlare: field must be named `password`
- Telegram: fields must be named `password` and `chat_id`
### Invalid CloudFlare Token
- Check token has `Zone:DNS:Edit` permissions
- Verify token is active in CloudFlare dashboard
- Test with: `curl -H "Authorization: Bearer YOUR_TOKEN" https://api.cloudflare.com/client/v4/user/tokens/verify`
### Telegram Not Working
- Ensure you've sent at least one message to your bot
- Verify chat ID format (numbers only, may start with -)
- Test with: `go run dynamic-dns-cf.go --test-telegram`
## Usage
Once set up, the dynamic DNS will automatically:
- Update DNS records every 15 minutes
- Send Telegram notifications when IP changes
- Log all activity to system journal (`journalctl -t dynamic-dns`)
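To inspect recent runs and confirm the schedule is active (the timer name is an assumption; adjust to your setup):
```bash
journalctl -t dynamic-dns --since "1 hour ago"   # recent activity
systemctl list-timers | grep -i dns              # confirm the 15-minute timer fires
```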
## Domains Configured
The automation updates these domains:
- `vleeuwen.me`
- `mvl.sh`
- `mennovanleeuwen.nl`
To modify the domain list, edit the wrapper script at:
`~/.local/bin/dynamic-dns-update.sh`
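A minimal sketch of what that wrapper might contain, assuming the compiled binary lands at `~/.local/bin/dynamic-dns-cf` (per the utils build task) and credentials live in the env file described above:
```bash
#!/bin/bash
set -euo pipefail
# Export credentials from the 600-permission env file
set -a
source "$HOME/.local/bin/dynamic-dns.env"
set +a
exec "$HOME/.local/bin/dynamic-dns-cf" -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl"
```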


@@ -0,0 +1,903 @@
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"net/http"
"os"
"strings"
"time"
)
// CloudFlare API structures
type CloudFlareResponse struct {
Success bool `json:"success"`
Errors []CloudFlareError `json:"errors"`
Result json.RawMessage `json:"result"`
Messages []CloudFlareMessage `json:"messages"`
}
type CloudFlareError struct {
Code int `json:"code"`
Message string `json:"message"`
}
type CloudFlareMessage struct {
Code int `json:"code"`
Message string `json:"message"`
}
type DNSRecord struct {
ID string `json:"id"`
Type string `json:"type"`
Name string `json:"name"`
Content string `json:"content"`
TTL int `json:"ttl"`
ZoneID string `json:"zone_id"`
}
type Zone struct {
ID string `json:"id"`
Name string `json:"name"`
}
type TokenVerification struct {
ID string `json:"id"`
Status string `json:"status"`
}
type NotificationInfo struct {
RecordName string
OldIP string
NewIP string
IsNew bool
}
// Configuration
type Config struct {
APIToken string
RecordNames []string
IPSources []string
DryRun bool
Verbose bool
Force bool
TTL int
TelegramBotToken string
TelegramChatID string
Client *http.Client
}
// Default IP sources
var defaultIPSources = []string{
"https://ifconfig.co/ip",
"https://ip.seeip.org",
"https://ipv4.icanhazip.com",
"https://api.ipify.org",
}
func main() {
config := &Config{
Client: &http.Client{Timeout: 10 * time.Second},
}
// Command line flags
var ipSourcesFlag string
var recordsFlag string
var listZones bool
var testTelegram bool
flag.StringVar(&recordsFlag, "record", "", "DNS A record name(s) to update - comma-separated for multiple (required)")
flag.StringVar(&ipSourcesFlag, "ip-sources", "", "Comma-separated list of IP detection services (optional)")
flag.BoolVar(&config.DryRun, "dry-run", false, "Show what would be done without making changes")
flag.BoolVar(&config.Verbose, "verbose", false, "Enable verbose logging")
flag.BoolVar(&listZones, "list-zones", false, "List all accessible zones and exit")
flag.BoolVar(&config.Force, "force", false, "Force update even if IP hasn't changed")
flag.BoolVar(&testTelegram, "test-telegram", false, "Send a test Telegram notification and exit")
flag.IntVar(&config.TTL, "ttl", 300, "TTL for DNS record in seconds")
// Custom usage function
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "CloudFlare Dynamic DNS Tool\n\n")
fmt.Fprintf(os.Stderr, "Updates CloudFlare DNS A records with your current public IP address.\n")
fmt.Fprintf(os.Stderr, "Supports multiple records, dry-run mode, and Telegram notifications.\n\n")
fmt.Fprintf(os.Stderr, "USAGE:\n")
fmt.Fprintf(os.Stderr, " %s [OPTIONS]\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, "REQUIRED ENVIRONMENT VARIABLES:\n")
fmt.Fprintf(os.Stderr, " CLOUDFLARE_API_TOKEN CloudFlare API token with Zone:DNS:Edit permissions\n")
fmt.Fprintf(os.Stderr, " Get from: https://dash.cloudflare.com/profile/api-tokens\n\n")
fmt.Fprintf(os.Stderr, "OPTIONAL ENVIRONMENT VARIABLES:\n")
fmt.Fprintf(os.Stderr, " TELEGRAM_BOT_TOKEN Telegram bot token for notifications\n")
fmt.Fprintf(os.Stderr, " TELEGRAM_CHAT_ID Telegram chat ID to send notifications to\n\n")
fmt.Fprintf(os.Stderr, "OPTIONS:\n")
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, "\nEXAMPLES:\n")
fmt.Fprintf(os.Stderr, " # Update single record\n")
fmt.Fprintf(os.Stderr, " %s -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Update multiple records\n")
fmt.Fprintf(os.Stderr, " %s -record \"home.example.com,api.example.com,vpn.mydomain.net\"\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Dry run with verbose output\n")
fmt.Fprintf(os.Stderr, " %s -dry-run -verbose -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Force update even if IP hasn't changed\n")
fmt.Fprintf(os.Stderr, " %s -force -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Custom TTL and IP sources\n")
fmt.Fprintf(os.Stderr, " %s -record home.example.com -ttl 600 -ip-sources \"https://ifconfig.co/ip,https://api.ipify.org\"\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # List accessible CloudFlare zones\n")
fmt.Fprintf(os.Stderr, " %s -list-zones\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Test Telegram notifications\n")
fmt.Fprintf(os.Stderr, " %s -test-telegram\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, "SETUP:\n")
fmt.Fprintf(os.Stderr, " 1. Create CloudFlare API token:\n")
fmt.Fprintf(os.Stderr, " - Go to https://dash.cloudflare.com/profile/api-tokens\n")
fmt.Fprintf(os.Stderr, " - Use 'Edit zone DNS' template\n")
fmt.Fprintf(os.Stderr, " - Select your zones\n")
fmt.Fprintf(os.Stderr, " - Copy token and set CLOUDFLARE_API_TOKEN environment variable\n\n")
fmt.Fprintf(os.Stderr, " 2. Optional: Setup Telegram notifications:\n")
fmt.Fprintf(os.Stderr, " - Message @BotFather on Telegram to create a bot\n")
fmt.Fprintf(os.Stderr, " - Get your chat ID by messaging your bot, then visit:\n")
fmt.Fprintf(os.Stderr, " https://api.telegram.org/bot<BOT_TOKEN>/getUpdates\n")
fmt.Fprintf(os.Stderr, " - Set TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables\n\n")
fmt.Fprintf(os.Stderr, "NOTES:\n")
fmt.Fprintf(os.Stderr, " - Records can be in different CloudFlare zones\n")
fmt.Fprintf(os.Stderr, " - Only updates when IP actually changes (unless -force is used)\n")
fmt.Fprintf(os.Stderr, " - Supports both root domains and subdomains\n")
fmt.Fprintf(os.Stderr, " - Telegram notifications sent only when IP changes\n")
fmt.Fprintf(os.Stderr, " - Use -dry-run to test without making changes\n\n")
}
flag.Parse()
// Validate required arguments (unless listing zones or testing telegram)
if recordsFlag == "" && !listZones && !testTelegram {
fmt.Fprintf(os.Stderr, "Error: -record flag is required\n")
flag.Usage()
os.Exit(1)
}
// Parse record names
if recordsFlag != "" {
config.RecordNames = strings.Split(recordsFlag, ",")
// Trim whitespace from each record name
for i, record := range config.RecordNames {
config.RecordNames[i] = strings.TrimSpace(record)
}
}
// Get API token from environment
config.APIToken = os.Getenv("CLOUDFLARE_API_TOKEN")
if config.APIToken == "" {
fmt.Fprintf(os.Stderr, "Error: CLOUDFLARE_API_TOKEN environment variable is required\n")
fmt.Fprintf(os.Stderr, "Get your API token from: https://dash.cloudflare.com/profile/api-tokens\n")
fmt.Fprintf(os.Stderr, "Create a token with 'Zone:DNS:Edit' permissions for your zone\n")
os.Exit(1)
}
// Get optional Telegram credentials
config.TelegramBotToken = os.Getenv("TELEGRAM_BOT_TOKEN")
config.TelegramChatID = os.Getenv("TELEGRAM_CHAT_ID")
if config.Verbose && config.TelegramBotToken != "" && config.TelegramChatID != "" {
fmt.Println("Telegram notifications enabled")
}
// Parse IP sources
if ipSourcesFlag != "" {
config.IPSources = strings.Split(ipSourcesFlag, ",")
} else {
config.IPSources = defaultIPSources
}
if config.Verbose {
fmt.Printf("Config: Records=%v, TTL=%d, DryRun=%v, Force=%v, IPSources=%v\n",
config.RecordNames, config.TTL, config.DryRun, config.Force, config.IPSources)
}
// If testing telegram, do that and exit (skip API token validation)
if testTelegram {
if err := testTelegramNotification(config); err != nil {
fmt.Fprintf(os.Stderr, "Error testing Telegram: %v\n", err)
os.Exit(1)
}
return
}
// Validate API token
if err := validateToken(config); err != nil {
fmt.Fprintf(os.Stderr, "Error validating API token: %v\n", err)
os.Exit(1)
}
if config.Verbose {
fmt.Println("API token validated successfully")
}
// If listing zones, do that and exit
if listZones {
if err := listAllZones(config); err != nil {
fmt.Fprintf(os.Stderr, "Error listing zones: %v\n", err)
os.Exit(1)
}
return
}
// Get current public IP
currentIP, err := getCurrentIP(config)
if err != nil {
fmt.Fprintf(os.Stderr, "Error getting current IP: %v\n", err)
os.Exit(1)
}
if config.Verbose {
fmt.Printf("Current public IP: %s\n", currentIP)
fmt.Printf("Processing %d record(s)\n", len(config.RecordNames))
}
// Process each record
var totalUpdates int
var allNotifications []NotificationInfo
for _, recordName := range config.RecordNames {
if config.Verbose {
fmt.Printf("\n--- Processing record: %s ---\n", recordName)
}
// Find the zone for the record
zoneName, zoneID, err := findZoneForRecord(config, recordName)
if err != nil {
fmt.Fprintf(os.Stderr, "Error finding zone for %s: %v\n", recordName, err)
continue
}
if config.Verbose {
fmt.Printf("Found zone: %s (ID: %s)\n", zoneName, zoneID)
}
// Find existing DNS record
record, err := findDNSRecordByName(config, zoneID, recordName)
if err != nil {
fmt.Fprintf(os.Stderr, "Error finding DNS record %s: %v\n", recordName, err)
continue
}
// Compare IPs
if record != nil {
if record.Content == currentIP && !config.Force {
fmt.Printf("DNS record %s already points to %s - no update needed\n", recordName, currentIP)
continue
}
if config.Verbose {
if record.Content == currentIP {
fmt.Printf("DNS record %s already points to %s, but forcing update\n",
recordName, currentIP)
} else {
fmt.Printf("DNS record %s currently points to %s, needs update to %s\n",
recordName, record.Content, currentIP)
}
}
} else {
if config.Verbose {
fmt.Printf("DNS record %s does not exist, will create it\n", recordName)
}
}
// Update or create record
if config.DryRun {
if record != nil {
if record.Content == currentIP && config.Force {
fmt.Printf("DRY RUN: Would force update DNS record %s (already %s)\n",
recordName, currentIP)
} else {
fmt.Printf("DRY RUN: Would update DNS record %s from %s to %s\n",
recordName, record.Content, currentIP)
}
} else {
fmt.Printf("DRY RUN: Would create DNS record %s with IP %s\n",
recordName, currentIP)
}
// Collect notification info for dry-run
if record == nil || record.Content != currentIP || config.Force {
var oldIPForNotification string
if record != nil {
oldIPForNotification = record.Content
}
allNotifications = append(allNotifications, NotificationInfo{
RecordName: recordName,
OldIP: oldIPForNotification,
NewIP: currentIP,
IsNew: record == nil,
})
}
continue
}
var wasUpdated bool
var oldIP string
if record != nil {
oldIP = record.Content
err = updateDNSRecordByName(config, zoneID, record.ID, recordName, currentIP)
if err != nil {
fmt.Fprintf(os.Stderr, "Error updating DNS record %s: %v\n", recordName, err)
continue
}
fmt.Printf("Successfully updated DNS record %s to %s\n", recordName, currentIP)
wasUpdated = true
} else {
err = createDNSRecordByName(config, zoneID, recordName, currentIP)
if err != nil {
fmt.Fprintf(os.Stderr, "Error creating DNS record %s: %v\n", recordName, err)
continue
}
fmt.Printf("Successfully created DNS record %s with IP %s\n", recordName, currentIP)
wasUpdated = true
}
// Collect notification info for actual updates
if wasUpdated && (record == nil || oldIP != currentIP || config.Force) {
allNotifications = append(allNotifications, NotificationInfo{
RecordName: recordName,
OldIP: oldIP,
NewIP: currentIP,
IsNew: record == nil,
})
totalUpdates++
}
}
// Send batch notification if there were any changes
if len(allNotifications) > 0 {
sendBatchTelegramNotification(config, allNotifications, config.DryRun)
}
if !config.DryRun && config.Verbose {
fmt.Printf("\nProcessed %d record(s), %d update(s) made\n", len(config.RecordNames), totalUpdates)
}
}
func validateToken(config *Config) error {
req, err := http.NewRequest("GET", "https://api.cloudflare.com/client/v4/user/tokens/verify", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("token validation failed: %v", cfResp.Errors)
}
var tokenInfo TokenVerification
if err := json.Unmarshal(cfResp.Result, &tokenInfo); err != nil {
return err
}
if tokenInfo.Status != "active" {
return fmt.Errorf("token is not active, status: %s", tokenInfo.Status)
}
return nil
}
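// getCurrentIP queries each configured IP source in order and returns the
// first non-empty HTTP 200 response, falling through the list on errors.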
func getCurrentIP(config *Config) (string, error) {
var lastError error
for _, source := range config.IPSources {
if config.Verbose {
fmt.Printf("Trying IP source: %s\n", source)
}
resp, err := config.Client.Get(source)
if err != nil {
lastError = err
if config.Verbose {
fmt.Printf("Failed to get IP from %s: %v\n", source, err)
}
continue
}
body, err := io.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
lastError = err
continue
}
if resp.StatusCode != 200 {
lastError = fmt.Errorf("HTTP %d from %s", resp.StatusCode, source)
continue
}
ip := strings.TrimSpace(string(body))
if ip != "" {
return ip, nil
}
lastError = fmt.Errorf("empty response from %s", source)
}
return "", fmt.Errorf("failed to get IP from any source, last error: %v", lastError)
}
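// findZoneForRecord resolves which CloudFlare zone owns a record by querying
// the zones API for progressively shorter suffixes of the record name
// (e.g. "home.example.com" -> "example.com"), returning the first match.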
func findZoneForRecord(config *Config, recordName string) (string, string, error) {
// Extract domain from record name (e.g., "sub.example.com" -> try "example.com", "com")
parts := strings.Split(recordName, ".")
if config.Verbose {
fmt.Printf("Finding zone for record: %s\n", recordName)
}
for i := 0; i < len(parts); i++ {
zoneName := strings.Join(parts[i:], ".")
req, err := http.NewRequest("GET",
fmt.Sprintf("https://api.cloudflare.com/client/v4/zones?name=%s", zoneName), nil)
if err != nil {
continue
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
continue
}
var cfResp CloudFlareResponse
err = json.NewDecoder(resp.Body).Decode(&cfResp)
resp.Body.Close()
if err != nil || !cfResp.Success {
continue
}
var zones []Zone
if err := json.Unmarshal(cfResp.Result, &zones); err != nil {
continue
}
if len(zones) > 0 {
return zones[0].Name, zones[0].ID, nil
}
}
return "", "", fmt.Errorf("no zone found for record %s", recordName)
}
func findDNSRecordByName(config *Config, zoneID string, recordName string) (*DNSRecord, error) {
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records?type=A&name=%s",
zoneID, recordName)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return nil, err
}
if !cfResp.Success {
return nil, fmt.Errorf("API error: %v", cfResp.Errors)
}
var records []DNSRecord
if err := json.Unmarshal(cfResp.Result, &records); err != nil {
return nil, err
}
if len(records) == 0 {
return nil, nil // Record doesn't exist
}
return &records[0], nil
}
func updateDNSRecordByName(config *Config, zoneID, recordID, recordName, ip string) error {
data := map[string]interface{}{
"type": "A",
"name": recordName,
"content": ip,
"ttl": config.TTL,
}
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s", zoneID, recordID)
req, err := http.NewRequest("PUT", url, bytes.NewBuffer(jsonData))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
return nil
}
func createDNSRecordByName(config *Config, zoneID, recordName, ip string) error {
data := map[string]interface{}{
"type": "A",
"name": recordName,
"content": ip,
"ttl": config.TTL,
}
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records", zoneID)
req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
return nil
}
func listAllZones(config *Config) error {
req, err := http.NewRequest("GET", "https://api.cloudflare.com/client/v4/zones", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
var zones []Zone
if err := json.Unmarshal(cfResp.Result, &zones); err != nil {
return err
}
fmt.Printf("Found %d accessible zones:\n", len(zones))
for _, zone := range zones {
fmt.Printf(" - %s (ID: %s)\n", zone.Name, zone.ID)
}
if len(zones) == 0 {
fmt.Println("No zones found. Make sure your API token has Zone:Read permissions.")
}
return nil
}
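// sendTelegramNotification posts a message for a single record change. main
// currently notifies via sendBatchTelegramNotification; this single-record
// variant is kept for direct callers.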
func sendTelegramNotification(config *Config, record *DNSRecord, oldIP, newIP string, isDryRun bool) {
// Skip if Telegram is not configured
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return
}
var message string
dryRunPrefix := ""
if isDryRun {
dryRunPrefix = "🧪 DRY RUN - "
}
if record == nil {
// No record name is available from this signature when the record is
// new, so a generic placeholder is used.
message = fmt.Sprintf("%s🆕 DNS Record Created\n\n"+
"Record: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, "(new record)", newIP, config.TTL)
} else {
message = fmt.Sprintf("%s🔄 IP Address Changed\n\n"+
"Record: %s\n"+
"Old IP: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, record.Name, oldIP, newIP, config.TTL)
}
// Prepare Telegram API request
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to marshal Telegram payload: %v\n", err)
}
return
}
// Send notification
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
if config.Verbose {
fmt.Printf("Failed to create Telegram request: %v\n", err)
}
return
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to send Telegram notification: %v\n", err)
}
return
}
defer resp.Body.Close()
if resp.StatusCode == 200 {
if config.Verbose {
fmt.Println("Telegram notification sent successfully")
}
} else {
if config.Verbose {
body, _ := io.ReadAll(resp.Body)
fmt.Printf("Telegram notification failed (HTTP %d): %s\n", resp.StatusCode, string(body))
}
}
}
func testTelegramNotification(config *Config) error {
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return fmt.Errorf("Telegram not configured. Set TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables")
}
fmt.Println("Testing Telegram notification...")
// Send a test message
message := "🧪 Dynamic DNS Test\n\n" +
"This is a test notification from your CloudFlare Dynamic DNS tool.\n\n" +
"✅ Telegram integration is working correctly!"
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
return fmt.Errorf("failed to marshal payload: %v", err)
}
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
return fmt.Errorf("failed to create request: %v", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return fmt.Errorf("failed to send request: %v", err)
}
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
if resp.StatusCode == 200 {
fmt.Println("✅ Test notification sent successfully!")
if config.Verbose {
fmt.Printf("Response: %s\n", string(body))
}
return nil
} else {
return fmt.Errorf("failed to send notification (HTTP %d): %s", resp.StatusCode, string(body))
}
}
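// sendBatchTelegramNotification aggregates all record changes from a single
// run into one Telegram message, with per-record detail lines when multiple
// records changed.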
func sendBatchTelegramNotification(config *Config, notifications []NotificationInfo, isDryRun bool) {
// Skip if Telegram is not configured
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return
}
if len(notifications) == 0 {
return
}
var message string
dryRunPrefix := ""
if isDryRun {
dryRunPrefix = "🧪 DRY RUN - "
}
if len(notifications) == 1 {
// Single record notification
notif := notifications[0]
if notif.IsNew {
message = fmt.Sprintf("%s🆕 DNS Record Created\n\n"+
"Record: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, notif.RecordName, notif.NewIP, config.TTL)
} else if notif.OldIP == notif.NewIP {
message = fmt.Sprintf("%s🔄 DNS Record Force Updated\n\n"+
"Record: %s\n"+
"IP: %s (unchanged)\n"+
"TTL: %d seconds\n"+
"Note: Forced update requested",
dryRunPrefix, notif.RecordName, notif.NewIP, config.TTL)
} else {
message = fmt.Sprintf("%s🔄 IP Address Changed\n\n"+
"Record: %s\n"+
"Old IP: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, notif.RecordName, notif.OldIP, notif.NewIP, config.TTL)
}
} else {
// Multiple records notification
var newCount, updatedCount int
for _, notif := range notifications {
if notif.IsNew {
newCount++
} else {
updatedCount++
}
}
message = fmt.Sprintf("%s📋 Multiple DNS Records Updated\n\n", dryRunPrefix)
if newCount > 0 {
message += fmt.Sprintf("🆕 Created: %d record(s)\n", newCount)
}
if updatedCount > 0 {
message += fmt.Sprintf("🔄 Updated: %d record(s)\n", updatedCount)
}
message += fmt.Sprintf("\nNew IP: %s\nTTL: %d seconds\n\nRecords:", notifications[0].NewIP, config.TTL)
for _, notif := range notifications {
if notif.IsNew {
message += fmt.Sprintf("\n• %s (new)", notif.RecordName)
} else if notif.OldIP == notif.NewIP {
message += fmt.Sprintf("\n• %s (forced)", notif.RecordName)
} else {
message += fmt.Sprintf("\n• %s (%s → %s)", notif.RecordName, notif.OldIP, notif.NewIP)
}
}
}
// Send the notification using the same logic as single notifications
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to marshal Telegram payload: %v\n", err)
}
return
}
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
if config.Verbose {
fmt.Printf("Failed to create Telegram request: %v\n", err)
}
return
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to send Telegram notification: %v\n", err)
}
return
}
defer resp.Body.Close()
if resp.StatusCode == 200 {
if config.Verbose {
fmt.Println("Telegram notification sent successfully")
}
} else {
if config.Verbose {
body, _ := io.ReadAll(resp.Body)
fmt.Printf("Telegram notification failed (HTTP %d): %s\n", resp.StatusCode, string(body))
}
}
}


@@ -0,0 +1,748 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
)
// Color constants for terminal output
const (
Red = "\033[0;31m"
Green = "\033[0;32m"
Yellow = "\033[1;33m"
Blue = "\033[0;34m"
Cyan = "\033[0;36m"
Bold = "\033[1m"
NC = "\033[0m" // No Color
)
// ProcessInfo holds information about a process using a port
type ProcessInfo struct {
PID int
ProcessName string
Protocol string
DockerInfo string
}
// DockerContainer represents a Docker container
type DockerContainer struct {
Name string
Image string
Ports []PortMapping
Network string
}
// PortMapping represents a port mapping
type PortMapping struct {
ContainerPort int
HostPort int
Protocol string
IPv6 bool
}
func main() {
if len(os.Args) < 2 {
showUsage()
os.Exit(1)
}
arg := os.Args[1]
switch arg {
case "--help", "-h":
showHelp()
case "--list", "-l":
listDockerServices()
default:
port, err := strconv.Atoi(arg)
if err != nil || port < 1 || port > 65535 {
fmt.Printf("%sError:%s Invalid port number. Must be between 1 and 65535.\n", Red, NC)
os.Exit(1)
}
checkPort(port)
}
}
func showUsage() {
fmt.Printf("%sUsage:%s inuse <port_number>\n", Red, NC)
fmt.Printf("%s inuse --list%s\n", Yellow, NC)
fmt.Printf("%s inuse --help%s\n", Yellow, NC)
fmt.Printf("%sExample:%s inuse 80\n", Yellow, NC)
fmt.Printf("%s inuse --list%s\n", Yellow, NC)
}
func showHelp() {
fmt.Printf("%s%sinuse - Check if a port is in use%s\n\n", Cyan, Bold, NC)
fmt.Printf("%sUSAGE:%s\n", Bold, NC)
fmt.Printf(" inuse <port_number> Check if a specific port is in use\n")
fmt.Printf(" inuse --list, -l List all Docker services with listening ports\n")
fmt.Printf(" inuse --help, -h Show this help message\n\n")
fmt.Printf("%sEXAMPLES:%s\n", Bold, NC)
fmt.Printf(" %sinuse 80%s Check if port 80 is in use\n", Green, NC)
fmt.Printf(" %sinuse 3000%s Check if port 3000 is in use\n", Green, NC)
fmt.Printf(" %sinuse --list%s Show all Docker services with ports\n\n", Green, NC)
fmt.Printf("%sDESCRIPTION:%s\n", Bold, NC)
fmt.Printf(" The inuse function checks if a specific port is in use and identifies\n")
fmt.Printf(" the process using it. It can detect regular processes, Docker containers\n")
fmt.Printf(" with published ports, and containers using host networking.\n\n")
fmt.Printf("%sOUTPUT:%s\n", Bold, NC)
fmt.Printf(" %s✓%s Port is in use - shows process name, PID, and Docker info if applicable\n", Green, NC)
fmt.Printf(" %s✗%s Port is free\n", Red, NC)
fmt.Printf(" %s⚠%s Port is in use but process cannot be identified\n", Yellow, NC)
}
func listDockerServices() {
if !isDockerAvailable() {
fmt.Printf("%sError:%s Docker is not available\n", Red, NC)
os.Exit(1)
}
fmt.Printf("%s%sDocker Services with Listening Ports:%s\n\n", Cyan, Bold, NC)
containers := getRunningContainers()
if len(containers) == 0 {
fmt.Printf("%sNo running Docker containers found%s\n", Yellow, NC)
return
}
foundServices := false
for _, container := range containers {
if len(container.Ports) > 0 {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s📦 %s%s%s %s(%s)%s\n", Green, Bold, container.Name, NC, Cyan, cleanImage, NC)
for _, port := range container.Ports {
ipv6Marker := ""
if port.IPv6 {
ipv6Marker = " [IPv6]"
}
fmt.Printf("%s ├─ Port %s%d%s%s → %d (%s)%s%s\n",
Cyan, Bold, port.HostPort, NC, Cyan, port.ContainerPort, port.Protocol, ipv6Marker, NC)
}
fmt.Println()
foundServices = true
}
}
// Check for host networking containers
hostContainers := getHostNetworkingContainers()
if len(hostContainers) > 0 {
fmt.Printf("%s%sHost Networking Containers:%s\n", Yellow, Bold, NC)
for _, container := range hostContainers {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s🌐 %s%s%s %s(%s)%s %s- uses host networking%s\n",
Yellow, Bold, container.Name, NC, Cyan, cleanImage, NC, Yellow, NC)
}
fmt.Println()
foundServices = true
}
if !foundServices {
fmt.Printf("%sNo Docker services with exposed ports found%s\n", Yellow, NC)
}
}
func checkPort(port int) {
// Check if port is in use first
if !isPortInUse(port) {
fmt.Printf("%s✗ Port %d is FREE%s\n", Red, port, NC)
os.Exit(1)
}
// Port is in use, now find what's using it
process := findProcessUsingPort(port)
if process != nil {
dockerInfo := ""
if process.DockerInfo != "" {
dockerInfo = " " + process.DockerInfo
}
fmt.Printf("%s✓ Port %d (%s) in use by %s%s%s %sas PID %s%d%s%s\n",
Green, port, process.Protocol, Bold, process.ProcessName, NC, Green, Bold, process.PID, NC, dockerInfo)
return
}
// Check if it's a Docker container
containerInfo := findDockerContainerUsingPort(port)
if containerInfo != "" {
fmt.Printf("%s✓ Port %d in use by Docker container %s\n", Green, port, containerInfo)
return
}
// If we still haven't found the process, check for host networking containers more thoroughly
hostNetworkProcess := findHostNetworkingProcess(port)
if hostNetworkProcess != "" {
fmt.Printf("%s✓ Port %d likely in use by %s\n", Green, port, hostNetworkProcess)
return
}
// If we still haven't found the process
fmt.Printf("%s⚠ Port %d is in use but unable to identify the process%s\n", Yellow, port, NC)
if isDockerAvailable() {
hostContainers := getHostNetworkingContainers()
if len(hostContainers) > 0 {
fmt.Printf("%s Note: Found Docker containers using host networking:%s\n", Cyan, NC)
for _, container := range hostContainers {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s - %s (%s)%s\n", Cyan, container.Name, cleanImage, NC)
}
fmt.Printf("%s These containers share the host's network, so one of them might be using this port%s\n", Cyan, NC)
} else {
fmt.Printf("%s This might be due to insufficient permissions or the process being in a different namespace%s\n", Cyan, NC)
}
} else {
fmt.Printf("%s This might be due to insufficient permissions or the process being in a different namespace%s\n", Cyan, NC)
}
}
func isPortInUse(port int) bool {
// Try ss first
if isCommandAvailable("ss") {
cmd := exec.Command("ss", "-tulpn")
output, err := cmd.Output()
if err == nil {
portPattern := fmt.Sprintf(":%d ", port)
return strings.Contains(string(output), portPattern)
}
}
// Try netstat as fallback
if isCommandAvailable("netstat") {
cmd := exec.Command("netstat", "-tulpn")
output, err := cmd.Output()
if err == nil {
portPattern := fmt.Sprintf(":%d ", port)
return strings.Contains(string(output), portPattern)
}
}
return false
}
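// findProcessUsingPort tries netstat, ss, lsof, and fuser in turn, returning
// the first successful identification.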
func findProcessUsingPort(port int) *ProcessInfo {
// Method 1: Try netstat
if process := tryNetstat(port); process != nil {
return process
}
// Method 2: Try ss
if process := trySS(port); process != nil {
return process
}
// Method 3: Try lsof
if process := tryLsof(port); process != nil {
return process
}
// Method 4: Try fuser
if process := tryFuser(port); process != nil {
return process
}
return nil
}
func tryNetstat(port int) *ProcessInfo {
if !isCommandAvailable("netstat") {
return nil
}
cmd := exec.Command("netstat", "-tulpn")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "netstat", "-tulpn")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
portPattern := fmt.Sprintf(":%d ", port)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, portPattern) {
fields := strings.Fields(line)
if len(fields) >= 7 {
pidProcess := fields[6]
parts := strings.Split(pidProcess, "/")
if len(parts) >= 2 {
if pid, err := strconv.Atoi(parts[0]); err == nil {
processName := parts[1]
protocol := fields[0]
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: protocol,
DockerInfo: dockerInfo,
}
}
}
}
}
}
return nil
}
func trySS(port int) *ProcessInfo {
if !isCommandAvailable("ss") {
return nil
}
cmd := exec.Command("ss", "-tulpn")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "ss", "-tulpn")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
portPattern := fmt.Sprintf(":%d ", port)
pidRegex := regexp.MustCompile(`pid=(\d+)`)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, portPattern) {
matches := pidRegex.FindStringSubmatch(line)
if len(matches) >= 2 {
if pid, err := strconv.Atoi(matches[1]); err == nil {
processName := getProcessName(pid)
if processName != "" {
fields := strings.Fields(line)
protocol := ""
if len(fields) > 0 {
protocol = fields[0]
}
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: protocol,
DockerInfo: dockerInfo,
}
}
}
}
}
}
return nil
}
func tryLsof(port int) *ProcessInfo {
if !isCommandAvailable("lsof") {
return nil
}
cmd := exec.Command("lsof", "-i", fmt.Sprintf(":%d", port), "-n", "-P")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "lsof", "-i", fmt.Sprintf(":%d", port), "-n", "-P")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, "LISTEN") {
fields := strings.Fields(line)
if len(fields) >= 2 {
processName := fields[0]
if pid, err := strconv.Atoi(fields[1]); err == nil {
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: "tcp",
DockerInfo: dockerInfo,
}
}
}
}
}
return nil
}
func tryFuser(port int) *ProcessInfo {
if !isCommandAvailable("fuser") {
return nil
}
cmd := exec.Command("fuser", fmt.Sprintf("%d/tcp", port))
output, err := cmd.Output()
if err != nil {
return nil
}
pids := strings.Fields(string(output))
for _, pidStr := range pids {
if pid, err := strconv.Atoi(strings.TrimSpace(pidStr)); err == nil {
processName := getProcessName(pid)
if processName != "" {
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: "tcp",
DockerInfo: "",
}
}
}
}
return nil
}
func getProcessName(pid int) string {
cmd := exec.Command("ps", "-p", strconv.Itoa(pid), "-o", "comm=")
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
func getDockerInfo(pid int, processName string, port int) string {
if !isDockerAvailable() {
return ""
}
// Check if it's docker-proxy (handle truncated names like "docker-pr")
if processName == "docker-proxy" || strings.HasPrefix(processName, "docker-pr") {
containerName := getContainerByPublishedPort(port)
if containerName != "" {
image := getContainerImage(containerName)
cleanImage := cleanImageName(image)
return fmt.Sprintf("%s(Docker: %s, image: %s)%s", Cyan, containerName, cleanImage, NC)
}
return fmt.Sprintf("%s(Docker proxy)%s", Cyan, NC)
}
// Check if process is in a Docker container using cgroup
containerInfo := getContainerByPID(pid)
if containerInfo != "" {
return fmt.Sprintf("%s(Docker: %s)%s", Cyan, containerInfo, NC)
}
// Check if this process might be in a host networking container
hostContainer := checkHostNetworkingContainer(pid, processName)
if hostContainer != "" {
return fmt.Sprintf("%s(Docker host network: %s)%s", Cyan, hostContainer, NC)
}
return ""
}
func getContainerByPID(pid int) string {
cgroupPath := fmt.Sprintf("/proc/%d/cgroup", pid)
file, err := os.Open(cgroupPath)
if err != nil {
return ""
}
defer file.Close()
scanner := bufio.NewScanner(file)
containerIDRegex := regexp.MustCompile(`[a-f0-9]{64}`)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, "docker") {
matches := containerIDRegex.FindStringSubmatch(line)
if len(matches) > 0 {
containerID := matches[0]
containerName := getContainerNameByID(containerID)
if containerName != "" {
return containerName
}
return containerID[:12]
}
}
}
return ""
}
func findDockerContainerUsingPort(port int) string {
if !isDockerAvailable() {
return ""
}
// Check for containers with published ports
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", fmt.Sprintf("publish=%d", port))
output, err := cmd.Output()
if err != nil {
return ""
}
containerName := strings.TrimSpace(string(output))
if containerName != "" {
image := getContainerImage(containerName)
cleanImage := cleanImageName(image)
return fmt.Sprintf("%s%s%s %s(published port, image: %s)%s", Bold, containerName, NC, Cyan, cleanImage, NC)
}
return ""
}
func isDockerAvailable() bool {
return isCommandAvailable("docker")
}
func isCommandAvailable(command string) bool {
_, err := exec.LookPath(command)
return err == nil
}
func getRunningContainers() []DockerContainer {
if !isDockerAvailable() {
return nil
}
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}")
output, err := cmd.Output()
if err != nil {
return nil
}
var containers []DockerContainer
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
containerName := strings.TrimSpace(scanner.Text())
if containerName != "" {
container := DockerContainer{
Name: containerName,
Image: getContainerImage(containerName),
Ports: getContainerPorts(containerName),
}
containers = append(containers, container)
}
}
return containers
}
func getHostNetworkingContainers() []DockerContainer {
if !isDockerAvailable() {
return nil
}
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", "network=host")
output, err := cmd.Output()
if err != nil {
return nil
}
var containers []DockerContainer
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
containerName := strings.TrimSpace(scanner.Text())
if containerName != "" {
container := DockerContainer{
Name: containerName,
Image: getContainerImage(containerName),
Network: "host",
}
containers = append(containers, container)
}
}
return containers
}
func getContainerImage(containerName string) string {
cmd := exec.Command("docker", "inspect", containerName)
output, err := cmd.Output()
if err != nil {
return ""
}
var inspectData []map[string]interface{}
if err := json.Unmarshal(output, &inspectData); err != nil {
return ""
}
if len(inspectData) > 0 {
if image, ok := inspectData[0]["Config"].(map[string]interface{})["Image"].(string); ok {
return image
}
}
return ""
}
func getContainerPorts(containerName string) []PortMapping {
cmd := exec.Command("docker", "port", containerName)
output, err := cmd.Output()
if err != nil {
return nil
}
var ports []PortMapping
scanner := bufio.NewScanner(strings.NewReader(string(output)))
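// Note: these patterns only match wildcard bindings (0.0.0.0 and [::]);
// port mappings bound to a specific address will not be reported.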
portRegex := regexp.MustCompile(`(\d+)/(tcp|udp) -> 0\.0\.0\.0:(\d+)`)
ipv6PortRegex := regexp.MustCompile(`(\d+)/(tcp|udp) -> \[::\]:(\d+)`)
for scanner.Scan() {
line := scanner.Text()
// Check for IPv4
if matches := portRegex.FindStringSubmatch(line); len(matches) >= 4 {
containerPort, _ := strconv.Atoi(matches[1])
protocol := matches[2]
hostPort, _ := strconv.Atoi(matches[3])
ports = append(ports, PortMapping{
ContainerPort: containerPort,
HostPort: hostPort,
Protocol: protocol,
IPv6: false,
})
}
// Check for IPv6
if matches := ipv6PortRegex.FindStringSubmatch(line); len(matches) >= 4 {
containerPort, _ := strconv.Atoi(matches[1])
protocol := matches[2]
hostPort, _ := strconv.Atoi(matches[3])
ports = append(ports, PortMapping{
ContainerPort: containerPort,
HostPort: hostPort,
Protocol: protocol,
IPv6: true,
})
}
}
return ports
}
func getContainerByPublishedPort(port int) string {
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", fmt.Sprintf("publish=%d", port))
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
func getContainerNameByID(containerID string) string {
cmd := exec.Command("docker", "inspect", containerID)
output, err := cmd.Output()
if err != nil {
return ""
}
var inspectData []map[string]interface{}
if err := json.Unmarshal(output, &inspectData); err != nil {
return ""
}
if len(inspectData) > 0 {
if name, ok := inspectData[0]["Name"].(string); ok {
return strings.TrimPrefix(name, "/")
}
}
return ""
}
func cleanImageName(image string) string {
// Remove SHA256 hashes
shaRegex := regexp.MustCompile(`sha256:[a-f0-9]*`)
cleaned := shaRegex.ReplaceAllString(image, "[image-hash]")
// Remove registry prefixes, keep only the last part
parts := strings.Split(cleaned, "/")
if len(parts) > 0 {
return parts[len(parts)-1]
}
return cleaned
}
func findHostNetworkingProcess(port int) string {
if !isDockerAvailable() {
return ""
}
// Get all host networking containers
hostContainers := getHostNetworkingContainers()
for _, container := range hostContainers {
// Check if this container might be using the port
if isContainerUsingPort(container.Name, port) {
cleanImage := cleanImageName(container.Image)
return fmt.Sprintf("%s%s%s %s(Docker host network: %s)%s", Bold, container.Name, NC, Cyan, cleanImage, NC)
}
}
return ""
}
func isContainerUsingPort(containerName string, port int) bool {
// Try to execute netstat inside the container to see if it's listening on the port
cmd := exec.Command("docker", "exec", containerName, "sh", "-c",
fmt.Sprintf("netstat -tlnp 2>/dev/null | grep ':%d ' || ss -tlnp 2>/dev/null | grep ':%d '", port, port))
output, err := cmd.Output()
if err != nil {
return false
}
return len(output) > 0
}
func checkHostNetworkingContainer(pid int, processName string) string {
if !isDockerAvailable() {
return ""
}
// Get all host networking containers and check if any match this process
hostContainers := getHostNetworkingContainers()
for _, container := range hostContainers {
// Try to find this process inside the container
cmd := exec.Command("docker", "exec", container.Name, "sh", "-c",
fmt.Sprintf("ps -o pid,comm | grep '%s' | grep -q '%d\\|%s'", processName, pid, processName))
err := cmd.Run()
if err == nil {
cleanImage := cleanImageName(container.Image)
return fmt.Sprintf("%s (%s)", container.Name, cleanImage)
}
}
return ""
}

ansible/tasks/global/utils/llm Executable file

@@ -0,0 +1,298 @@
#!/bin/bash
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Configuration
KOBOLD_PATH="/mnt/data/ai/llm/koboldcpp-linux-x64"
KOBOLD_MODEL="/mnt/data/ai/llm/Mistral-Small-24B-Instruct-2501-Q4_K_M.gguf" # Default model
SILLYTAVERN_SCREEN="sillytavern"
KOBOLD_SCREEN="koboldcpp"
# Function to check if a screen session exists
check_screen() {
screen -ls | grep -q "\.${1}\s"
}
# Function to list available models
list_models() {
echo -e "${BLUE}Available models:${NC}"
ls -1 /mnt/data/ai/llm/*.gguf | nl -w2 -s'. '
}
# Function to select a model
select_model() {
list_models
echo
read -p "Select model number (or press Enter for default): " model_num
if [[ -z "$model_num" ]]; then
echo -e "${YELLOW}Using default model: $(basename "$KOBOLD_MODEL")${NC}"
else
selected_model=$(ls -1 /mnt/data/ai/llm/*.gguf | sed -n "${model_num}p")
if [[ -n "$selected_model" ]]; then
KOBOLD_MODEL="$selected_model"
echo -e "${GREEN}Selected model: $(basename "$KOBOLD_MODEL")${NC}"
else
echo -e "${RED}Invalid selection. Using default model.${NC}"
fi
fi
}
# Function to start SillyTavern
start_sillytavern() {
echo -e "${YELLOW}Starting SillyTavern in screen session '${SILLYTAVERN_SCREEN}'...${NC}"
screen -dmS "$SILLYTAVERN_SCREEN" bash -c "sillytavern --listen 0.0.0.0"
sleep 2
if check_screen "$SILLYTAVERN_SCREEN"; then
echo -e "${GREEN}✓ SillyTavern started successfully!${NC}"
echo -e "${BLUE} Access at: http://0.0.0.0:8000${NC}"
else
echo -e "${RED}✗ Failed to start SillyTavern${NC}"
fi
}
# Function to start KoboldCPP
start_koboldcpp() {
select_model
echo -e "${YELLOW}Starting KoboldCPP in screen session '${KOBOLD_SCREEN}'...${NC}"
screen -dmS "$KOBOLD_SCREEN" bash -c "cd /mnt/data/ai/llm && ./koboldcpp-linux-x64 --model '$KOBOLD_MODEL' --host 0.0.0.0 --port 5001 --contextsize 8192 --gpulayers 999"
sleep 2
if check_screen "$KOBOLD_SCREEN"; then
echo -e "${GREEN}✓ KoboldCPP started successfully!${NC}"
echo -e "${BLUE} Model: $(basename "$KOBOLD_MODEL")${NC}"
echo -e "${BLUE} Access at: http://0.0.0.0:5001${NC}"
else
echo -e "${RED}✗ Failed to start KoboldCPP${NC}"
fi
}
# Function to stop a service
stop_service() {
local service=$1
local screen_name=$2
echo -e "${YELLOW}Stopping ${service}...${NC}"
screen -S "$screen_name" -X quit
sleep 1
if ! check_screen "$screen_name"; then
echo -e "${GREEN}✓ ${service} stopped successfully${NC}"
else
echo -e "${RED}✗ Failed to stop ${service}${NC}"
fi
}
# Function to show service status
show_status() {
echo -e "${CYAN}╔═══════════════════════════════════════╗${NC}"
echo -e "${CYAN}║ Service Status Overview ║${NC}"
echo -e "${CYAN}╚═══════════════════════════════════════╝${NC}"
echo
local st_running=false
local kc_running=false
# Check SillyTavern
if check_screen "$SILLYTAVERN_SCREEN"; then
st_running=true
echo -e " ${GREEN}●${NC} SillyTavern: ${GREEN}Running${NC} (screen: ${SILLYTAVERN_SCREEN})"
echo -e " ${BLUE}→ http://0.0.0.0:8000${NC}"
else
echo -e " ${RED}●${NC} SillyTavern: ${RED}Not running${NC}"
fi
echo
# Check KoboldCPP
if check_screen "$KOBOLD_SCREEN"; then
kc_running=true
echo -e " ${GREEN}●${NC} KoboldCPP: ${GREEN}Running${NC} (screen: ${KOBOLD_SCREEN})"
echo -e " ${BLUE}→ http://0.0.0.0:5001${NC}"
else
echo -e " ${RED}●${NC} KoboldCPP: ${RED}Not running${NC}"
fi
echo
}
# Function to handle service management
manage_services() {
local st_running=$(check_screen "$SILLYTAVERN_SCREEN" && echo "true" || echo "false")
local kc_running=$(check_screen "$KOBOLD_SCREEN" && echo "true" || echo "false")
# If both services are running
if [[ "$st_running" == "true" ]] && [[ "$kc_running" == "true" ]]; then
echo -e "${GREEN}Both services are running.${NC}"
echo
echo "1) Attach to SillyTavern"
echo "2) Attach to KoboldCPP"
echo "3) Restart SillyTavern"
echo "4) Restart KoboldCPP"
echo "5) Stop all services"
echo "6) Exit"
read -p "Your choice (1-6): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to SillyTavern... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$SILLYTAVERN_SCREEN"
;;
2)
echo -e "${BLUE}Attaching to KoboldCPP... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$KOBOLD_SCREEN"
;;
3)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
echo
start_sillytavern
;;
4)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
echo
start_koboldcpp
;;
5)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
;;
6)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If only SillyTavern is running
elif [[ "$st_running" == "true" ]]; then
echo -e "${YELLOW}Only SillyTavern is running.${NC}"
echo
echo "1) Attach to SillyTavern"
echo "2) Start KoboldCPP"
echo "3) Restart SillyTavern"
echo "4) Stop SillyTavern"
echo "5) Exit"
read -p "Your choice (1-5): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to SillyTavern... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$SILLYTAVERN_SCREEN"
;;
2)
start_koboldcpp
;;
3)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
echo
start_sillytavern
;;
4)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
;;
5)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If only KoboldCPP is running
elif [[ "$kc_running" == "true" ]]; then
echo -e "${YELLOW}Only KoboldCPP is running.${NC}"
echo
echo "1) Attach to KoboldCPP"
echo "2) Start SillyTavern"
echo "3) Restart KoboldCPP"
echo "4) Stop KoboldCPP"
echo "5) Exit"
read -p "Your choice (1-5): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to KoboldCPP... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$KOBOLD_SCREEN"
;;
2)
start_sillytavern
;;
3)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
echo
start_koboldcpp
;;
4)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
;;
5)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If no services are running
else
echo -e "${YELLOW}No services are running.${NC}"
echo
echo "1) Start both services"
echo "2) Start SillyTavern only"
echo "3) Start KoboldCPP only"
echo "4) Exit"
read -p "Your choice (1-4): " choice
case $choice in
1)
start_sillytavern
echo
start_koboldcpp
;;
2)
start_sillytavern
;;
3)
start_koboldcpp
;;
4)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
fi
}
# Main script
echo -e "${BLUE}╔═══════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ LLM Services Manager ║${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════╝${NC}"
echo
# Show status
show_status
# Show separator and manage services
echo -e "${CYAN}═══════════════════════════════════════${NC}"
manage_services
echo
echo -e "${BLUE}Quick reference:${NC}"
echo "• List sessions: screen -ls"
echo "• Attach: screen -r <name>"
echo "• Detach: Ctrl+A then D"

View File

@@ -0,0 +1,119 @@
# SSH Utility - Smart SSH Connection Manager
A transparent SSH wrapper that automatically chooses between local and remote connections based on network connectivity.
## What it does
This utility acts as a drop-in replacement for the `ssh` command that intelligently routes connections:
- When you type `ssh desktop`, it automatically checks if your local network is available
- If local: connects via `desktop-local` (faster local connection)
- If remote: connects via `desktop` (Tailscale/VPN connection)
- All other SSH usage passes through unchanged (`ssh --help`, `ssh user@host`, etc.)
## Installation
The utility is automatically compiled and installed to `~/.local/bin/ssh` via Ansible when you run your dotfiles setup.
## Configuration
1. Copy the example config:
```bash
mkdir -p ~/.config/ssh-util
cp ~/.dotfiles/config/ssh-util/config.yaml ~/.config/ssh-util/
```
2. Edit `~/.config/ssh-util/config.yaml` to match your setup:
```yaml
smart_aliases:
desktop:
primary: "desktop-local" # SSH config entry for local connection
fallback: "desktop" # SSH config entry for remote connection
check_host: "192.168.86.22" # IP to ping for connectivity test
timeout: "2s" # Ping timeout
```
3. Ensure your `~/.ssh/config` contains the referenced host entries:
```
Host desktop
HostName mennos-desktop
User menno
Port 400
ForwardAgent yes
AddKeysToAgent yes
Host desktop-local
HostName 192.168.86.22
User menno
Port 400
ForwardAgent yes
AddKeysToAgent yes
```
## Usage
Once configured, simply use SSH as normal:
```bash
# Smart connection - automatically chooses local vs remote
ssh desktop
# All other SSH usage works exactly the same
ssh --help
ssh --version
ssh user@example.com
ssh -L 8080:localhost:80 server
```
## How it works
1. When you run `ssh <alias>`, the utility checks if `<alias>` is defined in the smart_aliases config
2. If yes, it pings the `check_host` IP address
3. If ping succeeds: executes `ssh <primary>` instead
4. If ping fails: executes `ssh <fallback>` instead
5. If not a smart alias: passes through to real SSH unchanged
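In shell terms, the check-and-dispatch step looks roughly like this (an illustrative sketch hardcoding the `desktop` alias from the example config; the real utility does this in Go with proper config parsing and argument passthrough):
```bash
#!/usr/bin/env bash
# Hypothetical stand-in for the compiled wrapper, using the "desktop"
# alias from the example config above. "$@" is whatever extra
# arguments followed "ssh desktop".
primary="desktop-local"
fallback="desktop"
check_host="192.168.86.22"

if ping -c 1 -W 2 "$check_host" >/dev/null 2>&1; then
    exec /usr/bin/ssh "$primary" "$@"
else
    exec /usr/bin/ssh "$fallback" "$@"
fi
```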
## Troubleshooting
### SSH utility not found
Make sure `~/.local/bin` is in your PATH:
```bash
echo "$PATH" | tr ':' '\n' | grep -Fx "$HOME/.local/bin"
```
### Config not loading
Check the config file exists and has correct syntax:
```bash
ls -la ~/.config/ssh-util/config.yaml
cat ~/.config/ssh-util/config.yaml
```
### Connectivity test failing
Test manually:
```bash
ping -c 1 -W 2 192.168.86.22
```
### Falls back to real SSH
If the config cannot be loaded or parsed, the utility safely falls back to executing the real SSH binary at `/usr/bin/ssh`.
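To confirm which binary your shell is actually resolving (the first command should point at the wrapper, with the real client still reachable directly):
```bash
command -v ssh    # expected: ~/.local/bin/ssh (the wrapper)
type -a ssh       # lists every ssh on PATH, wrapper first
/usr/bin/ssh -V   # invokes the real OpenSSH client directly
```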
## Adding more aliases
To add more smart aliases, just extend the config:
```yaml
smart_aliases:
desktop:
primary: "desktop-local"
fallback: "desktop"
check_host: "192.168.86.22"
timeout: "2s"
server:
primary: "server-local"
fallback: "server-remote"
check_host: "192.168.1.100"
timeout: "1s"
```
Remember to create the corresponding entries in your `~/.ssh/config`.

View File

@@ -0,0 +1,102 @@
# SSH Utility Configuration
# This file defines smart aliases that automatically choose between local and remote connections
# Logging configuration
logging:
enabled: true
# Levels: debug, info, warn, error
level: "info"
# Formats: console, json
format: "console"
smart_aliases:
desktop:
primary: "desktop-local"
fallback: "desktop"
check_host: "192.168.1.250"
timeout: "2s"
server:
primary: "server-local"
fallback: "server"
check_host: "192.168.1.254"
timeout: "2s"
laptop:
primary: "laptop-local"
fallback: "laptop"
check_host: "192.168.1.253"
timeout: "2s"
rtlsdr:
primary: "rtlsdr-local"
fallback: "rtlsdr"
check_host: "192.168.1.252"
timeout: "2s"
# Background SSH Tunnel Definitions
tunnels:
# Example: Desktop database tunnel
desktop-database:
type: local
local_port: 5432
remote_host: database
remote_port: 5432
ssh_host: desktop # Uses smart alias logic (desktop-local/desktop)
# Example: Development API tunnel
dev-api:
type: local
local_port: 8080
remote_host: api
remote_port: 80
ssh_host: dev-server
# Example: SOCKS proxy tunnel
socks-proxy:
type: dynamic
local_port: 1080
ssh_host: bastion
# Modem web interface tunnel
modem-web:
type: local
local_port: 8443
remote_host: 192.168.1.1
remote_port: 443
ssh_host: desktop
# Tunnel Management Commands:
# ssh --tunnel --open desktop-database (or ssh -TO desktop-database)
# ssh --tunnel --close desktop-database (or ssh -TC desktop-database)
# ssh --tunnel --list (or ssh -TL)
#
# Ad-hoc tunnels (not in config):
# ssh -TO temp-api --local 8080:api:80 --via server
# Logging options:
# - enabled: true/false - whether to show any logs
# - level: debug (verbose), info (normal), warn (warnings only), error (errors only)
# - format: console (human readable), json (structured)
# Logs are written to stderr so they don't interfere with SSH output
# How it works:
# 1. When you run: ssh desktop
# 2. The utility pings 192.168.1.250 (the check_host for "desktop" above) with a 2s timeout
# 3. If ping succeeds: runs "ssh desktop-local" instead
# 4. If ping fails: runs "ssh desktop" instead
# 5. All other SSH usage (flags, user@host, etc.) passes through unchanged
# Your SSH config should contain the actual host definitions:
# Host desktop
# HostName mennos-desktop
# User menno
# Port 400
# ForwardAgent yes
# AddKeysToAgent yes
#
# Host desktop-local
# HostName 192.168.1.250
# User menno
# Port 400
# ForwardAgent yes
# AddKeysToAgent yes

View File

@@ -0,0 +1,20 @@
module ssh-util
go 1.21
require (
github.com/jedib0t/go-pretty/v6 v6.4.9
github.com/rs/zerolog v1.31.0
github.com/spf13/cobra v1.8.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
golang.org/x/sys v0.12.0 // indirect
)

View File

@@ -0,0 +1,46 @@
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jedib0t/go-pretty/v6 v6.4.9 h1:vZ6bjGg2eBSrJn365qlxGcaWu09Id+LHtrfDWlB2Usc=
github.com/jedib0t/go-pretty/v6 v6.4.9/go.mod h1:Ndk3ase2CkQbXLLNf5QDHoYb6J9WtVfmHZu9n8rk2xs=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.6.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.31.0 h1:FcTR3NnLWW+NnTwwhFWiJSZr4ECLpqCm6QsEnyvbV4A=
github.com/rs/zerolog v1.31.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.4 h1:wZRexSlwd7ZXfKINDLsO4r7WBt3gTKONc6K/VesHvHM=
github.com/stretchr/testify v1.7.4/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

File diff suppressed because it is too large

View File

@@ -0,0 +1,93 @@
---
- name: Borg Backup Installation and Configuration
block:
- name: Check if Borg is already installed
ansible.builtin.command: which borg
register: borg_check
ignore_errors: true
changed_when: false
- name: Ensure Borg is installed
ansible.builtin.package:
name: borg
state: present
become: true
when: borg_check.rc != 0
- name: Set Borg backup facts
ansible.builtin.set_fact:
borg_passphrase: "{{ lookup('community.general.onepassword', 'Borg Backup', vault='Dotfiles', field='password') }}"
borg_config_dir: "{{ ansible_env.HOME }}/.config/borg"
borg_backup_dir: "/mnt/services"
borg_repo_dir: "/mnt/object_storage/borg-repo"
- name: Create Borg directories
ansible.builtin.file:
path: "{{ borg_dir }}"
state: directory
mode: "0755"
loop:
- "{{ borg_config_dir }}"
- "/mnt/object_storage"
loop_control:
loop_var: borg_dir
become: true
- name: Check if Borg repository exists
ansible.builtin.stat:
path: "{{ borg_repo_dir }}/config"
register: borg_repo_check
become: true
- name: Initialize Borg repository
ansible.builtin.command: >
borg init --encryption=repokey {{ borg_repo_dir }}
environment:
BORG_PASSPHRASE: "{{ borg_passphrase }}"
become: true
when: not borg_repo_check.stat.exists
- name: Create Borg backup script
ansible.builtin.template:
src: templates/borg-backup.sh.j2
dest: "{{ borg_config_dir }}/backup.sh"
mode: "0755"
become: true
- name: Create Borg systemd service
ansible.builtin.template:
src: templates/borg-backup.service.j2
dest: /etc/systemd/system/borg-backup.service
mode: "0644"
become: true
register: borg_service
- name: Create Borg systemd timer
ansible.builtin.template:
src: templates/borg-backup.timer.j2
dest: /etc/systemd/system/borg-backup.timer
mode: "0644"
become: true
register: borg_timer
- name: Reload systemd daemon
ansible.builtin.systemd:
daemon_reload: true
become: true
when: borg_service.changed or borg_timer.changed
- name: Enable and start Borg backup timer
ansible.builtin.systemd:
name: borg-backup.timer
enabled: true
state: started
become: true
- name: Display Borg backup status
ansible.builtin.debug:
msg: "Borg backup is configured and will run daily at 2 AM. Logs available at /var/log/borg-backup.log"
tags:
- borg-backup
- borg
- backup

View File

@@ -0,0 +1,95 @@
---
- name: Borg Local Sync Installation and Configuration
block:
- name: Set Borg backup facts
ansible.builtin.set_fact:
borg_passphrase: "{{ lookup('community.general.onepassword', 'Borg Backup', vault='Dotfiles', field='password') }}"
borg_config_dir: "{{ ansible_env.HOME }}/.config/borg"
borg_backup_dir: "/mnt/services"
borg_repo_dir: "/mnt/object_storage/borg-repo"
- name: Create Borg local sync script
ansible.builtin.template:
src: borg-local-sync.sh.j2
dest: /usr/local/bin/borg-local-sync.sh
mode: "0755"
owner: root
group: root
become: true
tags:
- borg-local-sync
- name: Create Borg local sync systemd service
ansible.builtin.template:
src: borg-local-sync.service.j2
dest: /etc/systemd/system/borg-local-sync.service
mode: "0644"
owner: root
group: root
become: true
notify:
- reload systemd
tags:
- borg-local-sync
- name: Create Borg local sync systemd timer
ansible.builtin.template:
src: borg-local-sync.timer.j2
dest: /etc/systemd/system/borg-local-sync.timer
mode: "0644"
owner: root
group: root
become: true
notify:
- reload systemd
- restart borg-local-sync-timer
tags:
- borg-local-sync
- name: Create log file for Borg local sync
ansible.builtin.file:
path: /var/log/borg-local-sync.log
state: touch
owner: root
group: root
mode: "0644"
become: true
tags:
- borg-local-sync
- name: Enable and start Borg local sync timer
ansible.builtin.systemd:
name: borg-local-sync.timer
enabled: true
state: started
daemon_reload: true
become: true
tags:
- borg-local-sync
- name: Add logrotate configuration for Borg local sync
ansible.builtin.copy:
content: |
/var/log/borg-local-sync.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 644 root root
}
dest: /etc/logrotate.d/borg-local-sync
mode: "0644"
owner: root
group: root
become: true
tags:
- borg-local-sync
- borg
- backup
tags:
- borg-local-sync
- borg
- backup

View File

@@ -0,0 +1,88 @@
---
- name: Dynamic DNS setup
block:
- name: Create systemd environment file for dynamic DNS
ansible.builtin.template:
src: "{{ playbook_dir }}/templates/dynamic-dns-systemd.env.j2"
dest: "/etc/dynamic-dns-systemd.env"
mode: "0600"
owner: root
group: root
become: true
- name: Create dynamic DNS wrapper script
ansible.builtin.copy:
dest: "/usr/local/bin/dynamic-dns-update.sh"
mode: "0755"
content: |
#!/bin/bash
# Run dynamic DNS update (binary compiled by utils.yml)
{{ ansible_user_dir }}/.local/bin/dynamic-dns-cf -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl,sathub.de,sathub.nl" 2>&1 | logger -t dynamic-dns
become: true
- name: Create dynamic DNS systemd timer
ansible.builtin.copy:
dest: "/etc/systemd/system/dynamic-dns.timer"
mode: "0644"
content: |
[Unit]
Description=Dynamic DNS Update Timer
Requires=dynamic-dns.service
[Timer]
OnCalendar=*:0/15
Persistent=true
[Install]
WantedBy=timers.target
become: true
register: ddns_timer
- name: Create dynamic DNS systemd service
ansible.builtin.copy:
dest: "/etc/systemd/system/dynamic-dns.service"
mode: "0644"
content: |
[Unit]
Description=Dynamic DNS Update
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/dynamic-dns-update.sh
EnvironmentFile=/etc/dynamic-dns-systemd.env
User={{ ansible_user }}
Group={{ ansible_user }}
[Install]
WantedBy=multi-user.target
become: true
register: ddns_service
- name: Reload systemd daemon
ansible.builtin.systemd:
daemon_reload: true
become: true
when: ddns_timer.changed or ddns_service.changed
- name: Enable and start dynamic DNS timer
ansible.builtin.systemd:
name: dynamic-dns.timer
enabled: true
state: started
become: true
- name: Display setup completion message
ansible.builtin.debug:
msg: |
Dynamic DNS setup complete!
- Systemd timer: sudo systemctl status dynamic-dns.timer
- Check logs: sudo journalctl -u dynamic-dns.service -f
- Manual run: sudo /usr/local/bin/dynamic-dns-update.sh
- Domains: vleeuwen.me, mvl.sh, mennovanleeuwen.nl, sathub.de, sathub.nl
when: inventory_hostname == 'mennos-server' or inventory_hostname == 'mennos-vps'
tags:
- dynamic-dns

View File

@@ -70,7 +70,7 @@
 - name: Include JuiceFS Redis tasks
   ansible.builtin.include_tasks: services/redis/redis.yml
-  when: inventory_hostname == 'mennos-cloud-server'
+  when: inventory_hostname == 'mennos-server'
 - name: Enable and start JuiceFS service
   ansible.builtin.systemd:

View File

@@ -0,0 +1,165 @@
---
- name: Server setup
block:
- name: Ensure openssh-server is installed on Arch-based systems
ansible.builtin.package:
name: openssh
state: present
become: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure openssh-server is installed on non-Arch systems
ansible.builtin.package:
name: openssh-server
state: present
become: true
when: ansible_pkg_mgr != 'pacman'
- name: Ensure Borg is installed on Arch-based systems
ansible.builtin.package:
name: borg
state: present
become: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure Borg is installed on Debian/Ubuntu systems
ansible.builtin.package:
name: borgbackup
state: present
become: true
when: ansible_pkg_mgr != 'pacman'
- name: Include JuiceFS tasks
ansible.builtin.include_tasks: juicefs.yml
tags:
- juicefs
- name: Include Dynamic DNS tasks
ansible.builtin.include_tasks: dynamic-dns.yml
tags:
- dynamic-dns
- name: Include Borg Backup tasks
ansible.builtin.include_tasks: borg-backup.yml
tags:
- borg-backup
- name: Include Borg Local Sync tasks
ansible.builtin.include_tasks: borg-local-sync.yml
tags:
- borg-local-sync
- name: System performance optimizations
ansible.posix.sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: true
become: true
loop:
- { name: "fs.file-max", value: "2097152" } # Max open files for the entire system
- { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have
- { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory
- { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches
- { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket
- { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input
- { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state
- { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections
- name: Include service tasks
ansible.builtin.include_tasks: "services/{{ item.name }}/{{ item.name }}.yml"
loop: "{{ services | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list if specific_service is not defined else services | selectattr('name', 'equalto', specific_service) | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list }}"
loop_control:
label: "{{ item.name }}"
tags:
- services
- always
vars:
services:
- name: dashy
enabled: true
hosts:
- mennos-server
- name: gitea
enabled: true
hosts:
- mennos-server
- name: factorio
enabled: true
hosts:
- mennos-server
- name: dozzle
enabled: true
hosts:
- mennos-server
- name: beszel
enabled: true
hosts:
- mennos-server
- name: caddy
enabled: true
hosts:
- mennos-server
- name: golink
enabled: true
hosts:
- mennos-server
- name: immich
enabled: true
hosts:
- mennos-server
- name: plex
enabled: true
hosts:
- mennos-server
- name: tautulli
enabled: true
hosts:
- mennos-server
- name: downloaders
enabled: true
hosts:
- mennos-server
- name: wireguard
enabled: true
hosts:
- mennos-server
- name: nextcloud
enabled: true
hosts:
- mennos-server
- name: cloudreve
enabled: true
hosts:
- mennos-server
- name: echoip
enabled: true
hosts:
- mennos-server
- name: arr-stack
enabled: true
hosts:
- mennos-server
- name: home-assistant
enabled: true
hosts:
- mennos-server
- name: privatebin
enabled: true
hosts:
- mennos-server
- name: unifi-network-application
enabled: true
hosts:
- mennos-server
- name: avorion
enabled: false
hosts:
- mennos-server
- name: sathub
enabled: true
hosts:
- mennos-server
- name: necesse
enabled: true
hosts:
- mennos-server

View File

@@ -3,8 +3,8 @@
   block:
     - name: Set ArrStack directories
       ansible.builtin.set_fact:
-        arr_stack_service_dir: "{{ ansible_env.HOME }}/services/arr-stack"
-        arr_stack_data_dir: "/mnt/object_storage/services/arr-stack"
+        arr_stack_service_dir: "{{ ansible_env.HOME }}/.services/arr-stack"
+        arr_stack_data_dir: "/mnt/services/arr-stack"
     - name: Create ArrStack directory
       ansible.builtin.file:
@@ -35,3 +35,4 @@
   tags:
     - services
     - arr_stack
+    - arr-stack

View File

@@ -13,10 +13,14 @@ services:
       - host.docker.internal:host-gateway
     volumes:
       - {{ arr_stack_data_dir }}/radarr-config:/config
-      - /mnt/object_storage:/storage
+      - /mnt/data:/mnt/data
     restart: "unless-stopped"
     networks:
       - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 2G
   sonarr:
     image: linuxserver/sonarr:latest
@@ -27,7 +31,7 @@ services:
       - TZ=Europe/Amsterdam
     volumes:
       - {{ arr_stack_data_dir }}/sonarr-config:/config
-      - /mnt/object_storage:/storage
+      - /mnt/data:/mnt/data
     ports:
       - 8989:8989
     extra_hosts:
@@ -35,23 +39,32 @@ services:
     restart: unless-stopped
     networks:
       - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 2G
-  whisparr:
-    image: ghcr.io/hotio/whisparr:latest
+  bazarr:
+    image: ghcr.io/hotio/bazarr:latest
+    container_name: bazarr
     environment:
       - PUID=1000
       - PGID=100
       - TZ=Europe/Amsterdam
     ports:
-      - 8686:8686
+      - 6767:6767
     extra_hosts:
       - host.docker.internal:host-gateway
     volumes:
-      - {{ arr_stack_data_dir }}/whisparr-config:/config
-      - /mnt/object_storage:/storage
+      - {{ arr_stack_data_dir }}/bazarr-config:/config
+      - /mnt/data:/mnt/data
     restart: unless-stopped
     networks:
       - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 512M
   prowlarr:
     container_name: prowlarr
@@ -69,6 +82,10 @@ services:
     restart: unless-stopped
     networks:
       - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 512M
   flaresolverr:
     image: ghcr.io/flaresolverr/flaresolverr:latest
@@ -85,16 +102,19 @@ services:
     restart: unless-stopped
     networks:
       - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 1G
-  jellyseerr:
-    image: fallenbagel/jellyseerr
-    container_name: jellyseerr
+  overseerr:
+    image: sctx/overseerr:latest
     environment:
       - PUID=1000
       - PGID=100
       - TZ=Europe/Amsterdam
     volumes:
-      - {{ arr_stack_data_dir }}/jellyseerr-config:/app/config
+      - {{ arr_stack_data_dir }}/overseerr-config:/app/config
     ports:
       - 5055:5055
     extra_hosts:
@@ -103,10 +123,60 @@ services:
     networks:
       - arr_stack_net
       - caddy_network
+    deploy:
+      resources:
+        limits:
+          memory: 512M
+  tdarr:
+    image: ghcr.io/haveagitgat/tdarr:latest
+    container_name: tdarr
+    environment:
+      - PUID=1000
+      - PGID=100
+      - TZ=Europe/Amsterdam
+      - serverIP=0.0.0.0
+      - serverPort=8266
+      - webUIPort=8265
+      - internalNode=true
+      - inContainer=true
+      - ffmpegVersion=7
+      - nodeName=MyInternalNode
+      - auth=false
+      - openBrowser=true
+      - maxLogSizeMB=10
+      - cronPluginUpdate=
+      - NVIDIA_DRIVER_CAPABILITIES=all
+      - NVIDIA_VISIBLE_DEVICES=all
+    volumes:
+      - {{ arr_stack_data_dir }}/tdarr-server:/app/server
+      - {{ arr_stack_data_dir }}/tdarr-config:/app/configs
+      - {{ arr_stack_data_dir }}/tdarr-logs:/app/logs
+      - /mnt/data:/media
+      - {{ arr_stack_data_dir }}/tdarr-cache:/temp
+    ports:
+      - 8265:8265
+      - 8266:8266
+    extra_hosts:
+      - host.docker.internal:host-gateway
+    restart: unless-stopped
+    runtime: nvidia
+    devices:
+      - /dev/dri:/dev/dri
+    networks:
+      - arr_stack_net
+    deploy:
+      resources:
+        limits:
+          memory: 4G
+        reservations:
+          devices:
+            - driver: nvidia
+              count: all
+              capabilities: [gpu]
 networks:
   arr_stack_net:
+    name: arr_stack_net
   caddy_network:
     external: true
     name: caddy_default

View File

@@ -0,0 +1,37 @@
---
- name: Deploy Avorion service
block:
- name: Set Avorion directories
ansible.builtin.set_fact:
avorion_service_dir: "{{ ansible_env.HOME }}/.services/avorion"
avorion_data_dir: "/mnt/services/avorion"
- name: Create Avorion directory
ansible.builtin.file:
path: "{{ avorion_service_dir }}"
state: directory
mode: "0755"
- name: Create Avorion data directory
ansible.builtin.file:
path: "{{ avorion_data_dir }}"
state: directory
mode: "0755"
- name: Deploy Avorion docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ avorion_service_dir }}/docker-compose.yml"
mode: "0644"
register: avorion_compose
- name: Stop Avorion service
ansible.builtin.command: docker compose -f "{{ avorion_service_dir }}/docker-compose.yml" down --remove-orphans
when: avorion_compose.changed
- name: Start Avorion service
ansible.builtin.command: docker compose -f "{{ avorion_service_dir }}/docker-compose.yml" up -d
when: avorion_compose.changed
tags:
- services
- avorion

View File

@@ -0,0 +1,15 @@
services:
avorion:
image: rfvgyhn/avorion:latest
volumes:
- {{ avorion_data_dir }}:/home/steam/.avorion/galaxies/avorion_galaxy
ports:
- 27000:27000
- 27000:27000/udp
- 27003:27003/udp
- 27020:27020/udp
- 27021:27021/udp
deploy:
resources:
limits:
memory: 4G

View File

@@ -3,7 +3,7 @@
   block:
     - name: Set Beszel directories
       ansible.builtin.set_fact:
-        beszel_service_dir: "{{ ansible_env.HOME }}/services/beszel"
+        beszel_service_dir: "{{ ansible_env.HOME }}/.services/beszel"
         beszel_data_dir: "/mnt/services/beszel"
     - name: Create Beszel directory

View File

@@ -10,6 +10,10 @@ services:
     networks:
       - beszel-net
       - caddy_network
+    deploy:
+      resources:
+        limits:
+          memory: 256M
   beszel-agent:
     image: henrygd/beszel-agent:latest
@@ -21,6 +25,10 @@ services:
     environment:
       LISTEN: /beszel_socket/beszel.sock
       KEY: 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKkSIQDh1vS8lG+2Uw/9dK1eOgCHVCgQfP+Bfk4XPkdn'
+    deploy:
+      resources:
+        limits:
+          memory: 128M
 networks:
   beszel-net:

View File

@@ -0,0 +1,354 @@
# Global configuration for country blocking
{
servers {
protocols h1 h2 h3
}
}
# Country allow list snippet using MaxMind GeoLocation - reusable across all sites
{% if enable_country_blocking | default(false) and allowed_countries_codes | default([]) | length > 0 %}
(country_allow) {
@allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ allowed_countries_codes | join(' ') }}
}
}
}
respond @not_allowed_countries "Access denied" 403
}
{% else %}
(country_allow) {
# Country allow list disabled
}
{% endif %}
# European country allow list - allows all European countries only
{% if eu_countries_codes | default([]) | length > 0 %}
(eu_country_allow) {
@eu_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@eu_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ eu_countries_codes | join(' ') }}
}
}
}
respond @eu_not_allowed_countries "Access denied" 403
}
{% else %}
(eu_country_allow) {
# EU country allow list disabled
}
{% endif %}
# Trusted country allow list - allows US, Australia, New Zealand, and Japan
{% if trusted_countries_codes | default([]) | length > 0 %}
(trusted_country_allow) {
@trusted_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@trusted_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ trusted_countries_codes | join(' ') }}
}
}
}
respond @trusted_not_allowed_countries "Access denied" 403
}
{% else %}
(trusted_country_allow) {
# Trusted country allow list disabled
}
{% endif %}
# Sathub country allow list - combines EU and trusted countries
{% if eu_countries_codes | default([]) | length > 0 and trusted_countries_codes | default([]) | length > 0 %}
(sathub_country_allow) {
@sathub_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@sathub_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ (eu_countries_codes + trusted_countries_codes) | join(' ') }}
}
}
}
respond @sathub_not_allowed_countries "Access denied" 403
}
{% else %}
(sathub_country_allow) {
# Sathub country allow list disabled
}
{% endif %}
{% if inventory_hostname == 'mennos-server' %}
git.mvl.sh {
import country_allow
reverse_proxy gitea:3000
tls {{ caddy_email }}
}
git.vleeuwen.me {
import country_allow
redir https://git.mvl.sh{uri}
tls {{ caddy_email }}
}
df.mvl.sh {
import country_allow
redir / https://git.mvl.sh/vleeuwenmenno/dotfiles/raw/branch/master/setup.sh
tls {{ caddy_email }}
}
fsm.mvl.sh {
import country_allow
reverse_proxy factorio-server-manager:80
tls {{ caddy_email }}
}
fsm.vleeuwen.me {
import country_allow
redir https://fsm.mvl.sh{uri}
tls {{ caddy_email }}
}
beszel.mvl.sh {
import country_allow
reverse_proxy beszel:8090
tls {{ caddy_email }}
}
beszel.vleeuwen.me {
import country_allow
redir https://beszel.mvl.sh{uri}
tls {{ caddy_email }}
}
sathub.de {
import sathub_country_allow
handle {
reverse_proxy sathub-frontend:4173
}
# Enable compression
encode gzip
# Security headers
header {
X-Frame-Options "SAMEORIGIN"
X-Content-Type-Options "nosniff"
X-XSS-Protection "1; mode=block"
Referrer-Policy "strict-origin-when-cross-origin"
Strict-Transport-Security "max-age=31536000; includeSubDomains"
}
tls {{ caddy_email }}
}
api.sathub.de {
import sathub_country_allow
reverse_proxy sathub-backend:4001
tls {{ caddy_email }}
}
sathub.nl {
import sathub_country_allow
redir https://sathub.de{uri}
tls {{ caddy_email }}
}
photos.mvl.sh {
import country_allow
reverse_proxy immich:2283
tls {{ caddy_email }}
}
photos.vleeuwen.me {
import country_allow
redir https://photos.mvl.sh{uri}
tls {{ caddy_email }}
}
home.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
home.vleeuwen.me {
import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
unifi.mvl.sh {
reverse_proxy unifi-controller:8443 {
transport http {
tls_insecure_skip_verify
}
header_up Host {host}
}
tls {{ caddy_email }}
}
hotspot.mvl.sh {
reverse_proxy unifi-controller:8843 {
transport http {
tls_insecure_skip_verify
}
header_up Host {host}
}
tls {{ caddy_email }}
}
hotspot.mvl.sh:80 {
redir https://hotspot.mvl.sh{uri} permanent
}
bin.mvl.sh {
import country_allow
reverse_proxy privatebin:8080
tls {{ caddy_email }}
}
ip.mvl.sh ip.vleeuwen.me {
import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
http://ip.mvl.sh http://ip.vleeuwen.me {
import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
}
overseerr.mvl.sh {
import country_allow
reverse_proxy overseerr:5055
tls {{ caddy_email }}
}
overseerr.vleeuwen.me {
import country_allow
redir https://overseerr.mvl.sh{uri}
tls {{ caddy_email }}
}
plex.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:32400 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
plex.vleeuwen.me {
import country_allow
redir https://plex.mvl.sh{uri}
tls {{ caddy_email }}
}
tautulli.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:8181 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
tautulli.vleeuwen.me {
import country_allow
redir https://tautulli.mvl.sh{uri}
tls {{ caddy_email }}
}
cloud.mvl.sh {
import country_allow
reverse_proxy cloudreve:5212 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
cloud.vleeuwen.me {
import country_allow
redir https://cloud.mvl.sh{uri}
tls {{ caddy_email }}
}
collabora.mvl.sh {
import country_allow
reverse_proxy collabora:9980 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
drive.mvl.sh drive.vleeuwen.me {
import country_allow
# CalDAV and CardDAV redirects
redir /.well-known/carddav /remote.php/dav/ 301
redir /.well-known/caldav /remote.php/dav/ 301
# Handle other .well-known requests
handle /.well-known/* {
reverse_proxy nextcloud:80 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
}
# Main reverse proxy configuration with proper headers
reverse_proxy nextcloud:80 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
# Security headers
header {
# HSTS header for enhanced security (required by Nextcloud)
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
# Additional security headers recommended for Nextcloud
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
Referrer-Policy "no-referrer"
X-XSS-Protection "1; mode=block"
X-Permitted-Cross-Domain-Policies "none"
X-Robots-Tag "noindex, nofollow"
}
tls {{ caddy_email }}
}
{% endif %}

View File

@@ -0,0 +1,59 @@
---
- name: Deploy Caddy service
block:
- name: Set Caddy directories
ansible.builtin.set_fact:
caddy_service_dir: "{{ ansible_env.HOME }}/.services/caddy"
caddy_data_dir: "/mnt/services/caddy"
geoip_db_path: "/mnt/services/echoip"
caddy_email: "{{ lookup('community.general.onepassword', 'Caddy (Proxy)', vault='Dotfiles', field='email') }}"
- name: Create Caddy directory
ansible.builtin.file:
path: "{{ caddy_service_dir }}"
state: directory
mode: "0755"
- name: Setup country blocking
ansible.builtin.include_tasks: country-blocking.yml
- name: Copy Dockerfile for custom Caddy build
ansible.builtin.copy:
src: Dockerfile
dest: "{{ caddy_service_dir }}/Dockerfile"
mode: "0644"
register: caddy_dockerfile
- name: Create Caddy network
ansible.builtin.command: docker network create caddy_default
register: create_caddy_network
failed_when:
- create_caddy_network.rc != 0
- "'already exists' not in create_caddy_network.stderr"
changed_when: create_caddy_network.rc == 0
- name: Deploy Caddy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ caddy_service_dir }}/docker-compose.yml"
mode: "0644"
register: caddy_compose
- name: Deploy Caddy Caddyfile
ansible.builtin.template:
src: Caddyfile.j2
dest: "{{ caddy_service_dir }}/Caddyfile"
mode: "0644"
register: caddy_file
- name: Stop Caddy service
ansible.builtin.command: docker compose -f "{{ caddy_service_dir }}/docker-compose.yml" down --remove-orphans
when: caddy_compose.changed or caddy_file.changed
- name: Start Caddy service
ansible.builtin.command: docker compose -f "{{ caddy_service_dir }}/docker-compose.yml" up -d
when: caddy_compose.changed or caddy_file.changed
tags:
- caddy
- services
- reverse-proxy

View File

@@ -21,6 +21,10 @@ services:
- "host.docker.internal:host-gateway" - "host.docker.internal:host-gateway"
networks: networks:
- caddy_network - caddy_network
deploy:
resources:
limits:
memory: 512M
networks: networks:
caddy_network: caddy_network:

View File

@@ -0,0 +1,32 @@
- name: Deploy Cloudreve service
tags:
- services
- cloudreve
block:
- name: Set Cloudreve directories
ansible.builtin.set_fact:
cloudreve_service_dir: "{{ ansible_env.HOME }}/.services/cloudreve"
cloudreve_data_dir: "/mnt/services/cloudreve"
- name: Create Cloudreve directory
ansible.builtin.file:
path: "{{ cloudreve_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Cloudreve docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ cloudreve_service_dir }}/docker-compose.yml"
mode: "0644"
register: cloudreve_compose
- name: Stop Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
when: cloudreve_compose.changed
- name: Start Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" up -d
changed_when: false
when: cloudreve_compose.changed

View File

@@ -0,0 +1,67 @@
services:
cloudreve:
image: cloudreve/cloudreve:latest
depends_on:
- postgresql
- redis
restart: always
ports:
- 5212:5212
networks:
- caddy_network
- cloudreve
environment:
- CR_CONF_Database.Type=postgres
- CR_CONF_Database.Host=postgresql
- CR_CONF_Database.User=cloudreve
- CR_CONF_Database.Name=cloudreve
- CR_CONF_Database.Port=5432
- CR_CONF_Redis.Server=redis:6379
volumes:
- {{ cloudreve_data_dir }}/data:/cloudreve/data
postgresql:
image: postgres:17
environment:
- POSTGRES_USER=cloudreve
- POSTGRES_DB=cloudreve
- POSTGRES_HOST_AUTH_METHOD=trust
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/postgres:/var/lib/postgresql/data
collabora:
image: collabora/code
restart: unless-stopped
ports:
- 9980:9980
environment:
- domain=collabora\\.mvl\\.sh
- username=admin
- password=Dt3hgIJOPr3rgh
- dictionaries=en_US
- TZ=Europe/Amsterdam
- extra_params=--o:ssl.enable=false --o:ssl.termination=true
networks:
- cloudreve
- caddy_network
deploy:
resources:
limits:
memory: 1G
redis:
image: redis:latest
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/redis:/data
networks:
cloudreve:
name: cloudreve
driver: bridge
caddy_network:
name: caddy_default
external: true

View File

@@ -0,0 +1,308 @@
pageInfo:
title: Menno's Home
navLinks: []
sections:
- name: Selfhosted
items:
- title: Plex
icon: http://mennos-server:4000/assets/plex.svg
url: https://plex.mvl.sh
statusCheckUrl: https://plex.mvl.sh/identity
statusCheck: true
id: 0_1035_plex
- title: Tautulli
icon: http://mennos-server:4000/assets/tautulli.svg
url: https://tautulli.mvl.sh
id: 1_1035_tautulli
statusCheck: true
- title: Overseerr
icon: http://mennos-server:4000/assets/overseerr.svg
url: https://overseerr.mvl.sh
id: 2_1035_overseerr
statusCheck: true
- title: Immich
icon: http://mennos-server:4000/assets/immich.svg
url: https://photos.mvl.sh
id: 3_1035_immich
statusCheck: true
- title: Nextcloud
icon: http://mennos-server:4000/assets/nextcloud.svg
url: https://drive.mvl.sh
id: 3_1035_nxtcld
statusCheck: true
- title: ComfyUI
icon: http://mennos-server:8188/assets/favicon.ico
url: http://mennos-server:8188
statusCheckUrl: http://host.docker.internal:8188/api/system_stats
id: 3_1035_comfyui
statusCheck: true
displayData:
sortBy: default
rows: 1
cols: 2
collapsed: false
hideForGuests: false
- name: Media Management
items:
- title: Sonarr
icon: http://mennos-server:4000/assets/sonarr.svg
url: http://go/sonarr
id: 0_1533_sonarr
- title: Radarr
icon: http://mennos-server:4000/assets/radarr.svg
url: http://go/radarr
id: 1_1533_radarr
- title: Prowlarr
icon: http://mennos-server:4000/assets/prowlarr.svg
url: http://go/prowlarr
id: 2_1533_prowlarr
- title: Tdarr
icon: http://mennos-server:4000/assets/tdarr.png
url: http://go/tdarr
id: 3_1533_tdarr
- name: Kagi
items:
- title: Kagi Search
icon: favicon
url: https://kagi.com/
id: 0_380_kagisearch
- title: Kagi Translate
icon: favicon
url: https://translate.kagi.com/
id: 1_380_kagitranslate
- title: Kagi Assistant
icon: favicon
url: https://kagi.com/assistant
id: 2_380_kagiassistant
- name: News
items:
- title: Nu.nl
icon: http://mennos-server:4000/assets/nunl.svg
url: https://www.nu.nl/
id: 0_380_nu
- title: Tweakers.net
icon: favicon
url: https://www.tweakers.net/
id: 1_380_tweakers
- title: NL Times
icon: favicon
url: https://www.nltimes.nl/
id: 2_380_nl_times
- name: Downloaders
items:
- title: qBittorrent
icon: http://mennos-server:4000/assets/qbittorrent.svg
url: http://go/qbit
id: 0_1154_qbittorrent
tags:
- download
- torrent
- yarr
- title: Sabnzbd
icon: http://mennos-server:4000/assets/sabnzbd.svg
url: http://go/sabnzbd
id: 1_1154_sabnzbd
tags:
- download
- nzb
- yarr
- name: Git
items:
- title: GitHub
icon: http://mennos-server:4000/assets/github.svg
url: https://github.com/vleeuwenmenno
id: 0_292_github
tags:
- repo
- git
- hub
- title: Gitea
icon: http://mennos-server:4000/assets/gitea.svg
url: http://git.mvl.sh/vleeuwenmenno
id: 1_292_gitea
tags:
- repo
- git
- tea
- name: Server Monitoring
items:
- title: Beszel
icon: http://mennos-server:4000/assets/beszel.svg
url: http://go/beszel
tags:
- monitoring
- logs
id: 0_1725_beszel
- title: Dozzle
icon: http://mennos-server:4000/assets/dozzle.svg
url: http://go/dozzle
id: 1_1725_dozzle
tags:
- monitoring
- logs
- title: UpDown.io Status
icon: far fa-signal
url: http://go/status
id: 2_1725_updowniostatus
tags:
- monitoring
- logs
- name: Tools
items:
- title: Home Assistant
icon: http://mennos-server:4000/assets/home-assistant.svg
url: http://go/homeassistant
id: 0_529_homeassistant
- title: Tailscale
icon: http://mennos-server:4000/assets/tailscale.svg
url: http://go/tailscale
id: 1_529_tailscale
- title: GliNet KVM
icon: http://mennos-server:4000/assets/glinet.svg
url: http://go/glkvm
id: 2_529_glinetkvm
- title: Unifi Network Controller
icon: http://mennos-server:4000/assets/unifi.svg
url: http://go/unifi
id: 3_529_unifinetworkcontroller
- title: Dashboard Icons
icon: favicon
url: https://dashboardicons.com/
id: 4_529_dashboardicons
- name: Weather
items:
- title: Buienradar
icon: favicon
url: https://www.buienradar.nl/weer/Beverwijk/NL/2758998
id: 0_529_buienradar
- title: ClearOutside
icon: favicon
url: https://clearoutside.com/forecast/52.49/4.66
id: 1_529_clearoutside
- title: Windy
icon: favicon
url: https://www.windy.com/
id: 2_529_windy
- title: Meteoblue
icon: favicon
url: https://www.meteoblue.com/en/country/weather/radar/the-netherlands_the-netherlands_2750405
id: 2_529_meteoblue
- name: DiscountOffice
displayData:
sortBy: default
rows: 1
cols: 3
collapsed: false
hideForGuests: false
items:
- title: DiscountOffice.nl
icon: favicon
url: https://discountoffice.nl/
id: 0_1429_discountofficenl
tags:
- do
- discount
- work
- title: DiscountOffice.be
icon: favicon
url: https://discountoffice.be/
id: 1_1429_discountofficebe
tags:
- do
- discount
- work
- title: Admin NL
icon: favicon
url: https://discountoffice.nl/administrator
id: 2_1429_adminnl
tags:
- do
- discount
- work
- title: Admin BE
icon: favicon
url: https://discountoffice.be/administrator
id: 3_1429_adminbe
tags:
- do
- discount
- work
- title: Subsites
icon: favicon
url: https://elastomappen.nl
id: 4_1429_subsites
tags:
- do
- discount
- work
- title: Proxmox
icon: http://mennos-server:4000/assets/proxmox.svg
url: https://www.transip.nl/cp/vps/prm/350680/
id: 5_1429_proxmox
tags:
- do
- discount
- work
- title: Transip
icon: favicon
url: https://www.transip.nl/cp/vps/prm/350680/
id: 6_1429_transip
tags:
- do
- discount
- work
- title: Kibana
icon: http://mennos-server:4000/assets/kibana.svg
url: http://go/kibana
id: 7_1429_kibana
tags:
- do
- discount
- work
appConfig:
layout: auto
iconSize: large
theme: nord
startingView: default
defaultOpeningMethod: sametab
statusCheck: false
statusCheckInterval: 0
routingMode: history
enableMultiTasking: false
widgetsAlwaysUseProxy: false
webSearch:
disableWebSearch: false
searchEngine: https://kagi.com/search?q=
openingMethod: newtab
searchBangs: {}
enableFontAwesome: true
enableMaterialDesignIcons: false
hideComponents:
hideHeading: false
hideNav: true
hideSearch: false
hideSettings: true
hideFooter: false
auth:
enableGuestAccess: false
users: []
enableOidc: false
oidc:
adminRole: "false"
adminGroup: "false"
enableHeaderAuth: false
headerAuth:
userHeader: REMOTE_USER
proxyWhitelist: []
enableKeycloak: false
showSplashScreen: false
preventWriteToDisk: false
preventLocalSave: false
disableConfiguration: false
disableConfigurationForNonAdmin: false
allowConfigEdit: true
enableServiceWorker: false
disableContextMenu: false
disableUpdateChecks: false
disableSmartSort: false
enableErrorReporting: false

View File

@@ -0,0 +1,44 @@
---
- name: Deploy Dashy service
block:
- name: Set Dashy directories
ansible.builtin.set_fact:
dashy_service_dir: "{{ ansible_env.HOME }}/.services/dashy"
dashy_data_dir: "/mnt/services/dashy"
- name: Create Dashy directory
ansible.builtin.file:
path: "{{ dashy_service_dir }}"
state: directory
mode: "0755"
- name: Create Dashy data directory
ansible.builtin.file:
path: "{{ dashy_data_dir }}"
state: directory
mode: "0755"
- name: Deploy Dashy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ dashy_service_dir }}/docker-compose.yml"
mode: "0644"
register: dashy_compose
- name: Deploy Dashy config.yml
ansible.builtin.template:
src: conf.yml.j2
dest: "{{ dashy_data_dir }}/conf.yml"
mode: "0644"
register: dashy_config
- name: Stop Dashy service
ansible.builtin.command: docker compose -f "{{ dashy_service_dir }}/docker-compose.yml" down --remove-orphans
when: dashy_compose.changed
- name: Start Dashy service
ansible.builtin.command: docker compose -f "{{ dashy_service_dir }}/docker-compose.yml" up -d
when: dashy_compose.changed
tags:
- services
- dashy

View File

@@ -0,0 +1,21 @@
services:
dashy:
image: lissy93/dashy:latest
restart: unless-stopped
ports:
- 4000:8080
volumes:
- {{dashy_data_dir}}/:/app/user-data
networks:
- caddy_network
extra_hosts:
- host.docker.internal:host-gateway
deploy:
resources:
limits:
memory: 2G
networks:
caddy_network:
external: true
name: caddy_default

View File

@@ -11,7 +11,6 @@ services:
       - 6881:6881
       - 6881:6881/udp
       - 8085:8085 # Qbittorrent
-      - 7788:8080 # Sabnzbd
     devices:
       - /dev/net/tun:/dev/net/tun
     volumes:
@@ -24,6 +23,10 @@ services:
       - OPENVPN_PASSWORD={{ lookup('community.general.onepassword', 'Gluetun', vault='Dotfiles', field='OPENVPN_PASSWORD') }}
       - SERVER_COUNTRIES={{ lookup('community.general.onepassword', 'Gluetun', vault='Dotfiles', field='SERVER_COUNTRIES') }}
     restart: always
+    deploy:
+      resources:
+        limits:
+          memory: 512M
   sabnzbd:
     image: lscr.io/linuxserver/sabnzbd:latest
@@ -33,13 +36,14 @@ services:
       - TZ=Europe/Amsterdam
     volumes:
       - {{ downloaders_data_dir }}/sabnzbd-config:/config
-      - {{ object_storage_dir }}:/storage
-      - {{ local_data_dir }}:/local
+      - {{ local_data_dir }}:{{ local_data_dir }}
     restart: unless-stopped
-    network_mode: "service:gluetun"
-    depends_on:
-      gluetun:
-        condition: service_healthy
+    ports:
+      - 7788:8080
+    deploy:
+      resources:
+        limits:
+          memory: 1G
   qbittorrent:
     image: lscr.io/linuxserver/qbittorrent
@@ -51,12 +55,15 @@ services:
       - TZ=Europe/Amsterdam
     volumes:
       - {{ downloaders_data_dir }}/qbit-config:/config
-      - {{ object_storage_dir }}:/storage
-      - {{ local_data_dir }}:/local
+      - {{ local_data_dir }}:{{ local_data_dir }}
     depends_on:
       gluetun:
         condition: service_healthy
     restart: always
+    deploy:
+      resources:
+        limits:
+          memory: 1G
 networks:
   arr_stack_net:

View File

@@ -3,9 +3,8 @@
   block:
     - name: Set Downloaders directories
       ansible.builtin.set_fact:
-        object_storage_dir: "/mnt/object_storage"
         local_data_dir: "/mnt/data"
-        downloaders_service_dir: "{{ ansible_env.HOME }}/services/downloaders"
+        downloaders_service_dir: "{{ ansible_env.HOME }}/.services/downloaders"
         downloaders_data_dir: "/mnt/services/downloaders"
     - name: Create Downloaders directory

View File

@@ -4,13 +4,17 @@ services:
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock
     ports:
-      - 8686:8080
+      - 8800:8080
     environment:
       - DOZZLE_NO_ANALYTICS=true
     restart: unless-stopped
     networks:
       - dozzle-net
       - caddy_network
+    deploy:
+      resources:
+        limits:
+          memory: 256M
 networks:
   dozzle-net:

View File

@@ -3,7 +3,7 @@
   block:
     - name: Set Dozzle directories
       ansible.builtin.set_fact:
-        dozzle_service_dir: "{{ ansible_env.HOME }}/services/dozzle"
+        dozzle_service_dir: "{{ ansible_env.HOME }}/.services/dozzle"
         dozzle_data_dir: "/mnt/services/dozzle"
     - name: Create Dozzle directory

View File

@@ -16,6 +16,10 @@ services:
       -a /opt/echoip/GeoLite2-ASN.mmdb
       -c /opt/echoip/GeoLite2-City.mmdb
       -f /opt/echoip/GeoLite2-Country.mmdb
+    deploy:
+      resources:
+        limits:
+          memory: 128M
 networks:
   caddy_network:

View File

@@ -3,11 +3,13 @@
   block:
     - name: Set EchoIP directories
       ansible.builtin.set_fact:
-        echoip_service_dir: "{{ ansible_env.HOME }}/services/echoip"
+        echoip_service_dir: "{{ ansible_env.HOME }}/.services/echoip"
         echoip_data_dir: "/mnt/services/echoip"
-        maxmind_account_id: "{{ lookup('community.general.onepassword', 'MaxMind',
-          vault='Dotfiles', field='account_id') | regex_replace('\\s+', '') }}"
-        maxmind_license_key: "{{ lookup('community.general.onepassword', 'MaxMind',
-          vault='Dotfiles', field='license_key') | regex_replace('\\s+', '') }}"
+        maxmind_account_id:
+          "{{ lookup('community.general.onepassword', 'MaxMind',
+          vault='Dotfiles', field='account_id') | regex_replace('\\s+', '') }}"
+        maxmind_license_key:
+          "{{ lookup('community.general.onepassword', 'MaxMind',
+          vault='Dotfiles', field='license_key') | regex_replace('\\s+', '') }}"
     # Requires: gather_facts: true in playbook

View File

@@ -19,6 +19,10 @@ services:
     networks:
       - factorio
       - caddy_network
+    deploy:
+      resources:
+        limits:
+          memory: 2G
 networks:
   factorio:

View File

@@ -3,7 +3,7 @@
   block:
     - name: Set Factorio directories
       ansible.builtin.set_fact:
-        factorio_service_dir: "{{ ansible_env.HOME }}/services/factorio"
+        factorio_service_dir: "{{ ansible_env.HOME }}/.services/factorio"
         factorio_data_dir: "/mnt/services/factorio"
     - name: Create Factorio directory

View File

@@ -15,6 +15,10 @@ services:
     networks:
       - gitea
       - caddy_network
+    deploy:
+      resources:
+        limits:
+          memory: 1G
   postgres:
     image: postgres:15-alpine
@@ -29,6 +33,10 @@ services:
       - {{gitea_data_dir}}/postgres:/var/lib/postgresql/data
     networks:
       - gitea
+    deploy:
+      resources:
+        limits:
+          memory: 1G
   act_runner:
     image: gitea/act_runner:latest
@@ -46,6 +54,10 @@ services:
     restart: always
     networks:
       - gitea
+    deploy:
+      resources:
+        limits:
+          memory: 2G
 networks:
   gitea:

View File

@@ -4,7 +4,7 @@
     - name: Set Gitea directories
       ansible.builtin.set_fact:
         gitea_data_dir: "/mnt/services/gitea"
-        gitea_service_dir: "{{ ansible_env.HOME }}/services/gitea"
+        gitea_service_dir: "{{ ansible_env.HOME }}/.services/gitea"
     - name: Create Gitea directories
       ansible.builtin.file:

View File

@@ -8,3 +8,7 @@ services:
     volumes:
       - {{ golink_data_dir }}:/home/nonroot
     restart: "unless-stopped"
+    deploy:
+      resources:
+        limits:
+          memory: 256M

View File

@@ -4,7 +4,7 @@
     - name: Set GoLink directories
       ansible.builtin.set_fact:
         golink_data_dir: "/mnt/services/golink"
-        golink_service_dir: "{{ ansible_env.HOME }}/services/golink"
+        golink_service_dir: "{{ ansible_env.HOME }}/.services/golink"
     - name: Create GoLink directories
       ansible.builtin.file:

View File

@@ -15,3 +15,7 @@ services:
     network_mode: host
     devices:
       - /dev/ttyUSB0:/dev/ttyUSB0
+    deploy:
+      resources:
+        limits:
+          memory: 2G

View File

@@ -4,7 +4,7 @@
     - name: Set Home Assistant directories
       ansible.builtin.set_fact:
         homeassistant_data_dir: "/mnt/services/homeassistant"
-        homeassistant_service_dir: "{{ ansible_env.HOME }}/services/homeassistant"
+        homeassistant_service_dir: "{{ ansible_env.HOME }}/.services/homeassistant"
     - name: Create Home Assistant directories
       ansible.builtin.file:

View File

@@ -26,6 +26,8 @@ services:
     runtime: nvidia
     deploy:
       resources:
+        limits:
+          memory: 4G
         reservations:
           devices:
             - driver: nvidia
@@ -49,6 +51,8 @@ services:
     runtime: nvidia
     deploy:
       resources:
+        limits:
+          memory: 8G
         reservations:
           devices:
             - driver: nvidia
@@ -63,6 +67,10 @@ services:
     restart: unless-stopped
     networks:
       - immich
+    deploy:
+      resources:
+        limits:
+          memory: 1G
   database:
     container_name: immich_postgres
@@ -100,6 +108,10 @@ services:
     restart: unless-stopped
     networks:
       - immich
+    deploy:
+      resources:
+        limits:
+          memory: 2G
 volumes:
   model-cache:

View File

@@ -5,7 +5,7 @@
 ansible.builtin.set_fact:
 immich_data_dir: "/mnt/data/photos/immich-library"
 immich_database_dir: "/mnt/services/immich/postgres"
-immich_service_dir: "{{ ansible_env.HOME }}/services/immich"
+immich_service_dir: "{{ ansible_env.HOME }}/.services/immich"
 - name: Create Immich directories
 ansible.builtin.file:

View File

@@ -0,0 +1,15 @@
services:
necesse:
image: brammys/necesse-server
container_name: necesse
restart: unless-stopped
ports:
- "14159:14159/udp"
environment:
- MOTD=StarDebris' Server!
- PASSWORD=2142
- SLOTS=4
- PAUSE=1
volumes:
- {{ necesse_data_dir }}/saves:/necesse/saves
- {{ necesse_data_dir }}/logs:/necesse/logs
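Since this ships as a Jinja2 template, Ansible renders the `{{ necesse_data_dir }}` references before Docker ever parses the file; with the fact set in the tasks below, the volumes section comes out as:

volumes:
  - /mnt/services/necesse/saves:/necesse/saves
  - /mnt/services/necesse/logs:/necesse/logs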

View File

@@ -0,0 +1,41 @@
---
- name: Deploy Necesse service
block:
- name: Set Necesse directories
ansible.builtin.set_fact:
necesse_service_dir: "{{ ansible_env.HOME }}/.services/necesse"
necesse_data_dir: "/mnt/services/necesse"
- name: Create Necesse service directory
ansible.builtin.file:
path: "{{ necesse_service_dir }}"
state: directory
mode: "0755"
- name: Create Necesse data directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ necesse_data_dir }}"
- "{{ necesse_data_dir }}/saves"
- "{{ necesse_data_dir }}/logs"
- name: Deploy Necesse docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ necesse_service_dir }}/docker-compose.yml"
mode: "0644"
register: necesse_compose
- name: Stop Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" down --remove-orphans
when: necesse_compose.changed
- name: Start Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d
when: necesse_compose.changed
tags:
- services
- necesse
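The register/when pair above restarts the stack only when the rendered template changed. The same intent can be expressed with a handler so the restart logic is declared once; a sketch, assuming the role gains a `handlers/main.yml` (names here are illustrative, not from the source):

- name: Deploy Necesse docker-compose.yml
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ necesse_service_dir }}/docker-compose.yml"
    mode: "0644"
  notify: restart necesse

# handlers/main.yml (hypothetical)
- name: restart necesse
  ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d --force-recreate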

View File

@@ -25,6 +25,10 @@ services:
 - MYSQL_PASSWORD={{ lookup('community.general.onepassword', 'Nextcloud', vault='Dotfiles', field='MYSQL_NEXTCLOUD_PASSWORD') }}
 - MYSQL_HOST=nextclouddb
 - REDIS_HOST=redis
+deploy:
+  resources:
+    limits:
+      memory: 2G
 nextclouddb:
 image: mariadb:11.4.7
@@ -43,6 +47,10 @@ services:
 - MYSQL_PASSWORD={{ lookup('community.general.onepassword', 'Nextcloud', vault='Dotfiles', field='MYSQL_NEXTCLOUD_PASSWORD') }}
 - MYSQL_DATABASE=nextcloud
 - MYSQL_USER=nextcloud
+deploy:
+  resources:
+    limits:
+      memory: 1G
 redis:
 image: redis:alpine
@@ -51,6 +59,10 @@ services:
 - {{ nextcloud_data_dir }}/redis:/data
 networks:
 - nextcloud
+deploy:
+  resources:
+    limits:
+      memory: 512M
 networks:
 nextcloud:

View File

@@ -3,7 +3,7 @@
 block:
 - name: Set Nextcloud directories
 ansible.builtin.set_fact:
-nextcloud_service_dir: "{{ ansible_env.HOME }}/services/nextcloud"
+nextcloud_service_dir: "{{ ansible_env.HOME }}/.services/nextcloud"
 nextcloud_data_dir: "/mnt/services/nextcloud"
 - name: Create Nextcloud directory

View File

@@ -14,11 +14,14 @@ services:
 volumes:
 - {{ plex_data_dir }}/config:/config
 - {{ plex_data_dir }}/transcode:/transcode
-- {{ '/mnt/data/movies' }}:/movies
-- {{ '/mnt/data/tvshows' }}:/tvshows
-- {{ '/mnt/data/music' }}:/music
+- /mnt/data/movies:/movies
+- /mnt/data/tvshows:/tvshows
+- /mnt/object_storage/tvshows:/tvshows_slow
+- /mnt/data/music:/music
 deploy:
 resources:
+limits:
+  memory: 4G
 reservations:
 devices:
 - driver: nvidia

View File

@@ -4,7 +4,7 @@
 - name: Set Plex directories
 ansible.builtin.set_fact:
 plex_data_dir: "/mnt/services/plex"
-plex_service_dir: "{{ ansible_env.HOME }}/services/plex"
+plex_service_dir: "{{ ansible_env.HOME }}/.services/plex"
 - name: Create Plex directories
 ansible.builtin.file:

View File

@@ -22,6 +22,10 @@ services:
 start_period: 90s
 networks:
 - caddy_network
+deploy:
+  resources:
+    limits:
+      memory: 256M
 networks:
 caddy_network:

View File

@@ -4,7 +4,7 @@
 - name: Set PrivateBin directories
 ansible.builtin.set_fact:
 privatebin_data_dir: "/mnt/services/privatebin"
-privatebin_service_dir: "{{ ansible_env.HOME }}/services/privatebin"
+privatebin_service_dir: "{{ ansible_env.HOME }}/.services/privatebin"
 - name: Create PrivateBin directories
 ansible.builtin.file:

View File

@@ -0,0 +1,17 @@
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
ports:
- 6333:6333
- 6334:6334
expose:
- 6333
- 6334
- 6335
volumes:
- /mnt/services/qdrant:/qdrant/storage
deploy:
resources:
limits:
memory: 2G
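Since `ports` already publishes 6333/6334 to the host, the `expose` entries only advertise container ports to peers on the same Docker network; they are redundant here except for 6335, Qdrant's internal cluster port. A trimmed sketch of the same service, assuming a single-node (non-clustered) deployment:

services:
  qdrant:
    image: qdrant/qdrant:latest
    restart: always
    ports:
      - 6333:6333          # HTTP API
      - 6334:6334          # gRPC
    volumes:
      - /mnt/services/qdrant:/qdrant/storage
    deploy:
      resources:
        limits:
          memory: 2G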

View File

@@ -0,0 +1,32 @@
- name: Deploy Qdrant service
tags:
- services
- qdrant
block:
- name: Set Qdrant directories
ansible.builtin.set_fact:
qdrant_service_dir: "{{ ansible_env.HOME }}/.services/qdrant"
qdrant_data_dir: "/mnt/services/qdrant"
- name: Create Qdrant directory
ansible.builtin.file:
path: "{{ qdrant_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Qdrant docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ qdrant_service_dir }}/docker-compose.yml"
mode: "0644"
notify: restart_qdrant
- name: Stop Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
listen: restart_qdrant
- name: Start Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
changed_when: false
listen: restart_qdrant
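One caveat with the tasks above: in stock Ansible, `listen` is a handler-only keyword, so attaching it to ordinary tasks inside the block will not wire them to the `notify`. If the intent is restart-on-change, the stop/start pair likely belongs in the role's handlers; a sketch under that assumption (file location illustrative):

# roles/qdrant/handlers/main.yml (assumed location)
- name: Stop Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
  listen: restart_qdrant
- name: Start Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
  listen: restart_qdrant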

View File

@@ -5,7 +5,7 @@ services:
 ports:
 - "6379:6379"
 volumes:
-- /mnt/services/redis-data:/data
+- /mnt/services/redis:/data
 command: ["redis-server", "--appendonly", "yes", "--requirepass", "{{ REDIS_PASSWORD }}"]
 environment:
 - TZ=Europe/Amsterdam
@@ -17,6 +17,10 @@ services:
 start_period: 5s
 networks:
 - juicefs-network
+deploy:
+  resources:
+    limits:
+      memory: 256M
 networks:
 juicefs-network:

View File

@@ -3,7 +3,7 @@
 block:
 - name: Set Redis facts
 ansible.builtin.set_fact:
-redis_service_dir: "{{ ansible_env.HOME }}/services/juicefs-redis"
+redis_service_dir: "{{ ansible_env.HOME }}/.services/juicefs-redis"
 redis_password: "{{ lookup('community.general.onepassword', 'JuiceFS (Redis)', vault='Dotfiles', field='password') }}"
 - name: Create Redis service directory
@@ -34,6 +34,7 @@
 register: juicefs_stop
 changed_when: juicefs_stop.changed
 when: redis_compose.changed and juicefs_service_stat.stat.exists
+become: true
 - name: List containers that are running
 ansible.builtin.command: docker ps -q
@@ -68,6 +69,7 @@
 register: juicefs_start
 changed_when: juicefs_start.changed
 when: juicefs_service_stat.stat.exists
+become: true
 - name: Restart containers that were stopped
 ansible.builtin.command: docker start {{ item }}
@@ -76,5 +78,5 @@
 changed_when: docker_restart.rc == 0
 when: redis_compose.changed
 tags:
 - services
 - redis

View File

@@ -0,0 +1,53 @@
# Production Environment Variables
# Copy this to .env and fill in your values
# Database configuration (PostgreSQL)
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USER=sathub
DB_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DB_PASSWORD') }}
DB_NAME=sathub
# Required: JWT secret for token signing
JWT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='JWT_SECRET') }}
# Required: Two-factor authentication encryption key
TWO_FA_ENCRYPTION_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='TWO_FA_ENCRYPTION_KEY') }}
# Email configuration (required for password resets)
SMTP_HOST={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_HOST') }}
SMTP_PORT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PORT') }}
SMTP_USERNAME={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_USERNAME') }}
SMTP_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PASSWORD') }}
SMTP_FROM_EMAIL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_FROM_EMAIL') }}
# MinIO Object Storage configuration
MINIO_ROOT_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_ROOT_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# Basically the same as the above
MINIO_ACCESS_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_SECRET_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# GitHub credentials for Watchtower (auto-updates)
GITHUB_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
GITHUB_PAT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
REPO_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
REPO_PASS={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
# Optional: Override defaults if needed
# GIN_MODE=release (set automatically)
FRONTEND_URL=https://sathub.de
# CORS configuration (optional - additional allowed origins)
CORS_ALLOWED_ORIGINS=https://sathub.de,https://sathub.nl,https://api.sathub.de
# Frontend configuration (optional - defaults are provided)
VITE_API_BASE_URL=https://api.sathub.de
VITE_ALLOWED_HOSTS=sathub.de,sathub.nl
# Discord related messaging
DISCORD_CLIENT_ID={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_ID') }}
DISCORD_CLIENT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_SECRET') }}
DISCORD_REDIRECT_URI={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_REDIRECT_URL') }}
DISCORD_WEBHOOK_URL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_WEBHOOK_URL') }}

View File

@@ -0,0 +1,182 @@
services:
# Migration service - runs once on stack startup
migrate:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-migrate
restart: "no"
command: ["./main", "auto-migrate"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
networks:
- sathub
depends_on:
- postgres
backend:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-backend
restart: unless-stopped
command: ["./main", "api"]
environment:
- GIN_MODE=release
- FRONTEND_URL=${FRONTEND_URL:-https://sathub.de}
- CORS_ALLOWED_ORIGINS=${CORS_ALLOWED_ORIGINS:-https://sathub.de}
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# Security settings
- JWT_SECRET=${JWT_SECRET}
- TWO_FA_ENCRYPTION_KEY=${TWO_FA_ENCRYPTION_KEY}
# SMTP settings
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
- caddy_network
depends_on:
migrate:
condition: service_completed_successfully
worker:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-worker
restart: unless-stopped
command: ["./main", "worker"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# SMTP settings (needed for notifications)
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
depends_on:
migrate:
condition: service_completed_successfully
postgres:
image: postgres:15-alpine
container_name: sathub-postgres
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER:-sathub}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME:-sathub}
volumes:
- {{ sathub_data_dir }}/postgres_data:/var/lib/postgresql/data
networks:
- sathub
frontend:
image: ghcr.io/vleeuwenmenno/sathub-frontend/frontend:latest
container_name: sathub-frontend
restart: unless-stopped
environment:
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-https://api.sathub.de}
- VITE_ALLOWED_HOSTS=${VITE_ALLOWED_HOSTS:-sathub.de,sathub.nl}
networks:
- sathub
- caddy_network
minio:
image: minio/minio
container_name: sathub-minio
restart: unless-stopped
environment:
- MINIO_ROOT_USER=${MINIO_ROOT_USER}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
volumes:
- {{ sathub_data_dir }}/minio_data:/data
command: server /data --console-address :9001
networks:
- sathub
depends_on:
- postgres
watchtower:
image: containrrr/watchtower:latest
container_name: sathub-watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_STOPPED=false
- REPO_USER=${REPO_USER}
- REPO_PASS=${REPO_PASS}
command: --interval 30 --cleanup --include-stopped=false sathub-backend sathub-worker sathub-frontend
networks:
- sathub
networks:
sathub:
driver: bridge
# We assume you're running a Caddy instance in a separate compose file with this network
# If not, you can remove this network and the related depends_on in the services above
# But the stack is designed to run behind a Caddy reverse proxy for SSL termination and routing
caddy_network:
external: true
name: caddy_default
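Watchtower is pinned here to three container names on the command line. Its label filter achieves the same scoping and survives container renames; a sketch using Watchtower's documented label mechanism (an alternative, not what this stack currently does):

  watchtower:
    image: containrrr/watchtower:latest
    environment:
      - WATCHTOWER_LABEL_ENABLE=true            # only touch opted-in containers
  backend:
    labels:
      - com.centurylinklabs.watchtower.enable=true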

View File

@@ -0,0 +1,50 @@
---
- name: Deploy SatHub service
block:
- name: Set SatHub directories
ansible.builtin.set_fact:
sathub_service_dir: "{{ ansible_env.HOME }}/.services/sathub"
sathub_data_dir: "/mnt/services/sathub"
- name: Set SatHub frontend configuration
ansible.builtin.set_fact:
frontend_api_base_url: "https://api.sathub.de"
frontend_allowed_hosts: "sathub.de,sathub.nl"
cors_allowed_origins: "https://sathub.nl,https://api.sathub.de,https://obj.sathub.de"
- name: Create SatHub directory
ansible.builtin.file:
path: "{{ sathub_service_dir }}"
state: directory
mode: "0755"
- name: Create SatHub data directory
ansible.builtin.file:
path: "{{ sathub_data_dir }}"
state: directory
mode: "0755"
- name: Deploy SatHub .env
ansible.builtin.template:
src: .env.j2
dest: "{{ sathub_service_dir }}/.env"
mode: "0644"
register: sathub_env
- name: Deploy SatHub docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ sathub_service_dir }}/docker-compose.yml"
mode: "0644"
register: sathub_compose
- name: Stop SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" down --remove-orphans
when: sathub_compose.changed or sathub_env.changed
- name: Start SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" up -d
when: sathub_compose.changed or sathub_env.changed
tags:
- services
- sathub
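One hardening note: the rendered `.env` carries database, SMTP, MinIO, and Discord secrets but is written with mode 0644. A sketch of the same step with owner-only permissions and muted output (a suggestion, not from the source):

- name: Deploy SatHub .env
  ansible.builtin.template:
    src: .env.j2
    dest: "{{ sathub_service_dir }}/.env"
    mode: "0600"   # owner-only; the file contains credentials
  no_log: true     # keep rendered secrets out of Ansible logs
  register: sathub_env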

View File

@@ -7,7 +7,7 @@
 - name: Check service directories existence for disabled services
 ansible.builtin.stat:
-path: "{{ ansible_env.HOME }}/services/{{ item.name }}"
+path: "{{ ansible_env.HOME }}/.services/{{ item.name }}"
 register: service_dir_results
 loop: "{{ services_to_cleanup }}"
 loop_control:
@@ -19,14 +19,14 @@
 - name: Check if docker-compose file exists for services to cleanup
 ansible.builtin.stat:
-path: "{{ ansible_env.HOME }}/services/{{ item.name }}/docker-compose.yml"
+path: "{{ ansible_env.HOME }}/.services/{{ item.name }}/docker-compose.yml"
 register: compose_file_results
 loop: "{{ services_with_dirs }}"
 loop_control:
 label: "{{ item.name }}"
 - name: Stop disabled services with docker-compose files
-ansible.builtin.command: docker compose -f "{{ ansible_env.HOME }}/services/{{ item.item.name }}/docker-compose.yml" down --remove-orphans
+ansible.builtin.command: docker compose -f "{{ ansible_env.HOME }}/.services/{{ item.item.name }}/docker-compose.yml" down --remove-orphans
 loop: "{{ compose_file_results.results | selectattr('stat.exists', 'equalto', true) }}"
 loop_control:
 label: "{{ item.item.name }}"
@@ -36,7 +36,7 @@
 - name: Remove service directories for disabled services
 ansible.builtin.file:
-path: "{{ ansible_env.HOME }}/services/{{ item.name }}"
+path: "{{ ansible_env.HOME }}/.services/{{ item.name }}"
 state: absent
 loop: "{{ services_with_dirs }}"
 loop_control:

View File

@@ -30,6 +30,10 @@ services:
 - {{ stash_config_dir }}/generated:/generated
 networks:
 - caddy_network
+deploy:
+  resources:
+    limits:
+      memory: 2G
 networks:
 caddy_network:

View File

@@ -14,6 +14,10 @@ services:
 restart: unless-stopped
 networks:
 - caddy_network
+deploy:
+  resources:
+    limits:
+      memory: 512M
 networks:
 caddy_network:

View File

@@ -4,7 +4,7 @@
 - name: Set Tautulli directories
 ansible.builtin.set_fact:
 tautulli_data_dir: "{{ '/mnt/services/tautulli' }}"
-tautulli_service_dir: "{{ ansible_env.HOME }}/services/tautulli"
+tautulli_service_dir: "{{ ansible_env.HOME }}/.services/tautulli"
 - name: Create Tautulli directories
 ansible.builtin.file:

View File

@@ -29,6 +29,10 @@ services:
 - caddy_network
 sysctls:
 - net.ipv6.conf.all.disable_ipv6=1
+deploy:
+  resources:
+    limits:
+      memory: 1G
 unifi-db:
 image: mongo:6.0
@@ -48,6 +52,10 @@ services:
 - unifi-network
 sysctls:
 - net.ipv6.conf.all.disable_ipv6=1
+deploy:
+  resources:
+    limits:
+      memory: 1G
 networks:
 unifi-network:

View File

@@ -4,7 +4,7 @@
 - name: Set Unifi Network App directories
 ansible.builtin.set_fact:
 unifi_network_application_data_dir: "/mnt/services/unifi_network_application"
-unifi_network_application_service_dir: "{{ ansible_env.HOME }}/services/unifi_network_application"
+unifi_network_application_service_dir: "{{ ansible_env.HOME }}/.services/unifi_network_application"
 - name: Create Unifi Network App directories
 ansible.builtin.file:

View File

@@ -17,3 +17,7 @@ services:
 sysctls:
 - net.ipv4.conf.all.src_valid_mark=1
 restart: unless-stopped
+deploy:
+  resources:
+    limits:
+      memory: 512M

View File

@@ -3,7 +3,7 @@
 block:
 - name: Set WireGuard directories
 ansible.builtin.set_fact:
-wireguard_service_dir: "{{ ansible_env.HOME }}/services/wireguard"
+wireguard_service_dir: "{{ ansible_env.HOME }}/.services/wireguard"
 wireguard_data_dir: "/mnt/services/wireguard"
 - name: Create WireGuard directory

View File

@@ -0,0 +1,51 @@
---
- name: Process 1Password custom allowed browsers
block:
- name: Check if 1Password is installed
ansible.builtin.command: 1password --version
register: onepassword_check
changed_when: false
failed_when: false
- name: Check if 1Password is running anywhere
ansible.builtin.command: pgrep 1password
register: onepassword_running
changed_when: false
failed_when: false
- name: Ensure 1Password custom allowed browsers directory exists
ansible.builtin.file:
path: /etc/1password
state: directory
mode: "0755"
become: true
- name: Add Browsers to 1Password custom allowed browsers
ansible.builtin.copy:
content: |
ZenBrowser
zen-browser
app.zen_browser.zen
zen
Firefox
firefox
opera
zen-x86_64
dest: /etc/1password/custom_allowed_browsers
owner: root
group: root
mode: "0755"
become: true
register: custom_browsers_file
- name: Kill any running 1Password instances if configuration changed
ansible.builtin.command: pkill 1password
when: custom_browsers_file.changed and onepassword_running.stdout != ""
changed_when: custom_browsers_file.changed and onepassword_running.stdout != ""
- name: If 1Password was killed, restart it...
ansible.builtin.command: screen -dmS 1password 1password
when: custom_browsers_file.changed and onepassword_running.stdout != ""
changed_when: custom_browsers_file.changed and onepassword_running.stdout != ""
tags:
- custom_allowed_browsers

View File

@@ -31,11 +31,6 @@
 - name: Define system desired Flatpaks
 ansible.builtin.set_fact:
 desired_system_flatpaks:
-# GNOME Software
-- "{{ 'org.gnome.Extensions' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-- "{{ 'org.gnome.Weather' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-- "{{ 'org.gnome.Sudoku' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
 # Games
 - io.github.openhv.OpenHV
 - info.beyondallreason.bar
@@ -46,14 +41,20 @@
 # Multimedia
 - com.plexamp.Plexamp
 - tv.plex.PlexDesktop
-- com.spotify.Client
 # Messaging
+- com.rtosta.zapzap
 - org.telegram.desktop
 - org.signal.Signal
-- com.rtosta.zapzap
+- com.discordapp.Discord
+- io.github.equicord.equibop
+# 3D Printing
+- com.bambulab.BambuStudio
+- io.mango3d.LycheeSlicer
 # Utilities
+- com.fastmail.Fastmail
 - com.ranfdev.DistroShelf
 - io.missioncenter.MissionCenter
 - io.gitlab.elescoute.spacelaunch
@@ -61,7 +62,6 @@
 - com.usebottles.bottles
 - com.github.tchx84.Flatseal
 - com.github.wwmm.easyeffects
-- org.onlyoffice.desktopeditors
 - io.gitlab.adhami3310.Impression
 - io.ente.auth
 - io.github.fastrizwaan.WineZGUI
@@ -74,6 +74,8 @@
 - io.github.flattool.Ignition
 - io.github.bytezz.IPLookup
 - org.gaphor.Gaphor
+- io.dbeaver.DBeaverCommunity
+- com.jetpackduba.Gitnuro
 - name: Define system desired Flatpak remotes
 ansible.builtin.set_fact:
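The diff is truncated before the task that consumes these lists, but a list like `desired_system_flatpaks` is typically fed to the `community.general.flatpak` module; a minimal sketch of how it could be applied:

- name: Ensure desired system Flatpaks are installed
  community.general.flatpak:
    name: "{{ desired_system_flatpaks }}"
    state: present
    method: system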

Some files were not shown because too many files have changed in this diff.