Compare commits

...

180 Commits

Author SHA1 Message Date
fd6e7d7a86 Update flake.lock
2025-10-30 16:22:07 +01:00
b23536ecc7 chore: adds discord and gitnuro flatpaks
2025-10-30 16:22:03 +01:00
14e9c8d51c chore: remove old stuff
2025-10-30 16:21:17 +01:00
c1c98fa007 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-10-28 08:36:44 +01:00
9c6e6fdf47 Add Vicinae installation and assets Ansible task
Include Vicinae setup in workstation playbook for non-WSL2 systems

Update flake.lock to newer nixpkgs revision
2025-10-28 08:36:26 +01:00
a11376fe96 Add monitoring countries to allowed_countries_codes list
2025-10-26 00:24:17 +00:00
e14dd1d224 Add EU and trusted country lists for Caddy access control
Define separate lists for EU and trusted countries in group vars. Update
Caddyfile template to support EU, trusted, and combined allow lists.
Switch Sathub domains to use combined country allow list.
2025-10-26 00:21:27 +00:00
5353981555 Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
2025-10-26 00:09:31 +00:00
f9ce652dfc flake lock
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-26 00:09:15 +00:00
fe9dbca2db Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
2025-10-26 02:08:31 +02:00
987166420a Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
2025-10-26 00:06:13 +00:00
8ba47c2ebf Fix indentation in server.yml and add necesse service
Add become: true to JuiceFS stop/start tasks in redis.yml
2025-10-26 00:04:51 +00:00
8bfd8395f5 Add Discord environment variables and update data volumes paths 2025-10-26 00:04:41 +00:00
f0b15f77a1 Update nixpkgs input to latest commit 2025-10-26 00:04:19 +00:00
461d251356 Add Ansible role to deploy Necesse server with Docker 2025-10-26 00:04:14 +00:00
e57e9ee67c chore: update country allow list and add European allow option 2025-10-26 02:02:46 +02:00
f67b16f593 update flake lock 2025-10-26 02:02:28 +02:00
5edd7c413e Update bash.nix to improve WSL Windows alias handling 2025-10-26 02:02:21 +02:00
cfc1188b5f Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-10-23 13:43:38 +02:00
e2701dcdf4 Set executable permission for equibop.desktop and update bash.nix
Add BUN_INSTALL env var and include Bun bin in PATH
2025-10-23 13:43:26 +02:00
11af7f16e5 Set formatter to prettier and update format_on_save option 2025-10-23 13:38:16 +02:00
310fb92ec9 Add WSL aliases for Windows SSH and Zed
2025-10-23 04:20:15 +02:00
fb1661386b chore: add Bun install path and prepend to PATH
2025-10-22 17:57:12 +02:00
e1b07a6edf Add WSL support and fix config formatting
2025-10-22 16:18:08 +02:00
f6a3f6d379 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-21 10:06:20 +02:00
77424506d6 Update Nextcloud config and flake.lock dependencies
2025-10-20 11:27:21 +02:00
1856b2fb9e adds fastmail app as flatpak 2025-10-20 11:27:00 +02:00
2173e37c0a refactor: update configuration for mennos-server and adjust related tasks
2025-10-16 14:53:32 +02:00
ba2faf114d chore: update sathub config
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 15:04:46 +02:00
22b308803c fixes
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
2dfde555dd sathub fixes
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
436deb267e Add smart alias configuration for rtlsdr 2025-10-08 13:01:37 +02:00
e490405dc5 Update mennos-rtlsdr-pc home configuration to enable service
2025-10-08 12:54:34 +02:00
1485f6c430 Add home configuration for mennos-rtlsdr-pc
2025-10-08 12:38:12 +02:00
4c83707a03 Update Ansible inventory and playbook for new workstation; modify Git configuration for rebase settings
2025-10-08 12:37:59 +02:00
f9f37f5819 Update flatpaks.yml
2025-09-30 12:02:26 +02:00
44c4521cbe Remove unnecessary blank line before sathub.nl configuration in Caddyfile
2025-09-29 02:53:35 +02:00
6c37372bc0 Remove unused obj.sathub.de configuration and caddy_network from MinIO service in Docker Compose
2025-09-29 02:40:25 +02:00
3a22417315 Add CORS configuration to SatHub service for improved API access
2025-09-29 01:29:55 +02:00
95bc4540db Add SatHub service deployment with Docker Compose and configuration
2025-09-29 01:21:41 +02:00
902d797480 Refactor Cloudreve restart logic and update configs
- Refactor Cloudreve tasks to use conditional restart
- Remove unused displayData from Dashy config
- Add NVM and Japanese input setup to bash.nix
2025-09-25 22:33:57 +02:00
e494369d11 Refactor formatting in update.py for improved readability
2025-09-24 18:40:25 +02:00
78f3133a1d Fix formatting in Python workflow and update .gitignore to include Ansible files
2025-09-24 18:35:53 +02:00
d28c0fce66 Refactor shell aliases to move folder navigation aliases to the utility section
2025-09-24 18:32:05 +02:00
c6449affcc Rename zed.jsonc.j2 to zed.jsonc and fix trailing commas
2025-09-24 16:12:34 +02:00
d33f367c5f Move Zed config to Ansible template with 1Password secrets
2025-09-24 16:10:44 +02:00
e5723e0964 Update zed.jsonc
2025-09-24 16:04:45 +02:00
0bc609760c change zed settings to use jsonc
2025-09-24 13:36:10 +02:00
edd8e90fec Add JetBrains Toolbox autostart and update Zed config
2025-09-24 13:24:43 +02:00
ee0c73f6de chore: add ssh config
2025-09-24 11:55:46 +02:00
60dd31fd1c Add --system flag to update system packages in update.py
2025-09-23 17:26:44 +02:00
cc917eb375 Refactor bash config and env vars, set Zed as git editor
- Move environment variable exports from sessionVariables to bashrc
- Add more robust sourcing of .profile and .bashrc.local
- Improve SSH_AUTH_SOCK override logic for 1Password
- Remove redundant path and pyenv logic from profileExtra
- Set git core.editor to "zed" instead of "nvim"
- Add DOTFILES_PATH to global session variables
2025-09-23 17:13:24 +02:00
df0775f3b2 Update symlinks.yml
2025-09-23 16:39:31 +02:00
5f312d3128 wtf 2025-09-23 16:36:08 +02:00
497fca49d9 linting
2025-09-23 14:29:47 +00:00
e3ea18c9da updated file
2025-09-23 16:20:57 +02:00
6fcabcd1f3 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-09-23 16:16:09 +02:00
3e25210f4c remove stash, add bazarr, add cloudreve 2025-09-23 16:13:09 +02:00
5ff84a4c0d Remove GNOME extension management from workstation setup
2025-09-23 14:09:30 +00:00
29a439d095 Add isServer option and conditionally enable Git signing
2025-09-23 14:07:10 +00:00
cfb80bd819 linting 2025-09-23 14:06:26 +00:00
8971d087a3 Remove secrets and auto-start actions and update imports
2025-09-23 13:59:48 +00:00
40063cfe6b Refactor for consistent string quoting and formatting
2025-09-23 13:53:29 +00:00
2e5a06e9d5 Remove mennos-vm from inventory and playbook tasks
2025-09-23 13:51:42 +00:00
80ea4cd51b Remove VSCode config and update Zed symlink and settings
- Delete VSCode settings and argv files
- Rename Zed settings file and update symlink destination
- Add new Zed context servers and projects
- Change icon and theme settings for Zed
- Add .gitkeep to autostart directory
2025-09-23 13:39:09 +00:00
c659c599f4 fixed formatting
2025-09-23 13:35:37 +00:00
54fc080ef2 Remove debug tasks from global.yml and update git signing config
2025-09-23 13:32:48 +00:00
3d5ae84a25 Add SSH insteadOf rule for git.mvl.sh
2025-09-23 13:21:16 +00:00
dd3753fab4 refactor 2025-09-23 13:20:00 +00:00
a04a4abef6 chore: replace prusaslicer with the bambulab slicer since it supports the same printers and works better
2025-09-10 12:15:06 +02:00
fd5cb7f163 feat: add 3D printing applications to desired Flatpaks 2025-09-10 12:02:41 +02:00
2e5d7d39ef chore: move scrcpy package to Home Manager
2025-09-09 15:51:49 +02:00
422509eecc Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-09-09 10:41:37 +02:00
c79142e117 Update editor settings and add new Zed projects 2025-09-09 10:41:03 +02:00
2834c1c34e Change VSCode theme to Catppuccin Latte and add new commands
2025-09-04 14:10:21 +02:00
fe73569e0b Add Tdarr and Weather sections to Dashy config
2025-09-04 14:10:00 +02:00
08d233cae5 Add object storage volume for slow TV shows in Plex config
2025-09-04 14:09:43 +02:00
91c11b0283 Update flake.lock for home-manager and nixpkgs revisions
2025-09-04 14:09:33 +02:00
50b0844db8 Move Sabnzbd to its own network and expose port 7788
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-09-02 11:07:54 +02:00
ad8cb0702d fix: increase memory limit to 2G for arr-stack services
2025-08-31 01:43:00 +02:00
216d215663 fix: set dashy default to sametab and add extra hosts for status
resolving of local services and add comfyui to dashy
2025-08-31 01:42:22 +02:00
707a3c0cb7 Add News section and update DiscountOffice config in Dashy
Remove config change trigger from Dashy restart tasks
2025-08-29 19:21:30 +02:00
d82a7247cd fix: adds dashy config as managed config in ansible
2025-08-29 17:06:51 +02:00
0b7e727fc9 Switch default model provider to copilot_chat
2025-08-29 17:04:13 +02:00
a15d382c8e feat: adds dashy as docker service
2025-08-29 16:56:41 +02:00
79425af4b0 fix: add /usr/bin/brave-browser as chrome exec for flutter
2025-08-27 16:26:23 +02:00
5ebb22182d re-enable Zen browser
2025-08-27 14:54:38 +02:00
00cff8ba6a Add dev tools in Ansible and update Zed model config
- Split Ubuntu/Debian package installation with apt conditional
- Include clang, cmake, ninja-build and other development packages
- Switch Zed default AI model to Google Gemini 2.5 via OpenRouter
2025-08-27 14:09:13 +02:00
34bbe5fcf6 remove rustc from nix, we should install this using curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
2025-08-27 14:08:37 +02:00
7ada9c7fc4 remove useless comments vscode settings
2025-08-27 14:08:15 +02:00
d52671ede7 disable zed telemetry 2025-08-27 14:08:01 +02:00
d9cbe590c5 Add execute permission to List.desktop
2025-08-27 13:55:45 +02:00
46a9f3e99b adds mennos-laptop as host, adds nextcloud, adds nil and nixd for zed
language servers to work. updates setup to support 25.04
2025-08-27 13:55:31 +02:00
2caea9b483 Update flake.lock dependencies
2025-08-27 11:31:14 +02:00
7211afd592 Configure PHP and inlay hints in Zed editor
Enable inlay hints and add PHP language server config with license key
placeholder
2025-08-27 11:31:03 +02:00
716f6e4e0a fix: update icon theme in Zed settings to Catppuccin Macchiato
2025-08-24 03:02:25 +02:00
df62070722 fix: correct zed alias assignment in .bashrc
2025-08-24 02:54:08 +02:00
f1ca2ad1ba refactor: remove GPU related environment variable settings for mennos-desktop
2025-08-24 02:50:39 +02:00
37174d7ccc refactor: update inventory and configuration for desktop systems, replacing 'mennos-cachyos-desktop' with 'mennos-desktop'
2025-08-24 02:44:45 +02:00
134eeb03cb fix: update permissions for io.github.mrvladus.List.desktop and remove Remmina Applet autostart entry
2025-08-23 04:47:11 +02:00
8545837b50 fix: update flake.lock with latest revisions and hashes for home-manager and nixpkgs
2025-08-23 04:39:23 +02:00
b5227230c0 feat: add memory limits to Docker services in various configurations 2025-08-23 04:39:17 +02:00
9c85d2eea6 feat: update Nextcloud client version and add Remmina Applet autostart entry
2025-08-23 04:30:06 +02:00
34999bdb19 fix: update reference path for Work VPN configuration in secrets.nix
2025-08-15 16:00:04 +02:00
c95b6520a5 feat: update SillyTavern startup command to use direct execution 2025-08-15 15:59:56 +02:00
d42efd6a66 feat: add Equibop Flatpak to desired system Flatpaks list 2025-08-15 15:59:48 +02:00
5edee32509 feat: add llm command for managing sillytavern and koboldcpp
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-08-13 13:26:27 +02:00
bbaa297c6b Update nixpkgs versions and add Kilo Code settings
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-08-13 13:25:51 +02:00
e274ae7ae1 Update smart-ssh config and improve logging output
2025-07-31 13:42:13 +02:00
6eb58e2c87 feat: add PKG_CONFIG_PATH for pkg-config and new Music folder sync in Nextcloud config
2025-07-31 03:05:07 +02:00
423af55031 fix: updated avorion to latest (2.5.9)
2025-07-30 11:28:44 +02:00
cfe96bc3f9 Add Qdrant service configuration
2025-07-29 16:11:33 +02:00
72aa7a4647 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-29 16:10:23 +02:00
58326c7f07 Add Qdrant service deployment to server config 2025-07-29 16:10:14 +02:00
cbbe7b21d8 Add Nextcloud-compatible task management apps
Install List and Iotas Flatpaks with autostart config for List
2025-07-29 16:10:03 +02:00
7d01d476b1 Add running state indicator for systemd timers
2025-07-28 23:16:54 +02:00
76c2586a21 Add Borg local sync system service and configuration
2025-07-28 23:15:49 +02:00
63bd5ace82 Add Telegram notifications for Borg backup status
2025-07-28 22:53:56 +02:00
4018399fd4 feat: adds borg, timers and systemd service support 2025-07-27 02:13:33 +02:00
47221e5803 Add Avorion game server configuration 2025-07-27 01:33:41 +02:00
564e45e099 feat: added an ssh utility that supports smart-aliases and background ssh tunnels
2025-07-25 15:37:55 +02:00
f0bf6bc8aa wip
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-25 14:54:29 +02:00
b72f42ec5d Install Borg backup package on servers
2025-07-25 13:45:00 +02:00
21ea904169 Add Nextcloud config and ZapZap autostart
2025-07-25 11:25:30 +02:00
4d0ff87ece Add Opera to 1Password allowed browsers
2025-07-23 16:28:49 +02:00
ef48cd2691 Port inuse function from bash to Go
2025-07-23 16:28:38 +02:00
5bb3f5eee7 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-23 14:44:09 +02:00
37743d3512 Add Zed config and clean up aliases 2025-07-23 14:43:56 +02:00
2b1c714375 updated utils.yml to work with latest ansible
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:43:05 +02:00
d31d07e0a0 fix: clean & reformat gitconfig files
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:30:18 +02:00
dd1b961af0 fix: set default ssh sock based on what is available instead of forcing 1password locally
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:29:49 +02:00
c8444de0d5 fix: move ~/services to ~/.services
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-23 14:23:03 +02:00
d6600630bc Remove cloud server configuration files and references and add dynamic DNS
2025-07-22 23:26:31 +02:00
43cc186134 Fix incorrect Finland country code and update Home Assistant domain
2025-07-22 22:09:07 +02:00
4242e037b0 Remove redundant X-Forwarded headers and redirect domains
2025-07-22 21:53:22 +02:00
506e568021 Add SG, AT and CH to allowed countries list
2025-07-22 21:53:07 +02:00
97d616b7ed Cleanup
2025-07-22 21:33:47 +02:00
9de6098001 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-22 19:23:41 +02:00
faebace545 refactor: migrate arr-stack to mennos-cachyos-desktop 2025-07-22 19:23:40 +02:00
03fd20cdac feat: update allowed countries 2025-07-22 19:23:25 +02:00
b11c9a32a8 feat: add pushall alias to git configurations and update jetbrains toolbox paths
2025-07-22 14:25:53 +02:00
4a1575594a refactor: swap gitea and uptime-kuma service configurations and update paths
2025-07-21 17:32:29 +02:00
2a8dad2e20 refactor: update service configurations and remove karakeep service
2025-07-21 16:43:00 +02:00
9a2952e192 feat: migrated immich to mennos-cachyos-desktop
2025-07-21 16:24:20 +02:00
21dc2ef11c fix: unifi captive portal SSL fixes
refactor: move photos to cachyos-desktop
2025-07-21 16:12:08 +02:00
f3738cba70 refactor: remove unnecessary game entries from Flatpak configuration
2025-07-20 20:33:49 +02:00
4c1e2842f7 feat: add scrcpy to ansible for Android screen mirroring
2025-07-20 20:19:06 +02:00
f59767597b feat: update Unifi Network Application configuration and add new reverse proxy settings
2025-07-20 19:14:51 +02:00
546e888bdb refactor: update service configurations and remove deprecated files
2025-07-20 03:29:05 +02:00
a9925a63cc feat: update .bashrc to include spicetify and pyenv in PATH
2025-07-20 01:03:06 +02:00
d3e9171c5a fix: update permissions for Nextcloud.desktop to make it executable
2025-07-19 03:27:07 +02:00
488c88604f Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-19 03:11:06 +02:00
d4ef158d09 fix: update IdentityAgent path in SSH config to use 1password agent 2025-07-19 03:11:02 +02:00
10374bc2e6 feat: adds nextcloud and plex
fix: caddy stuff
2025-07-19 03:08:16 +02:00
04a0f759c2 feat: add tradaware PEM configuration to secrets
2025-07-18 15:57:30 +02:00
b78cae3c58 fix: conditionally add .spicetify to PATH and comment out Zen browser tasks
2025-07-18 15:51:10 +02:00
2c3159729b fix: add mennos-cachyos-laptop host to configurations
2025-07-18 14:30:59 +02:00
085d037f77 cachyos compatibility
2025-07-18 10:13:33 +02:00
fe80046042 feat: add alias for winget in WSL2 environment
2025-07-16 16:01:35 +02:00
a2a639ec56 feat: add safe directory configuration to gitconfig 2025-07-16 16:01:30 +02:00
b4ff9c95fc fix: ensure correct ownership for .1password directory and .agent-bridge.sh script
2025-07-16 13:27:40 +02:00
c56cf48be9 feat: add NVIDIA runtime configuration for Jellyfin service on specific host
2025-07-16 01:59:51 +00:00
f38c25df9f refactor: update service task loops and Docker network names for consistency; adjust volume paths for Jellyfin service
2025-07-16 01:44:13 +00:00
1219c54bf4 feat: add PBinCLI installation and configuration tasks; update EchoIP service to check for updates and clean up files
2025-07-16 00:34:26 +00:00
9b6aa03872 feat: add PrivateBin service deployment with Docker and configuration files
2025-07-16 01:58:49 +02:00
7dba151053 feat: add WSL2 1Password SSH Agent bridge setup and update README with instructions
2025-07-16 01:40:38 +02:00
c8024b36b4 Merge branch 'master' of https://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-16 01:27:30 +02:00
30715d7326 fix: update gitconfig symlink for mennos-desktop to point to WSL configuration and add tag for ansible task 2025-07-16 01:27:28 +02:00
1444cf0c6c refactor: consolidate workstation symlink definitions and ensure parent directories exist
2025-07-15 23:23:51 +00:00
6dbe114d83 feat: add pwfeedback to sudoers for an improved password entry experience
2025-07-15 23:17:27 +00:00
c6b685c5f8 fix: remove Rust tasks and related configuration from global Ansible setup
2025-07-15 23:11:43 +00:00
482d62dde4 feat: add initial configuration for mennos-desktop
2025-07-15 22:52:33 +00:00
0215fc18a0 fix: update hostname from 'mennos-gamingpc' to 'mennos-desktop' in flake and package configurations
2025-07-15 22:51:42 +00:00
431a6620bc fix: let ssh config handle ForwardAgent and AddKeysToAgent instead of forcing specific sockets or paths!
2025-07-15 22:49:51 +00:00
ae0c692ff8 refactor: clean up SSH configuration and remove unused secrets
2025-07-15 22:42:30 +00:00
663c4ba003 fix: update PGP ssh configs
2025-07-16 00:34:09 +02:00
bc675522d4 fix: sort block to same as .wsl version
2025-07-15 23:52:45 +02:00
7bcb9c24f4 Merge branch 'master' of https://git.mvl.sh/vleeuwenmenno/dotfiles
2025-07-15 23:50:53 +02:00
79add50d40 fix: remove redundant block 2025-07-15 23:50:44 +02:00
94998bec48 feat: adds opnix, mennos-laptop-w as host and cleans up secrets
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-07-15 23:50:26 +02:00
251 changed files with 9984 additions and 4161 deletions

.bashrc

@@ -1,186 +0,0 @@
# HISTFILE Configuration (Bash equivalent)
HISTFILE=~/.bash_history
HISTSIZE=1000
HISTFILESIZE=2000 # Adjusted to match both histfile and size criteria
# GPU Related shenanigans
if [ "$(hostname)" = "mennos-desktop" ]; then
export DRI_PRIME=1
export MESA_VK_DEVICE_SELECT=1002:744c
fi
if [ -f /etc/os-release ]; then
distro=$(awk -F= '/^NAME/{print $ssss2}' /etc/os-release | tr -d '"')
if [[ "$distro" == *"Pop!_OS"* ]]; then
export CGO_CFLAGS="-I/usr/include"
fi
fi
# Docker Compose Alias (Mostly for old shell scripts)
alias docker-compose='docker compose'
# tatool aliases
alias tls='tatool ls -g'
alias tps='tls'
alias ti='tatool doctor'
alias td='tatool doctor'
alias tr='tatool restart'
alias tsrc='tatool source'
# Modern tools aliases
alias l="eza --header --long --git --group-directories-first --group --icons --color=always --sort=name --hyperlink -o --no-permissions"
alias ll='l'
alias la='l -a'
alias cat='bat'
alias du='dust'
alias df='duf'
alias augp='sudo apt update && sudo apt upgrade -y && sudo apt autopurge -y && sudo apt autoclean'
# Docker Aliases
alias d='docker'
alias dc='docker compose'
alias dce='docker compose exec'
alias dcl='docker compose logs'
alias dcd='docker compose down'
alias dcu='docker compose up'
alias dcp='docker compose ps'
alias dcps='docker compose ps'
alias dcpr='dcp && dcd && dcu -d && dcl -f'
alias dcr='dcd && dcu -d && dcl -f'
alias ddpul='docker compose down && docker compose pull && docker compose up -d && docker compose logs -f'
alias docker-nuke='docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker system prune --all --volumes --force && docker volume prune --force'
# Git aliases
alias g='git'
alias gg='git pull'
alias gl='git log --stat'
alias gp='git push'
alias gs='git status -s'
alias gst='git status'
alias ga='git add'
alias gc='git commit'
alias gcm='git commit -m'
alias gco='git checkout'
alias gcb='git checkout -b'
# Kubernetes aliases (Minikube)
alias kubectl="minikube kubectl --"
# netstat port in use check
alias port='netstat -atupn | grep LISTEN'
# random string (Syntax: random <length>)
alias random='openssl rand -base64'
# Alias for ls to l but only if it's an interactive shell because we don't want to override ls in scripts which could blow up in our face
if [ -t 1 ]; then
    alias ls='l'
fi
# PATH Manipulation
export DOTFILES_PATH=$HOME/.dotfiles
export PATH=$PATH:$HOME/.local/bin
export PATH=$PATH:$HOME/.cargo/bin
export PATH=$PATH:$DOTFILES_PATH/bin
# Include pnpm if it exists
if [ -d "$HOME/.local/share/pnpm" ]; then
export PATH=$PATH:$HOME/.local/share/pnpm
fi
# Miniconda
export PATH="$HOME/miniconda3/bin:$PATH"
# In case $HOME/.flutter/flutter/bin is found, we can add it to the PATH
if [ -d "$HOME/.flutter/flutter/bin" ]; then
export PATH=$PATH:$HOME/.flutter/flutter/bin
export PATH="$PATH":"$HOME/.pub-cache/bin"
# Flutter linux fixes:
export CPPFLAGS="-I/usr/include"
export LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lbz2"
export PKG_CONFIG_PATH=/usr/lib/x86_64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH
fi
# Add flatpak to XDG_DATA_DIRS
export XDG_DATA_DIRS=$XDG_DATA_DIRS:/usr/share:/var/lib/flatpak/exports/share:$HOME/.local/share/flatpak/exports/share
# Allow unfree nixos
export NIXPKGS_ALLOW_UNFREE=1
# Allow insecure nixpkgs
export NIXPKGS_ALLOW_INSECURE=1
# 1Password SSH Agent
export SSH_AUTH_SOCK=$HOME/.1password/agent.sock
# Tradaware / DiscountOffice Configuration
if [ -d "/home/menno/Projects/Work" ]; then
export TRADAWARE_DEVOPS=true
fi
# 1Password Source Plugin (Assuming bash compatibility)
if [ -f /home/menno/.config/op/plugins.sh ]; then
source /home/menno/.config/op/plugins.sh
fi
# Initialize starship if available
if ! command -v starship &> /dev/null; then
    echo "FYI, starship not found"
else
    export STARSHIP_ENABLE_RIGHT_PROMPT=true
    export STARSHIP_ENABLE_BASH_CONTINUATION=true
    eval "$(starship init bash)"
fi
# Read .op_sat
if [ -f ~/.op_sat ]; then
    export OP_SERVICE_ACCOUNT_TOKEN=$(cat ~/.op_sat)
    # Ensure .op_sat is 0600 and only readable by the owner
    if [ "$(stat -c %a ~/.op_sat)" != "600" ]; then
        echo "WARNING: ~/.op_sat is not 0600, please fix this!"
    fi
    if [ "$(stat -c %U ~/.op_sat)" != "$(whoami)" ]; then
        echo "WARNING: ~/.op_sat is not owned by the current user, please fix this!"
    fi
fi
# Source nix home-manager
if [ -f "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh"
fi
# Source ble.sh if it exists
if [[ -f "${HOME}/.nix-profile/share/blesh/ble.sh" ]]; then
source "${HOME}/.nix-profile/share/blesh/ble.sh"
# Custom function for fzf history search
function fzf_history_search() {
local selected
selected=$(history | fzf --tac --height=40% --layout=reverse --border --info=inline \
--query="$READLINE_LINE" \
--color 'fg:#ebdbb2,bg:#282828,hl:#fabd2f,fg+:#ebdbb2,bg+:#3c3836,hl+:#fabd2f' \
--color 'info:#83a598,prompt:#bdae93,spinner:#fabd2f,pointer:#83a598,marker:#fe8019,header:#665c54' \
| sed 's/^ *[0-9]* *//')
if [[ -n "$selected" ]]; then
READLINE_LINE="$selected"
READLINE_POINT=${#selected}
fi
ble-redraw-prompt
}
# Bind Ctrl+R to our custom function
bind -x '"\C-r": fzf_history_search'
fi
# In case a basrc.local exists, source it
if [ -f $HOME/.bashrc.local ]; then
source $HOME/.bashrc.local
fi
# Display a welcome message for interactive shells
if [ -t 1 ]; then
helloworld
fi


@@ -3,24 +3,24 @@ name: Python Lint Check
on:
  pull_request:
  push:
-    branches: [ master ]
+    branches: [master]
jobs:
  check-python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install Python linting tools
        run: |
          python -m pip install --upgrade pip
          python -m pip install pylint black
      - name: Run pylint
        run: |
          python_files=$(find . -name "*.py" -type f)
@@ -28,9 +28,9 @@ jobs:
            echo "No Python files found to lint"
            exit 0
          fi
-          pylint $python_files
+          pylint --exit-zero $python_files
      - name: Check Black formatting
        run: |
          python_files=$(find . -name "*.py" -type f)
@@ -38,5 +38,5 @@ jobs:
            echo "No Python files found to lint"
            exit 0
          fi
          black --check $python_files
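The same checks can be reproduced locally before pushing; a minimal sketch that mirrors the workflow steps above (the find-based file list is the same logic the workflow uses):

```bash
python -m pip install --upgrade pip pylint black
python_files=$(find . -name "*.py" -type f)
pylint --exit-zero $python_files   # report issues without failing the run
black --check $python_files        # fail if formatting differs from black's style
```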

.gitignore

@@ -1,13 +1,4 @@
config/ssh/config.d/*
!config/ssh/config.d/*.gpg
logs/*
# Don't include secrets in the repository but do include encrypted secrets
!secrets/**/*.gpg
secrets/**/*.*
# SHA256 hashes of the encrypted secrets
*.sha256
# python cache
**/__pycache__/
**/__pycache__/
.ansible/
.ansible/.lock

README.md

@@ -1,16 +1,13 @@
# Setup
This dotfiles repository is intended to be used with Fedora 40+, Ubuntu 20.04+, or Arch Linux.
Please install a clean version of either distro with GNOME and then follow the steps below.
Please install a clean version of either distro and then follow the steps below.
## Installation
### 0. Install distro
Download the latest ISO from your desired distro and write it to a USB stick.
I'd recommend the GNOME version as it's easier to set up, unless you're planning on setting up a server, in which case I recommend the server ISO for the specific distro.
#### Note: If you intend on using a desktop environment you should select the GNOME version as this dotfiles repository expects the GNOME desktop environment for various configurations
### 1. Clone dotfiles to home directory
@@ -44,15 +41,6 @@ Run the `dotf update` command, although the setup script did most of the work so
dotf update
```
### 5. Decrypt secrets
Decrypt the secrets either with 1Password or by manually providing the decryption key.
Various configurations depend on the decrypted secrets, such as the SSH keys, the YubiKey PAM configuration, and more.
```bash
dotf secrets decrypt
```
### 6. Profit
You should now have a fully setup system with all the configurations applied.
@@ -65,12 +53,13 @@ Here are some paths that contain files named after the hostname of the system.
If you add a new system you should add the relevant files to these paths.
- `config/ssh/authorized_keys`: Contains the public keys per hostname that will be symlinked to the `~/.ssh/authorized_keys` file.
- `config/home-manager/flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
- `flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
### Server reboots
If you reboot a server, it most likely runs JuiceFS.
To make sure every service properly accesses JuiceFS-mounted files, restart the services once the server comes back online.
```bash
dotf service stop --all
df # confirm JuiceFS is mounted
@@ -81,16 +70,19 @@ dotf service start --all
If you need to adjust anything regarding the /mnt/object_storage JuiceFS volume, first shut down all services:
```bash
dotf service stop --all
```
Unmount the volume:
```bash
sudo systemctl stop juicefs
```
Optionally, if you're going to touch the metadata, you may need to stop Redis too.
```bash
cd ~/services/juicefs-redis/
docker compose down --remove-orphans
@@ -103,3 +95,36 @@ To add a new system you should follow these steps:
1. Add the relevant files shown in the section above.
2. Ensure you've either updated or added the `$HOME/.hostname` file with the hostname of the system.
3. Run `dotf update` to ensure the symlinks are properly updated/created (see the sketch below).
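For example, registering a hypothetical new host could look like this (a sketch; `mennos-newbox` is a placeholder, and the paths come from the steps above):

```bash
echo "mennos-newbox" > "$HOME/.hostname"
# add config/ssh/authorized_keys/mennos-newbox and a homeConfigurations entry in flake.nix
dotf update
```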
---
## Using 1Password SSH Agent with WSL2 (Windows 11)
This setup allows you to use your 1Password-managed SSH keys inside WSL2. The WSL-side steps are automated by Ansible. The following Windows-side steps must be performed manually:
### Windows-side Setup
1. **Enable 1Password SSH Agent**
- Open the 1Password app on Windows.
- Go to **Settings → Developer** and enable **"Use the SSH agent"**.
2. **Install npiperelay using winget**
- Open PowerShell and run the following command:
```sh
winget install albertony.npiperelay
```
- This will install the latest maintained fork of npiperelay and add it to your PATH automatically.
3. **Restart Windows Terminal**
- After completing the above steps, restart your Windows Terminal to ensure all changes take effect.
4. **Test the SSH Agent in WSL2**
- Open your WSL2 terminal and run:
```sh
ssh-add -l
```
- If your 1Password keys are listed, the setup is complete.
#### References
- [Using 1Password's SSH Agent with WSL2](https://dev.to/d4vsanchez/use-1password-ssh-agent-in-wsl-2j6m)
- [How to change the PATH environment variable in Windows](https://www.wikihow.com/Change-the-PATH-Environment-Variable-on-Windows)
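For context, the WSL-side piece that the Ansible step automates is essentially a named-pipe-to-Unix-socket relay. A minimal sketch of the idea, assuming `socat` is installed in WSL and `npiperelay.exe` is on the Windows PATH (the socket path matches the `SSH_AUTH_SOCK` used elsewhere in this repo; all names here are illustrative, not the repo's actual script):

```bash
export SSH_AUTH_SOCK="$HOME/.1password/agent.sock"  # assumed socket path
# Relay the Windows OpenSSH agent pipe to a Unix socket, if not already running
if ! pgrep -f "npiperelay.exe" >/dev/null 2>&1; then
    rm -f "$SSH_AUTH_SOCK"
    (setsid socat UNIX-LISTEN:"$SSH_AUTH_SOCK,fork" \
        EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork &) >/dev/null 2>&1
fi
ssh-add -l  # should list your 1Password-managed keys
```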


@@ -0,0 +1,82 @@
---
flatpaks: false
install_ui_apps: false
# European countries for EU-specific access control
eu_countries_codes:
- AL # Albania
- AD # Andorra
- AM # Armenia
- AT # Austria
- AZ # Azerbaijan
# - BY # Belarus (disabled due to geopolitical reasons)
- BE # Belgium
- BA # Bosnia and Herzegovina
- BG # Bulgaria
- HR # Croatia
- CY # Cyprus
- CZ # Czech Republic
- DK # Denmark
- EE # Estonia
- FI # Finland
- FR # France
- GE # Georgia
- DE # Germany
- GR # Greece
- HU # Hungary
- IS # Iceland
- IE # Ireland
- IT # Italy
- XK # Kosovo
- LV # Latvia
- LI # Liechtenstein
- LT # Lithuania
- LU # Luxembourg
- MK # North Macedonia
- MT # Malta
- MD # Moldova
- MC # Monaco
- ME # Montenegro
- NL # Netherlands
- NO # Norway
- PL # Poland
- PT # Portugal
- RO # Romania
# - RU # Russia (disabled due to geopolitical reasons)
- SM # San Marino
- RS # Serbia
- SK # Slovakia
- SI # Slovenia
- ES # Spain
- SE # Sweden
- CH # Switzerland
- TR # Turkey
- UA # Ukraine
- GB # United Kingdom
- VA # Vatican City
# Trusted non-EU countries for extended access control
trusted_countries_codes:
- US # United States
- AU # Australia
- NZ # New Zealand
- JP # Japan
# Countries that are allowed to access the server Caddy reverse proxy
allowed_countries_codes:
- US # United States
- GB # United Kingdom
- DE # Germany
- FR # France
- IT # Italy
- NL # Netherlands
- JP # Japan
- KR # South Korea
- CH # Switzerland
- AU # Australia (Added for UpDown.io to monitor server uptime)
- CA # Canada (Added for UpDown.io to monitor server uptime)
- FI # Finland (Added for UpDown.io to monitor server uptime)
- SG # Singapore (Added for UpDown.io to monitor server uptime)
# Enable/disable country blocking globally
enable_country_blocking: true
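# For reference, the Caddyfile template presumably renders these lists into a
# geoip matcher; a minimal sketch assuming the caddy-maxmind-geolocation
# plugin (directive names and database path are assumptions):
#
#   @allowed {
#       maxmind_geolocation {
#           db_path /etc/caddy/GeoLite2-Country.mmdb
#           allow_countries {{ allowed_countries_codes | join(' ') }}
#       }
#   }
#   handle @allowed {
#       reverse_proxy upstream:8080
#   }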

ansible/handlers/main.yml Normal file

@@ -0,0 +1,30 @@
---
- name: Systemctl daemon-reload
become: true
ansible.builtin.systemd:
daemon_reload: true
- name: Restart SSH service
become: true
ansible.builtin.service:
name: ssh
state: restarted
enabled: true
- name: Reload systemd
become: true
ansible.builtin.systemd:
daemon_reload: true
- name: restart borg-local-sync
  become: true
  ansible.builtin.systemd:
    name: borg-local-sync.service
    state: restarted
    enabled: true
- name: restart borg-local-sync-timer
become: true
ansible.builtin.systemd:
name: borg-local-sync.timer
state: restarted
enabled: true
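# Tasks trigger these handlers by exact name, e.g. (task and template names
# are illustrative):
#   - name: Install borg-local-sync unit
#     ansible.builtin.template:
#       src: borg-local-sync.service.j2
#       dest: /etc/systemd/system/borg-local-sync.service
#     notify: restart borg-local-sync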

ansible/inventory.ini Normal file

@@ -0,0 +1,11 @@
[workstations]
mennos-laptop ansible_connection=local
mennos-desktop ansible_connection=local
[servers]
mennos-vps ansible_connection=local
mennos-server ansible_connection=local
mennos-rtlsdr-pc ansible_connection=local
[wsl]
mennos-desktopw ansible_connection=local

ansible/playbook.yml Normal file

@@ -0,0 +1,19 @@
---
- name: Configure all hosts
hosts: all
handlers:
- name: Import handler tasks
ansible.builtin.import_tasks: handlers/main.yml
gather_facts: true
tasks:
- name: Include global tasks
ansible.builtin.import_tasks: tasks/global/global.yml
- name: Include workstation tasks
ansible.builtin.import_tasks: tasks/workstations/workstation.yml
when: inventory_hostname in ['mennos-laptop', 'mennos-desktop']
- name: Include server tasks
ansible.builtin.import_tasks: tasks/servers/server.yml
when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc', 'mennos-desktopw']
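# Typical local invocation (a sketch; assumes the host's entry in
# inventory.ini matches the hostname stored in ~/.hostname):
#   ansible-playbook -i inventory.ini playbook.yml --limit "$(cat ~/.hostname)"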


@@ -5,19 +5,31 @@
changed_when: false
failed_when: false
# Arch-based distributions (CachyOS, Arch Linux, etc.)
- name: Install Docker on Arch-based systems
community.general.pacman:
name:
- docker
- docker-compose
- docker-buildx
state: present
become: true
when: docker_check.rc != 0 and ansible_pkg_mgr == 'pacman'
# Non-Arch distributions
- name: Download Docker installation script
ansible.builtin.get_url:
url: https://get.docker.com
dest: /tmp/get-docker.sh
mode: "0755"
when: docker_check.rc != 0
when: docker_check.rc != 0 and ansible_pkg_mgr != 'pacman'
- name: Install Docker CE
- name: Install Docker CE on non-Arch systems
ansible.builtin.shell: bash -c 'set -o pipefail && sh /tmp/get-docker.sh'
args:
executable: /bin/bash
creates: /usr/bin/docker
when: docker_check.rc != 0
when: docker_check.rc != 0 and ansible_pkg_mgr != 'pacman'
- name: Add user to docker group
ansible.builtin.user:
@@ -27,25 +39,15 @@
become: true
when: docker_check.rc != 0
- name: Check if docker is running
ansible.builtin.systemd:
name: docker
state: started
enabled: true
become: true
register: docker_service
- name: Reload systemd
ansible.builtin.systemd:
daemon_reload: true
become: true
when: docker_service.changed
- name: Enable and start docker service
ansible.builtin.systemd:
name: docker
state: started
enabled: true
become: true
when: docker_service.changed
- name: Reload systemd
ansible.builtin.systemd:
daemon_reload: true
become: true
notify: Reload systemd


@@ -0,0 +1,125 @@
---
- name: Gather package facts
ansible.builtin.package_facts:
manager: auto
become: true
- name: Include Tailscale tasks
ansible.builtin.import_tasks: tasks/global/tailscale.yml
become: true
when: "'microsoft-standard-WSL2' not in ansible_kernel"
- name: Include Docker tasks
ansible.builtin.import_tasks: tasks/global/docker.yml
become: true
when: "'microsoft-standard-WSL2' not in ansible_kernel"
- name: Include Ollama tasks
ansible.builtin.import_tasks: tasks/global/ollama.yml
become: true
when: "'microsoft-standard-WSL2' not in ansible_kernel"
- name: Include OpenSSH Server tasks
ansible.builtin.import_tasks: tasks/global/openssh-server.yml
become: true
when: "'microsoft-standard-WSL2' not in ansible_kernel"
- name: Ensure common packages are installed on Arch-based systems
ansible.builtin.package:
name:
- git
- vim
- curl
- wget
- httpie
- python
- python-pip
- python-pipx
- python-pylint
- go
state: present
become: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure common packages are installed on non-Arch systems
ansible.builtin.package:
name:
- git
- vim
- curl
- wget
- httpie
- python3
- python3-pip
- python3-venv
- pylint
- black
- pipx
- nala
- golang
state: present
become: true
when: ansible_pkg_mgr != 'pacman'
- name: Configure performance optimizations
ansible.builtin.sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: true
become: true
loop:
- { name: "vm.max_map_count", value: "16777216" }
# --- PBinCLI via pipx ---
- name: Ensure pbincli is installed with pipx
ansible.builtin.command: pipx install pbincli
args:
creates: ~/.local/bin/pbincli
environment:
PIPX_DEFAULT_PYTHON: /usr/bin/python3
become: false
- name: Ensure ~/.config/pbincli directory exists
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/.config/pbincli"
state: directory
mode: "0755"
- name: Configure pbincli to use custom server
ansible.builtin.copy:
dest: "{{ ansible_env.HOME }}/.config/pbincli/pbincli.conf"
content: |
server=https://bin.mvl.sh
mode: "0644"
- name: Include WSL2 tasks
ansible.builtin.import_tasks: tasks/global/wsl.yml
when: "'microsoft-standard-WSL2' in ansible_kernel"
- name: Include Utils tasks
ansible.builtin.import_tasks: tasks/global/utils.yml
become: true
tags: utils
- name: Ensure ~/.hushlogin exists
ansible.builtin.stat:
path: ~/.hushlogin
register: hushlogin_stat
- name: Create ~/.hushlogin if it does not exist
ansible.builtin.file:
path: ~/.hushlogin
state: touch
mode: "0644"
when: not hushlogin_stat.stat.exists
# Ensure pwfeedback is enabled in sudoers for better password UX
- name: Ensure pwfeedback is present in Defaults env_reset line in /etc/sudoers
ansible.builtin.replace:
path: /etc/sudoers
regexp: '^Defaults\s+env_reset(?!.*pwfeedback)'
replace: "Defaults env_reset,pwfeedback"
validate: "visudo -cf %s"
become: true
tags: sudoers
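# After the replace above, the affected sudoers line reads:
#   Defaults env_reset,pwfeedback
# which makes sudo echo asterisks while the password is typed.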


@@ -0,0 +1,36 @@
---
- name: Ensure openssh-server is installed on Arch-based systems
ansible.builtin.package:
name: openssh
state: present
when: ansible_pkg_mgr == 'pacman'
- name: Ensure openssh-server is installed on non-Arch systems
ansible.builtin.package:
name: openssh-server
state: present
when: ansible_pkg_mgr != 'pacman'
- name: Ensure SSH service is enabled and running on Arch-based systems
ansible.builtin.service:
name: sshd
state: started
enabled: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure SSH service is enabled and running on non-Arch systems
ansible.builtin.service:
name: ssh
state: started
enabled: true
when: ansible_pkg_mgr != 'pacman'
- name: Ensure SSH server configuration is proper
ansible.builtin.template:
src: templates/sshd_config.j2
dest: /etc/ssh/sshd_config
owner: root
group: root
mode: "0644"
validate: "/usr/sbin/sshd -t -f %s"
notify: Restart SSH service


@@ -0,0 +1,62 @@
---
- name: Process utils files
block:
- name: Load DOTFILES_PATH environment variable
ansible.builtin.set_fact:
dotfiles_path: "{{ lookup('env', 'DOTFILES_PATH') }}"
become: false
- name: Ensure ~/.local/bin exists
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/.local/bin"
state: directory
mode: "0755"
become: false
- name: Scan utils folder for files
ansible.builtin.find:
paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: file
register: utils_files
become: false
- name: Scan utils folder for Go projects (directories with go.mod)
ansible.builtin.find:
paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: directory
recurse: true
register: utils_dirs
become: false
- name: Filter directories that contain go.mod files
ansible.builtin.stat:
path: "{{ item.path }}/go.mod"
loop: "{{ utils_dirs.files }}"
register: go_mod_check
become: false
- name: Create symlinks for utils scripts
ansible.builtin.file:
src: "{{ item.path }}"
dest: "{{ ansible_env.HOME }}/.local/bin/{{ item.path | basename }}"
state: link
loop: "{{ utils_files.files }}"
when: not item.path.endswith('.go')
become: false
- name: Compile standalone Go files and place binaries in ~/.local/bin
ansible.builtin.command:
cmd: go build -o "{{ ansible_env.HOME }}/.local/bin/{{ item.path | basename | regex_replace('\.go$', '') }}" "{{ item.path }}"
loop: "{{ utils_files.files }}"
when: item.path.endswith('.go')
become: false
- name: Compile Go projects and place binaries in ~/.local/bin
ansible.builtin.command:
cmd: go build -o "{{ ansible_env.HOME }}/.local/bin/{{ item.item.path | basename }}" .
chdir: "{{ item.item.path }}"
loop: "{{ go_mod_check.results }}"
when: item.stat.exists
become: false
tags:
- utils
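# Illustrative layout under ansible/tasks/global/utils/ and the resulting
# artifacts (file names are examples):
#   llm             -> symlinked to ~/.local/bin/llm
#   inuse.go        -> compiled to ~/.local/bin/inuse
#   mytool/go.mod   -> project built to ~/.local/bin/mytool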


@@ -0,0 +1,124 @@
# Dynamic DNS OnePassword Setup
This document explains how to set up the required OnePassword entries for the Dynamic DNS automation.
## Overview
The Dynamic DNS task automatically retrieves credentials from OnePassword using the Ansible OnePassword lookup plugin. This eliminates the need for vault files and provides better security.
## Required OnePassword Entries
### 1. CloudFlare API Token
**Location:** `CloudFlare API Token` in `Dotfiles` vault, field `password`
**Setup Steps:**
1. Go to [CloudFlare API Tokens](https://dash.cloudflare.com/profile/api-tokens)
2. Click "Create Token"
3. Use the "Edit zone DNS" template
4. Configure permissions:
- Zone: DNS: Edit
- Zone Resources: Include all zones (or specific zones for your domains)
5. Add IP address filtering if desired (optional but recommended)
6. Click "Continue to summary" and "Create Token"
7. Copy the token and save it in OnePassword:
- Title: `CloudFlare API Token`
- Vault: `Dotfiles`
- Field: `password` (this should be the main password field)
### 2. Telegram Bot Credentials
**Location:** `Telegram DynDNS Bot` in `Dotfiles` vault, fields `password` and `chat_id`
**Setup Steps:**
#### Create Telegram Bot:
1. Message [@BotFather](https://t.me/BotFather) on Telegram
2. Send `/start` then `/newbot`
3. Follow the prompts to create your bot
4. Save the bot token (format: `123456789:ABCdefGHijklMNopQRstUVwxyz`)
#### Get Chat ID:
1. Send any message to your new bot
2. Visit: `https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates`
3. Look for `"chat":{"id":YOUR_CHAT_ID}` in the response
4. Save the chat ID (format: `987654321` or `-987654321` for groups)
#### Save in OnePassword:
- Title: `Telegram DynDNS Bot`
- Vault: `Dotfiles`
- Fields:
- `password`: Your bot token (123456789:ABCdefGHijklMNopQRstUVwxyz)
- `chat_id`: Your chat ID (987654321)
## Verification
You can test that the OnePassword lookups work by running:
```bash
# Test CloudFlare token lookup
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'CloudFlare API Token', vault='Dotfiles', field='password') }}"
# Test Telegram bot token
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'Telegram DynDNS Bot', vault='Dotfiles', field='password') }}"
# Test Telegram chat ID
ansible localhost -m debug -a "msg={{ lookup('community.general.onepassword', 'Telegram DynDNS Bot', vault='Dotfiles', field='chat_id') }}"
```
## Security Notes
- Credentials are never stored in version control
- Environment file (`~/.local/bin/dynamic-dns.env`) has 600 permissions
- OnePassword CLI must be authenticated before running Ansible
- Make sure to run `op signin` before executing the playbook
## Troubleshooting
### OnePassword CLI Not Authenticated
```bash
op signin
```
### Missing Fields in OnePassword
Ensure the exact field names match:
- CloudFlare: field must be named `password`
- Telegram: fields must be named `password` and `chat_id`
### Invalid CloudFlare Token
- Check token has `Zone:DNS:Edit` permissions
- Verify token is active in CloudFlare dashboard
- Test with: `curl -H "Authorization: Bearer YOUR_TOKEN" https://api.cloudflare.com/client/v4/user/tokens/verify`
### Telegram Not Working
- Ensure you've sent at least one message to your bot
- Verify chat ID format (numbers only, may start with -)
- Test with: `go run dynamic-dns-cf.go --test-telegram`
## Usage
Once set up, the dynamic DNS will automatically:
- Update DNS records every 15 minutes
- Send Telegram notifications when IP changes
- Log all activity to system journal (`journalctl -t dynamic-dns`)
## Domains Configured
The automation updates these domains:
- `vleeuwen.me`
- `mvl.sh`
- `mennovanleeuwen.nl`
To modify the domain list, edit the wrapper script at:
`~/.local/bin/dynamic-dns-update.sh`
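The wrapper itself is a thin shell around the Go binary; a sketch assuming it sources the environment file and tags its output for the journal (the real script may differ):
```bash
#!/bin/bash
# Sketch of ~/.local/bin/dynamic-dns-update.sh (structure assumed)
set -euo pipefail
source "$HOME/.local/bin/dynamic-dns.env" # CLOUDFLARE_API_TOKEN, TELEGRAM_*
"$HOME/.local/bin/dynamic-dns-cf" \
    -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl" 2>&1 | logger -t dynamic-dns
```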


@@ -0,0 +1,903 @@
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"net/http"
"os"
"strings"
"time"
)
// CloudFlare API structures
type CloudFlareResponse struct {
Success bool `json:"success"`
Errors []CloudFlareError `json:"errors"`
Result json.RawMessage `json:"result"`
Messages []CloudFlareMessage `json:"messages"`
}
type CloudFlareError struct {
Code int `json:"code"`
Message string `json:"message"`
}
type CloudFlareMessage struct {
Code int `json:"code"`
Message string `json:"message"`
}
type DNSRecord struct {
ID string `json:"id"`
Type string `json:"type"`
Name string `json:"name"`
Content string `json:"content"`
TTL int `json:"ttl"`
ZoneID string `json:"zone_id"`
}
type Zone struct {
ID string `json:"id"`
Name string `json:"name"`
}
type TokenVerification struct {
ID string `json:"id"`
Status string `json:"status"`
}
type NotificationInfo struct {
RecordName string
OldIP string
NewIP string
IsNew bool
}
// Configuration
type Config struct {
APIToken string
RecordNames []string
IPSources []string
DryRun bool
Verbose bool
Force bool
TTL int
TelegramBotToken string
TelegramChatID string
Client *http.Client
}
// Default IP sources
var defaultIPSources = []string{
"https://ifconfig.co/ip",
"https://ip.seeip.org",
"https://ipv4.icanhazip.com",
"https://api.ipify.org",
}
func main() {
config := &Config{
Client: &http.Client{Timeout: 10 * time.Second},
}
// Command line flags
var ipSourcesFlag string
var recordsFlag string
var listZones bool
var testTelegram bool
flag.StringVar(&recordsFlag, "record", "", "DNS A record name(s) to update - comma-separated for multiple (required)")
flag.StringVar(&ipSourcesFlag, "ip-sources", "", "Comma-separated list of IP detection services (optional)")
flag.BoolVar(&config.DryRun, "dry-run", false, "Show what would be done without making changes")
flag.BoolVar(&config.Verbose, "verbose", false, "Enable verbose logging")
flag.BoolVar(&listZones, "list-zones", false, "List all accessible zones and exit")
flag.BoolVar(&config.Force, "force", false, "Force update even if IP hasn't changed")
flag.BoolVar(&testTelegram, "test-telegram", false, "Send a test Telegram notification and exit")
flag.IntVar(&config.TTL, "ttl", 300, "TTL for DNS record in seconds")
// Custom usage function
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "CloudFlare Dynamic DNS Tool\n\n")
fmt.Fprintf(os.Stderr, "Updates CloudFlare DNS A records with your current public IP address.\n")
fmt.Fprintf(os.Stderr, "Supports multiple records, dry-run mode, and Telegram notifications.\n\n")
fmt.Fprintf(os.Stderr, "USAGE:\n")
fmt.Fprintf(os.Stderr, " %s [OPTIONS]\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, "REQUIRED ENVIRONMENT VARIABLES:\n")
fmt.Fprintf(os.Stderr, " CLOUDFLARE_API_TOKEN CloudFlare API token with Zone:DNS:Edit permissions\n")
fmt.Fprintf(os.Stderr, " Get from: https://dash.cloudflare.com/profile/api-tokens\n\n")
fmt.Fprintf(os.Stderr, "OPTIONAL ENVIRONMENT VARIABLES:\n")
fmt.Fprintf(os.Stderr, " TELEGRAM_BOT_TOKEN Telegram bot token for notifications\n")
fmt.Fprintf(os.Stderr, " TELEGRAM_CHAT_ID Telegram chat ID to send notifications to\n\n")
fmt.Fprintf(os.Stderr, "OPTIONS:\n")
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, "\nEXAMPLES:\n")
fmt.Fprintf(os.Stderr, " # Update single record\n")
fmt.Fprintf(os.Stderr, " %s -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Update multiple records\n")
fmt.Fprintf(os.Stderr, " %s -record \"home.example.com,api.example.com,vpn.mydomain.net\"\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Dry run with verbose output\n")
fmt.Fprintf(os.Stderr, " %s -dry-run -verbose -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Force update even if IP hasn't changed\n")
fmt.Fprintf(os.Stderr, " %s -force -record home.example.com\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Custom TTL and IP sources\n")
fmt.Fprintf(os.Stderr, " %s -record home.example.com -ttl 600 -ip-sources \"https://ifconfig.co/ip,https://api.ipify.org\"\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # List accessible CloudFlare zones\n")
fmt.Fprintf(os.Stderr, " %s -list-zones\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, " # Test Telegram notifications\n")
fmt.Fprintf(os.Stderr, " %s -test-telegram\n\n", os.Args[0])
fmt.Fprintf(os.Stderr, "SETUP:\n")
fmt.Fprintf(os.Stderr, " 1. Create CloudFlare API token:\n")
fmt.Fprintf(os.Stderr, " - Go to https://dash.cloudflare.com/profile/api-tokens\n")
fmt.Fprintf(os.Stderr, " - Use 'Edit zone DNS' template\n")
fmt.Fprintf(os.Stderr, " - Select your zones\n")
fmt.Fprintf(os.Stderr, " - Copy token and set CLOUDFLARE_API_TOKEN environment variable\n\n")
fmt.Fprintf(os.Stderr, " 2. Optional: Setup Telegram notifications:\n")
fmt.Fprintf(os.Stderr, " - Message @BotFather on Telegram to create a bot\n")
fmt.Fprintf(os.Stderr, " - Get your chat ID by messaging your bot, then visit:\n")
fmt.Fprintf(os.Stderr, " https://api.telegram.org/bot<BOT_TOKEN>/getUpdates\n")
fmt.Fprintf(os.Stderr, " - Set TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables\n\n")
fmt.Fprintf(os.Stderr, "NOTES:\n")
fmt.Fprintf(os.Stderr, " - Records can be in different CloudFlare zones\n")
fmt.Fprintf(os.Stderr, " - Only updates when IP actually changes (unless -force is used)\n")
fmt.Fprintf(os.Stderr, " - Supports both root domains and subdomains\n")
fmt.Fprintf(os.Stderr, " - Telegram notifications sent only when IP changes\n")
fmt.Fprintf(os.Stderr, " - Use -dry-run to test without making changes\n\n")
}
flag.Parse()
// Validate required arguments (unless listing zones or testing telegram)
if recordsFlag == "" && !listZones && !testTelegram {
fmt.Fprintf(os.Stderr, "Error: -record flag is required\n")
flag.Usage()
os.Exit(1)
}
// Parse record names
if recordsFlag != "" {
config.RecordNames = strings.Split(recordsFlag, ",")
// Trim whitespace from each record name
for i, record := range config.RecordNames {
config.RecordNames[i] = strings.TrimSpace(record)
}
}
// Get API token from environment
config.APIToken = os.Getenv("CLOUDFLARE_API_TOKEN")
if config.APIToken == "" {
fmt.Fprintf(os.Stderr, "Error: CLOUDFLARE_API_TOKEN environment variable is required\n")
fmt.Fprintf(os.Stderr, "Get your API token from: https://dash.cloudflare.com/profile/api-tokens\n")
fmt.Fprintf(os.Stderr, "Create a token with 'Zone:DNS:Edit' permissions for your zone\n")
os.Exit(1)
}
// Get optional Telegram credentials
config.TelegramBotToken = os.Getenv("TELEGRAM_BOT_TOKEN")
config.TelegramChatID = os.Getenv("TELEGRAM_CHAT_ID")
if config.Verbose && config.TelegramBotToken != "" && config.TelegramChatID != "" {
fmt.Println("Telegram notifications enabled")
}
// Parse IP sources
if ipSourcesFlag != "" {
config.IPSources = strings.Split(ipSourcesFlag, ",")
} else {
config.IPSources = defaultIPSources
}
if config.Verbose {
fmt.Printf("Config: Records=%v, TTL=%d, DryRun=%v, Force=%v, IPSources=%v\n",
config.RecordNames, config.TTL, config.DryRun, config.Force, config.IPSources)
}
// If testing telegram, do that and exit (skip API token validation)
if testTelegram {
if err := testTelegramNotification(config); err != nil {
fmt.Fprintf(os.Stderr, "Error testing Telegram: %v\n", err)
os.Exit(1)
}
return
}
// Validate API token
if err := validateToken(config); err != nil {
fmt.Fprintf(os.Stderr, "Error validating API token: %v\n", err)
os.Exit(1)
}
if config.Verbose {
fmt.Println("API token validated successfully")
}
// If listing zones, do that and exit
if listZones {
if err := listAllZones(config); err != nil {
fmt.Fprintf(os.Stderr, "Error listing zones: %v\n", err)
os.Exit(1)
}
return
}
// Get current public IP
currentIP, err := getCurrentIP(config)
if err != nil {
fmt.Fprintf(os.Stderr, "Error getting current IP: %v\n", err)
os.Exit(1)
}
if config.Verbose {
fmt.Printf("Current public IP: %s\n", currentIP)
fmt.Printf("Processing %d record(s)\n", len(config.RecordNames))
}
// Process each record
var totalUpdates int
var allNotifications []NotificationInfo
for _, recordName := range config.RecordNames {
if config.Verbose {
fmt.Printf("\n--- Processing record: %s ---\n", recordName)
}
// Find the zone for the record
zoneName, zoneID, err := findZoneForRecord(config, recordName)
if err != nil {
fmt.Fprintf(os.Stderr, "Error finding zone for %s: %v\n", recordName, err)
continue
}
if config.Verbose {
fmt.Printf("Found zone: %s (ID: %s)\n", zoneName, zoneID)
}
// Find existing DNS record
record, err := findDNSRecordByName(config, zoneID, recordName)
if err != nil {
fmt.Fprintf(os.Stderr, "Error finding DNS record %s: %v\n", recordName, err)
continue
}
// Compare IPs
if record != nil {
if record.Content == currentIP && !config.Force {
fmt.Printf("DNS record %s already points to %s - no update needed\n", recordName, currentIP)
continue
}
if config.Verbose {
if record.Content == currentIP {
fmt.Printf("DNS record %s already points to %s, but forcing update\n",
recordName, currentIP)
} else {
fmt.Printf("DNS record %s currently points to %s, needs update to %s\n",
recordName, record.Content, currentIP)
}
}
} else {
if config.Verbose {
fmt.Printf("DNS record %s does not exist, will create it\n", recordName)
}
}
// Update or create record
if config.DryRun {
if record != nil {
if record.Content == currentIP && config.Force {
fmt.Printf("DRY RUN: Would force update DNS record %s (already %s)\n",
recordName, currentIP)
} else {
fmt.Printf("DRY RUN: Would update DNS record %s from %s to %s\n",
recordName, record.Content, currentIP)
}
} else {
fmt.Printf("DRY RUN: Would create DNS record %s with IP %s\n",
recordName, currentIP)
}
// Collect notification info for dry-run
if record == nil || record.Content != currentIP || config.Force {
var oldIPForNotification string
if record != nil {
oldIPForNotification = record.Content
}
allNotifications = append(allNotifications, NotificationInfo{
RecordName: recordName,
OldIP: oldIPForNotification,
NewIP: currentIP,
IsNew: record == nil,
})
}
continue
}
var wasUpdated bool
var oldIP string
if record != nil {
oldIP = record.Content
err = updateDNSRecordByName(config, zoneID, record.ID, recordName, currentIP)
if err != nil {
fmt.Fprintf(os.Stderr, "Error updating DNS record %s: %v\n", recordName, err)
continue
}
fmt.Printf("Successfully updated DNS record %s to %s\n", recordName, currentIP)
wasUpdated = true
} else {
err = createDNSRecordByName(config, zoneID, recordName, currentIP)
if err != nil {
fmt.Fprintf(os.Stderr, "Error creating DNS record %s: %v\n", recordName, err)
continue
}
fmt.Printf("Successfully created DNS record %s with IP %s\n", recordName, currentIP)
wasUpdated = true
}
// Collect notification info for actual updates
if wasUpdated && (record == nil || oldIP != currentIP || config.Force) {
allNotifications = append(allNotifications, NotificationInfo{
RecordName: recordName,
OldIP: oldIP,
NewIP: currentIP,
IsNew: record == nil,
})
totalUpdates++
}
}
// Send batch notification if there were any changes
if len(allNotifications) > 0 {
sendBatchTelegramNotification(config, allNotifications, config.DryRun)
}
if !config.DryRun && config.Verbose {
fmt.Printf("\nProcessed %d record(s), %d update(s) made\n", len(config.RecordNames), totalUpdates)
}
}
func validateToken(config *Config) error {
req, err := http.NewRequest("GET", "https://api.cloudflare.com/client/v4/user/tokens/verify", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("token validation failed: %v", cfResp.Errors)
}
var tokenInfo TokenVerification
if err := json.Unmarshal(cfResp.Result, &tokenInfo); err != nil {
return err
}
if tokenInfo.Status != "active" {
return fmt.Errorf("token is not active, status: %s", tokenInfo.Status)
}
return nil
}
func getCurrentIP(config *Config) (string, error) {
var lastError error
for _, source := range config.IPSources {
if config.Verbose {
fmt.Printf("Trying IP source: %s\n", source)
}
resp, err := config.Client.Get(source)
if err != nil {
lastError = err
if config.Verbose {
fmt.Printf("Failed to get IP from %s: %v\n", source, err)
}
continue
}
body, err := io.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
lastError = err
continue
}
if resp.StatusCode != 200 {
lastError = fmt.Errorf("HTTP %d from %s", resp.StatusCode, source)
continue
}
ip := strings.TrimSpace(string(body))
if ip != "" {
return ip, nil
}
lastError = fmt.Errorf("empty response from %s", source)
}
return "", fmt.Errorf("failed to get IP from any source, last error: %v", lastError)
}
func findZoneForRecord(config *Config, recordName string) (string, string, error) {
// Extract domain from record name (e.g., "sub.example.com" -> try "example.com", "com")
parts := strings.Split(recordName, ".")
if config.Verbose {
fmt.Printf("Finding zone for record: %s\n", recordName)
}
for i := 0; i < len(parts); i++ {
zoneName := strings.Join(parts[i:], ".")
req, err := http.NewRequest("GET",
fmt.Sprintf("https://api.cloudflare.com/client/v4/zones?name=%s", zoneName), nil)
if err != nil {
continue
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
continue
}
var cfResp CloudFlareResponse
err = json.NewDecoder(resp.Body).Decode(&cfResp)
resp.Body.Close()
if err != nil || !cfResp.Success {
continue
}
var zones []Zone
if err := json.Unmarshal(cfResp.Result, &zones); err != nil {
continue
}
if len(zones) > 0 {
return zones[0].Name, zones[0].ID, nil
}
}
return "", "", fmt.Errorf("no zone found for record %s", recordName)
}
func findDNSRecordByName(config *Config, zoneID string, recordName string) (*DNSRecord, error) {
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records?type=A&name=%s",
zoneID, recordName)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return nil, err
}
if !cfResp.Success {
return nil, fmt.Errorf("API error: %v", cfResp.Errors)
}
var records []DNSRecord
if err := json.Unmarshal(cfResp.Result, &records); err != nil {
return nil, err
}
if len(records) == 0 {
return nil, nil // Record doesn't exist
}
return &records[0], nil
}
func updateDNSRecordByName(config *Config, zoneID, recordID, recordName, ip string) error {
data := map[string]interface{}{
"type": "A",
"name": recordName,
"content": ip,
"ttl": config.TTL,
}
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s", zoneID, recordID)
req, err := http.NewRequest("PUT", url, bytes.NewBuffer(jsonData))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
return nil
}
func createDNSRecordByName(config *Config, zoneID, recordName, ip string) error {
data := map[string]interface{}{
"type": "A",
"name": recordName,
"content": ip,
"ttl": config.TTL,
}
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records", zoneID)
req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
return nil
}
func listAllZones(config *Config) error {
req, err := http.NewRequest("GET", "https://api.cloudflare.com/client/v4/zones", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+config.APIToken)
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var cfResp CloudFlareResponse
if err := json.NewDecoder(resp.Body).Decode(&cfResp); err != nil {
return err
}
if !cfResp.Success {
return fmt.Errorf("API error: %v", cfResp.Errors)
}
var zones []Zone
if err := json.Unmarshal(cfResp.Result, &zones); err != nil {
return err
}
fmt.Printf("Found %d accessible zones:\n", len(zones))
for _, zone := range zones {
fmt.Printf(" - %s (ID: %s)\n", zone.Name, zone.ID)
}
if len(zones) == 0 {
fmt.Println("No zones found. Make sure your API token has Zone:Read permissions.")
}
return nil
}
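// sendTelegramNotification is a legacy single-record sender; main only calls
// sendBatchTelegramNotification, and the "test-record" literal below is a
// leftover placeholder rather than the actual record name.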
func sendTelegramNotification(config *Config, record *DNSRecord, oldIP, newIP string, isDryRun bool) {
// Skip if Telegram is not configured
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return
}
var message string
dryRunPrefix := ""
if isDryRun {
dryRunPrefix = "🧪 DRY RUN - "
}
if record == nil {
message = fmt.Sprintf("%s🆕 DNS Record Created\n\n"+
"Record: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, "test-record", newIP, config.TTL)
} else {
message = fmt.Sprintf("%s🔄 IP Address Changed\n\n"+
"Record: %s\n"+
"Old IP: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, "test-record", oldIP, newIP, config.TTL)
}
// Prepare Telegram API request
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to marshal Telegram payload: %v\n", err)
}
return
}
// Send notification
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
if config.Verbose {
fmt.Printf("Failed to create Telegram request: %v\n", err)
}
return
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to send Telegram notification: %v\n", err)
}
return
}
defer resp.Body.Close()
if resp.StatusCode == 200 {
if config.Verbose {
fmt.Println("Telegram notification sent successfully")
}
} else {
if config.Verbose {
body, _ := io.ReadAll(resp.Body)
fmt.Printf("Telegram notification failed (HTTP %d): %s\n", resp.StatusCode, string(body))
}
}
}
func testTelegramNotification(config *Config) error {
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return fmt.Errorf("Telegram not configured. Set TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables")
}
fmt.Println("Testing Telegram notification...")
// Send a test message
message := "🧪 Dynamic DNS Test\n\n" +
"This is a test notification from your CloudFlare Dynamic DNS tool.\n\n" +
"✅ Telegram integration is working correctly!"
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
return fmt.Errorf("failed to marshal payload: %v", err)
}
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
return fmt.Errorf("failed to create request: %v", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
return fmt.Errorf("failed to send request: %v", err)
}
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
if resp.StatusCode == 200 {
fmt.Println("✅ Test notification sent successfully!")
if config.Verbose {
fmt.Printf("Response: %s\n", string(body))
}
return nil
} else {
return fmt.Errorf("failed to send notification (HTTP %d): %s", resp.StatusCode, string(body))
}
}
func sendBatchTelegramNotification(config *Config, notifications []NotificationInfo, isDryRun bool) {
// Skip if Telegram is not configured
if config.TelegramBotToken == "" || config.TelegramChatID == "" {
return
}
if len(notifications) == 0 {
return
}
var message string
dryRunPrefix := ""
if isDryRun {
dryRunPrefix = "🧪 DRY RUN - "
}
if len(notifications) == 1 {
// Single record notification
notif := notifications[0]
if notif.IsNew {
message = fmt.Sprintf("%s🆕 DNS Record Created\n\n"+
"Record: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, notif.RecordName, notif.NewIP, config.TTL)
} else if notif.OldIP == notif.NewIP {
message = fmt.Sprintf("%s🔄 DNS Record Force Updated\n\n"+
"Record: %s\n"+
"IP: %s (unchanged)\n"+
"TTL: %d seconds\n"+
"Note: Forced update requested",
dryRunPrefix, notif.RecordName, notif.NewIP, config.TTL)
} else {
message = fmt.Sprintf("%s🔄 IP Address Changed\n\n"+
"Record: %s\n"+
"Old IP: %s\n"+
"New IP: %s\n"+
"TTL: %d seconds",
dryRunPrefix, notif.RecordName, notif.OldIP, notif.NewIP, config.TTL)
}
} else {
// Multiple records notification
var newCount, updatedCount int
for _, notif := range notifications {
if notif.IsNew {
newCount++
} else {
updatedCount++
}
}
message = fmt.Sprintf("%s📋 Multiple DNS Records Updated\n\n", dryRunPrefix)
if newCount > 0 {
message += fmt.Sprintf("🆕 Created: %d record(s)\n", newCount)
}
if updatedCount > 0 {
message += fmt.Sprintf("🔄 Updated: %d record(s)\n", updatedCount)
}
message += fmt.Sprintf("\nNew IP: %s\nTTL: %d seconds\n\nRecords:", notifications[0].NewIP, config.TTL)
for _, notif := range notifications {
if notif.IsNew {
message += fmt.Sprintf("\n• %s (new)", notif.RecordName)
} else if notif.OldIP == notif.NewIP {
message += fmt.Sprintf("\n• %s (forced)", notif.RecordName)
} else {
message += fmt.Sprintf("\n• %s (%s → %s)", notif.RecordName, notif.OldIP, notif.NewIP)
}
}
}
// Send the notification using the same logic as single notifications
telegramURL := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", config.TelegramBotToken)
payload := map[string]interface{}{
"chat_id": config.TelegramChatID,
"text": message,
"parse_mode": "HTML",
}
jsonData, err := json.Marshal(payload)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to marshal Telegram payload: %v\n", err)
}
return
}
req, err := http.NewRequest("POST", telegramURL, bytes.NewBuffer(jsonData))
if err != nil {
if config.Verbose {
fmt.Printf("Failed to create Telegram request: %v\n", err)
}
return
}
req.Header.Set("Content-Type", "application/json")
resp, err := config.Client.Do(req)
if err != nil {
if config.Verbose {
fmt.Printf("Failed to send Telegram notification: %v\n", err)
}
return
}
defer resp.Body.Close()
if resp.StatusCode == 200 {
if config.Verbose {
fmt.Println("Telegram notification sent successfully")
}
} else {
if config.Verbose {
body, _ := io.ReadAll(resp.Body)
fmt.Printf("Telegram notification failed (HTTP %d): %s\n", resp.StatusCode, string(body))
}
}
}


@@ -0,0 +1,748 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
)
// Color constants for terminal output
const (
Red = "\033[0;31m"
Green = "\033[0;32m"
Yellow = "\033[1;33m"
Blue = "\033[0;34m"
Cyan = "\033[0;36m"
Bold = "\033[1m"
NC = "\033[0m" // No Color
)
// ProcessInfo holds information about a process using a port
type ProcessInfo struct {
PID int
ProcessName string
Protocol string
DockerInfo string
}
// DockerContainer represents a Docker container
type DockerContainer struct {
Name string
Image string
Ports []PortMapping
Network string
}
// PortMapping represents a port mapping
type PortMapping struct {
ContainerPort int
HostPort int
Protocol string
IPv6 bool
}
func main() {
if len(os.Args) < 2 {
showUsage()
os.Exit(1)
}
arg := os.Args[1]
switch arg {
case "--help", "-h":
showHelp()
case "--list", "-l":
listDockerServices()
default:
port, err := strconv.Atoi(arg)
if err != nil || port < 1 || port > 65535 {
fmt.Printf("%sError:%s Invalid port number. Must be between 1 and 65535.\n", Red, NC)
os.Exit(1)
}
checkPort(port)
}
}
func showUsage() {
fmt.Printf("%sUsage:%s inuse <port_number>\n", Red, NC)
fmt.Printf("%s inuse --list%s\n", Yellow, NC)
fmt.Printf("%s inuse --help%s\n", Yellow, NC)
fmt.Printf("%sExample:%s inuse 80\n", Yellow, NC)
fmt.Printf("%s inuse --list%s\n", Yellow, NC)
}
func showHelp() {
fmt.Printf("%s%sinuse - Check if a port is in use%s\n\n", Cyan, Bold, NC)
fmt.Printf("%sUSAGE:%s\n", Bold, NC)
fmt.Printf(" inuse <port_number> Check if a specific port is in use\n")
fmt.Printf(" inuse --list, -l List all Docker services with listening ports\n")
fmt.Printf(" inuse --help, -h Show this help message\n\n")
fmt.Printf("%sEXAMPLES:%s\n", Bold, NC)
fmt.Printf(" %sinuse 80%s Check if port 80 is in use\n", Green, NC)
fmt.Printf(" %sinuse 3000%s Check if port 3000 is in use\n", Green, NC)
fmt.Printf(" %sinuse --list%s Show all Docker services with ports\n\n", Green, NC)
fmt.Printf("%sDESCRIPTION:%s\n", Bold, NC)
fmt.Printf(" The inuse function checks if a specific port is in use and identifies\n")
fmt.Printf(" the process using it. It can detect regular processes, Docker containers\n")
fmt.Printf(" with published ports, and containers using host networking.\n\n")
fmt.Printf("%sOUTPUT:%s\n", Bold, NC)
fmt.Printf(" %s✓%s Port is in use - shows process name, PID, and Docker info if applicable\n", Green, NC)
fmt.Printf(" %s✗%s Port is free\n", Red, NC)
fmt.Printf(" %s⚠%s Port is in use but process cannot be identified\n", Yellow, NC)
}
func listDockerServices() {
if !isDockerAvailable() {
fmt.Printf("%sError:%s Docker is not available\n", Red, NC)
os.Exit(1)
}
fmt.Printf("%s%sDocker Services with Listening Ports:%s\n\n", Cyan, Bold, NC)
containers := getRunningContainers()
if len(containers) == 0 {
fmt.Printf("%sNo running Docker containers found%s\n", Yellow, NC)
return
}
foundServices := false
for _, container := range containers {
if len(container.Ports) > 0 {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s📦 %s%s%s %s(%s)%s\n", Green, Bold, container.Name, NC, Cyan, cleanImage, NC)
for _, port := range container.Ports {
ipv6Marker := ""
if port.IPv6 {
ipv6Marker = " [IPv6]"
}
fmt.Printf("%s ├─ Port %s%d%s%s → %d (%s)%s%s\n",
Cyan, Bold, port.HostPort, NC, Cyan, port.ContainerPort, port.Protocol, ipv6Marker, NC)
}
fmt.Println()
foundServices = true
}
}
// Check for host networking containers
hostContainers := getHostNetworkingContainers()
if len(hostContainers) > 0 {
fmt.Printf("%s%sHost Networking Containers:%s\n", Yellow, Bold, NC)
for _, container := range hostContainers {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s🌐 %s%s%s %s(%s)%s %s- uses host networking%s\n",
Yellow, Bold, container.Name, NC, Cyan, cleanImage, NC, Yellow, NC)
}
fmt.Println()
foundServices = true
}
if !foundServices {
fmt.Printf("%sNo Docker services with exposed ports found%s\n", Yellow, NC)
}
}
func checkPort(port int) {
// Check if port is in use first
if !isPortInUse(port) {
fmt.Printf("%s✗ Port %d is FREE%s\n", Red, port, NC)
os.Exit(1)
}
// Port is in use, now find what's using it
process := findProcessUsingPort(port)
if process != nil {
dockerInfo := ""
if process.DockerInfo != "" {
dockerInfo = " " + process.DockerInfo
}
fmt.Printf("%s✓ Port %d (%s) in use by %s%s%s %sas PID %s%d%s%s\n",
Green, port, process.Protocol, Bold, process.ProcessName, NC, Green, Bold, process.PID, NC, dockerInfo)
return
}
// Check if it's a Docker container
containerInfo := findDockerContainerUsingPort(port)
if containerInfo != "" {
fmt.Printf("%s✓ Port %d in use by Docker container %s\n", Green, port, containerInfo)
return
}
// If we still haven't found the process, check for host networking containers more thoroughly
hostNetworkProcess := findHostNetworkingProcess(port)
if hostNetworkProcess != "" {
fmt.Printf("%s✓ Port %d likely in use by %s\n", Green, port, hostNetworkProcess)
return
}
// If we still haven't found the process
fmt.Printf("%s⚠ Port %d is in use but unable to identify the process%s\n", Yellow, port, NC)
if isDockerAvailable() {
hostContainers := getHostNetworkingContainers()
if len(hostContainers) > 0 {
fmt.Printf("%s Note: Found Docker containers using host networking:%s\n", Cyan, NC)
for _, container := range hostContainers {
cleanImage := cleanImageName(container.Image)
fmt.Printf("%s - %s (%s)%s\n", Cyan, container.Name, cleanImage, NC)
}
fmt.Printf("%s These containers share the host's network, so one of them might be using this port%s\n", Cyan, NC)
} else {
fmt.Printf("%s This might be due to insufficient permissions or the process being in a different namespace%s\n", Cyan, NC)
}
} else {
fmt.Printf("%s This might be due to insufficient permissions or the process being in a different namespace%s\n", Cyan, NC)
}
}
func isPortInUse(port int) bool {
// Try ss first
if isCommandAvailable("ss") {
cmd := exec.Command("ss", "-tulpn")
output, err := cmd.Output()
if err == nil {
portPattern := fmt.Sprintf(":%d ", port)
return strings.Contains(string(output), portPattern)
}
}
// Try netstat as fallback
if isCommandAvailable("netstat") {
cmd := exec.Command("netstat", "-tulpn")
output, err := cmd.Output()
if err == nil {
portPattern := fmt.Sprintf(":%d ", port)
return strings.Contains(string(output), portPattern)
}
}
return false
}
func findProcessUsingPort(port int) *ProcessInfo {
// Method 1: Try netstat
if process := tryNetstat(port); process != nil {
return process
}
// Method 2: Try ss
if process := trySS(port); process != nil {
return process
}
// Method 3: Try lsof
if process := tryLsof(port); process != nil {
return process
}
// Method 4: Try fuser
if process := tryFuser(port); process != nil {
return process
}
return nil
}
func tryNetstat(port int) *ProcessInfo {
if !isCommandAvailable("netstat") {
return nil
}
cmd := exec.Command("netstat", "-tulpn")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "netstat", "-tulpn")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
portPattern := fmt.Sprintf(":%d ", port)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, portPattern) {
fields := strings.Fields(line)
if len(fields) >= 7 {
pidProcess := fields[6]
parts := strings.Split(pidProcess, "/")
if len(parts) >= 2 {
if pid, err := strconv.Atoi(parts[0]); err == nil {
processName := parts[1]
protocol := fields[0]
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: protocol,
DockerInfo: dockerInfo,
}
}
}
}
}
}
return nil
}
func trySS(port int) *ProcessInfo {
if !isCommandAvailable("ss") {
return nil
}
cmd := exec.Command("ss", "-tulpn")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "ss", "-tulpn")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
portPattern := fmt.Sprintf(":%d ", port)
pidRegex := regexp.MustCompile(`pid=(\d+)`)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, portPattern) {
matches := pidRegex.FindStringSubmatch(line)
if len(matches) >= 2 {
if pid, err := strconv.Atoi(matches[1]); err == nil {
processName := getProcessName(pid)
if processName != "" {
fields := strings.Fields(line)
protocol := ""
if len(fields) > 0 {
protocol = fields[0]
}
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: protocol,
DockerInfo: dockerInfo,
}
}
}
}
}
}
return nil
}
func tryLsof(port int) *ProcessInfo {
if !isCommandAvailable("lsof") {
return nil
}
cmd := exec.Command("lsof", "-i", fmt.Sprintf(":%d", port), "-n", "-P")
output, err := cmd.Output()
if err != nil {
// Try with sudo if available
if isCommandAvailable("sudo") {
cmd = exec.Command("sudo", "lsof", "-i", fmt.Sprintf(":%d", port), "-n", "-P")
output, err = cmd.Output()
if err != nil {
return nil
}
} else {
return nil
}
}
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, "LISTEN") {
fields := strings.Fields(line)
if len(fields) >= 2 {
processName := fields[0]
if pid, err := strconv.Atoi(fields[1]); err == nil {
dockerInfo := getDockerInfo(pid, processName, port)
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: "tcp",
DockerInfo: dockerInfo,
}
}
}
}
}
return nil
}
func tryFuser(port int) *ProcessInfo {
if !isCommandAvailable("fuser") {
return nil
}
cmd := exec.Command("fuser", fmt.Sprintf("%d/tcp", port))
output, err := cmd.Output()
if err != nil {
return nil
}
pids := strings.Fields(string(output))
for _, pidStr := range pids {
if pid, err := strconv.Atoi(strings.TrimSpace(pidStr)); err == nil {
processName := getProcessName(pid)
if processName != "" {
return &ProcessInfo{
PID: pid,
ProcessName: processName,
Protocol: "tcp",
DockerInfo: "",
}
}
}
}
return nil
}
func getProcessName(pid int) string {
cmd := exec.Command("ps", "-p", strconv.Itoa(pid), "-o", "comm=")
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
func getDockerInfo(pid int, processName string, port int) string {
if !isDockerAvailable() {
return ""
}
// Check if it's docker-proxy (handle truncated names like "docker-pr")
if processName == "docker-proxy" || strings.HasPrefix(processName, "docker-pr") {
containerName := getContainerByPublishedPort(port)
if containerName != "" {
image := getContainerImage(containerName)
cleanImage := cleanImageName(image)
return fmt.Sprintf("%s(Docker: %s, image: %s)%s", Cyan, containerName, cleanImage, NC)
}
return fmt.Sprintf("%s(Docker proxy)%s", Cyan, NC)
}
// Check if process is in a Docker container using cgroup
containerInfo := getContainerByPID(pid)
if containerInfo != "" {
return fmt.Sprintf("%s(Docker: %s)%s", Cyan, containerInfo, NC)
}
// Check if this process might be in a host networking container
hostContainer := checkHostNetworkingContainer(pid, processName)
if hostContainer != "" {
return fmt.Sprintf("%s(Docker host network: %s)%s", Cyan, hostContainer, NC)
}
return ""
}
func getContainerByPID(pid int) string {
cgroupPath := fmt.Sprintf("/proc/%d/cgroup", pid)
file, err := os.Open(cgroupPath)
if err != nil {
return ""
}
defer file.Close()
scanner := bufio.NewScanner(file)
containerIDRegex := regexp.MustCompile(`[a-f0-9]{64}`)
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, "docker") {
matches := containerIDRegex.FindStringSubmatch(line)
if len(matches) > 0 {
containerID := matches[0]
containerName := getContainerNameByID(containerID)
if containerName != "" {
return containerName
}
return containerID[:12]
}
}
}
return ""
}
func findDockerContainerUsingPort(port int) string {
if !isDockerAvailable() {
return ""
}
// Check for containers with published ports
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", fmt.Sprintf("publish=%d", port))
output, err := cmd.Output()
if err != nil {
return ""
}
containerName := strings.TrimSpace(string(output))
if containerName != "" {
image := getContainerImage(containerName)
cleanImage := cleanImageName(image)
return fmt.Sprintf("%s%s%s %s(published port, image: %s)%s", Bold, containerName, NC, Cyan, cleanImage, NC)
}
return ""
}
func isDockerAvailable() bool {
return isCommandAvailable("docker")
}
func isCommandAvailable(command string) bool {
_, err := exec.LookPath(command)
return err == nil
}
func getRunningContainers() []DockerContainer {
if !isDockerAvailable() {
return nil
}
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}")
output, err := cmd.Output()
if err != nil {
return nil
}
var containers []DockerContainer
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
containerName := strings.TrimSpace(scanner.Text())
if containerName != "" {
container := DockerContainer{
Name: containerName,
Image: getContainerImage(containerName),
Ports: getContainerPorts(containerName),
}
containers = append(containers, container)
}
}
return containers
}
func getHostNetworkingContainers() []DockerContainer {
if !isDockerAvailable() {
return nil
}
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", "network=host")
output, err := cmd.Output()
if err != nil {
return nil
}
var containers []DockerContainer
scanner := bufio.NewScanner(strings.NewReader(string(output)))
for scanner.Scan() {
containerName := strings.TrimSpace(scanner.Text())
if containerName != "" {
container := DockerContainer{
Name: containerName,
Image: getContainerImage(containerName),
Network: "host",
}
containers = append(containers, container)
}
}
return containers
}
func getContainerImage(containerName string) string {
cmd := exec.Command("docker", "inspect", containerName)
output, err := cmd.Output()
if err != nil {
return ""
}
var inspectData []map[string]interface{}
if err := json.Unmarshal(output, &inspectData); err != nil {
return ""
}
if len(inspectData) > 0 {
if image, ok := inspectData[0]["Config"].(map[string]interface{})["Image"].(string); ok {
return image
}
}
return ""
}
func getContainerPorts(containerName string) []PortMapping {
cmd := exec.Command("docker", "port", containerName)
output, err := cmd.Output()
if err != nil {
return nil
}
var ports []PortMapping
scanner := bufio.NewScanner(strings.NewReader(string(output)))
portRegex := regexp.MustCompile(`(\d+)/(tcp|udp) -> 0\.0\.0\.0:(\d+)`)
ipv6PortRegex := regexp.MustCompile(`(\d+)/(tcp|udp) -> \[::\]:(\d+)`)
for scanner.Scan() {
line := scanner.Text()
// Check for IPv4
if matches := portRegex.FindStringSubmatch(line); len(matches) >= 4 {
containerPort, _ := strconv.Atoi(matches[1])
protocol := matches[2]
hostPort, _ := strconv.Atoi(matches[3])
ports = append(ports, PortMapping{
ContainerPort: containerPort,
HostPort: hostPort,
Protocol: protocol,
IPv6: false,
})
}
// Check for IPv6
if matches := ipv6PortRegex.FindStringSubmatch(line); len(matches) >= 4 {
containerPort, _ := strconv.Atoi(matches[1])
protocol := matches[2]
hostPort, _ := strconv.Atoi(matches[3])
ports = append(ports, PortMapping{
ContainerPort: containerPort,
HostPort: hostPort,
Protocol: protocol,
IPv6: true,
})
}
}
return ports
}
func getContainerByPublishedPort(port int) string {
cmd := exec.Command("docker", "ps", "--format", "{{.Names}}", "--filter", fmt.Sprintf("publish=%d", port))
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
func getContainerNameByID(containerID string) string {
cmd := exec.Command("docker", "inspect", containerID)
output, err := cmd.Output()
if err != nil {
return ""
}
var inspectData []map[string]interface{}
if err := json.Unmarshal(output, &inspectData); err != nil {
return ""
}
if len(inspectData) > 0 {
if name, ok := inspectData[0]["Name"].(string); ok {
return strings.TrimPrefix(name, "/")
}
}
return ""
}
func cleanImageName(image string) string {
// Remove SHA256 hashes
shaRegex := regexp.MustCompile(`sha256:[a-f0-9]*`)
cleaned := shaRegex.ReplaceAllString(image, "[image-hash]")
// Remove registry prefixes, keep only the last part
parts := strings.Split(cleaned, "/")
if len(parts) > 0 {
return parts[len(parts)-1]
}
return cleaned
}
func findHostNetworkingProcess(port int) string {
if !isDockerAvailable() {
return ""
}
// Get all host networking containers
hostContainers := getHostNetworkingContainers()
for _, container := range hostContainers {
// Check if this container might be using the port
if isContainerUsingPort(container.Name, port) {
cleanImage := cleanImageName(container.Image)
return fmt.Sprintf("%s%s%s %s(Docker host network: %s)%s", Bold, container.Name, NC, Cyan, cleanImage, NC)
}
}
return ""
}
func isContainerUsingPort(containerName string, port int) bool {
// Try to execute netstat inside the container to see if it's listening on the port
cmd := exec.Command("docker", "exec", containerName, "sh", "-c",
fmt.Sprintf("netstat -tlnp 2>/dev/null | grep ':%d ' || ss -tlnp 2>/dev/null | grep ':%d '", port, port))
output, err := cmd.Output()
if err != nil {
return false
}
return len(output) > 0
}
func checkHostNetworkingContainer(pid int, processName string) string {
if !isDockerAvailable() {
return ""
}
// Get all host networking containers and check if any match this process
hostContainers := getHostNetworkingContainers()
for _, container := range hostContainers {
// Try to find this process inside the container
cmd := exec.Command("docker", "exec", container.Name, "sh", "-c",
fmt.Sprintf("ps -o pid,comm | grep '%s' | grep -q '%d\\|%s'", processName, pid, processName))
err := cmd.Run()
if err == nil {
cleanImage := cleanImageName(container.Image)
return fmt.Sprintf("%s (%s)", container.Name, cleanImage)
}
}
return ""
}

ansible/tasks/global/utils/llm Executable file

@@ -0,0 +1,298 @@
#!/bin/bash
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Configuration
KOBOLD_PATH="/mnt/data/ai/llm/koboldcpp-linux-x64"
KOBOLD_MODEL="/mnt/data/ai/llm/Mistral-Small-24B-Instruct-2501-Q4_K_M.gguf" # Default model
SILLYTAVERN_SCREEN="sillytavern"
KOBOLD_SCREEN="koboldcpp"
# Function to check if a screen session exists
check_screen() {
screen -ls | grep -q "\.${1}\s"
}
# Function to list available models
list_models() {
echo -e "${BLUE}Available models:${NC}"
ls -1 /mnt/data/ai/llm/*.gguf | nl -w2 -s'. '
}
# Function to select a model
select_model() {
list_models
echo
read -p "Select model number (or press Enter for default): " model_num
if [[ -z "$model_num" ]]; then
echo -e "${YELLOW}Using default model: $(basename "$KOBOLD_MODEL")${NC}"
else
selected_model=$(ls -1 /mnt/data/ai/llm/*.gguf | sed -n "${model_num}p")
if [[ -n "$selected_model" ]]; then
KOBOLD_MODEL="$selected_model"
echo -e "${GREEN}Selected model: $(basename "$KOBOLD_MODEL")${NC}"
else
echo -e "${RED}Invalid selection. Using default model.${NC}"
fi
fi
}
# Function to start SillyTavern
start_sillytavern() {
echo -e "${YELLOW}Starting SillyTavern in screen session '${SILLYTAVERN_SCREEN}'...${NC}"
screen -dmS "$SILLYTAVERN_SCREEN" bash -c "sillytavern --listen 0.0.0.0"
sleep 2
if check_screen "$SILLYTAVERN_SCREEN"; then
echo -e "${GREEN}✓ SillyTavern started successfully!${NC}"
echo -e "${BLUE} Access at: http://0.0.0.0:8000${NC}"
else
echo -e "${RED}✗ Failed to start SillyTavern${NC}"
fi
}
# Function to start KoboldCPP
start_koboldcpp() {
select_model
echo -e "${YELLOW}Starting KoboldCPP in screen session '${KOBOLD_SCREEN}'...${NC}"
screen -dmS "$KOBOLD_SCREEN" bash -c "cd /mnt/data/ai/llm && ./koboldcpp-linux-x64 --model '$KOBOLD_MODEL' --host 0.0.0.0 --port 5001 --contextsize 8192 --gpulayers 999"
sleep 2
if check_screen "$KOBOLD_SCREEN"; then
echo -e "${GREEN}✓ KoboldCPP started successfully!${NC}"
echo -e "${BLUE} Model: $(basename "$KOBOLD_MODEL")${NC}"
echo -e "${BLUE} Access at: http://0.0.0.0:5001${NC}"
else
echo -e "${RED}✗ Failed to start KoboldCPP${NC}"
fi
}
# Function to stop a service
stop_service() {
local service=$1
local screen_name=$2
echo -e "${YELLOW}Stopping ${service}...${NC}"
screen -S "$screen_name" -X quit
sleep 1
if ! check_screen "$screen_name"; then
echo -e "${GREEN}✓ ${service} stopped successfully${NC}"
else
echo -e "${RED}✗ Failed to stop ${service}${NC}"
fi
}
# Function to show service status
show_status() {
echo -e "${CYAN}╔═══════════════════════════════════════╗${NC}"
echo -e "${CYAN}║ Service Status Overview ║${NC}"
echo -e "${CYAN}╚═══════════════════════════════════════╝${NC}"
echo
local st_running=false
local kc_running=false
# Check SillyTavern
if check_screen "$SILLYTAVERN_SCREEN"; then
st_running=true
echo -e " ${GREEN}●${NC} SillyTavern: ${GREEN}Running${NC} (screen: ${SILLYTAVERN_SCREEN})"
echo -e " ${BLUE}→ http://0.0.0.0:8000${NC}"
else
echo -e " ${RED}●${NC} SillyTavern: ${RED}Not running${NC}"
fi
echo
# Check KoboldCPP
if check_screen "$KOBOLD_SCREEN"; then
kc_running=true
echo -e " ${GREEN}●${NC} KoboldCPP: ${GREEN}Running${NC} (screen: ${KOBOLD_SCREEN})"
echo -e " ${BLUE}→ http://0.0.0.0:5001${NC}"
else
echo -e " ${RED}●${NC} KoboldCPP: ${RED}Not running${NC}"
fi
echo
}
# Function to handle service management
manage_services() {
local st_running=$(check_screen "$SILLYTAVERN_SCREEN" && echo "true" || echo "false")
local kc_running=$(check_screen "$KOBOLD_SCREEN" && echo "true" || echo "false")
# If both services are running
if [[ "$st_running" == "true" ]] && [[ "$kc_running" == "true" ]]; then
echo -e "${GREEN}Both services are running.${NC}"
echo
echo "1) Attach to SillyTavern"
echo "2) Attach to KoboldCPP"
echo "3) Restart SillyTavern"
echo "4) Restart KoboldCPP"
echo "5) Stop all services"
echo "6) Exit"
read -p "Your choice (1-6): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to SillyTavern... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$SILLYTAVERN_SCREEN"
;;
2)
echo -e "${BLUE}Attaching to KoboldCPP... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$KOBOLD_SCREEN"
;;
3)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
echo
start_sillytavern
;;
4)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
echo
start_koboldcpp
;;
5)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
;;
6)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If only SillyTavern is running
elif [[ "$st_running" == "true" ]]; then
echo -e "${YELLOW}Only SillyTavern is running.${NC}"
echo
echo "1) Attach to SillyTavern"
echo "2) Start KoboldCPP"
echo "3) Restart SillyTavern"
echo "4) Stop SillyTavern"
echo "5) Exit"
read -p "Your choice (1-5): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to SillyTavern... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$SILLYTAVERN_SCREEN"
;;
2)
start_koboldcpp
;;
3)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
echo
start_sillytavern
;;
4)
stop_service "SillyTavern" "$SILLYTAVERN_SCREEN"
;;
5)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If only KoboldCPP is running
elif [[ "$kc_running" == "true" ]]; then
echo -e "${YELLOW}Only KoboldCPP is running.${NC}"
echo
echo "1) Attach to KoboldCPP"
echo "2) Start SillyTavern"
echo "3) Restart KoboldCPP"
echo "4) Stop KoboldCPP"
echo "5) Exit"
read -p "Your choice (1-5): " choice
case $choice in
1)
echo -e "${BLUE}Attaching to KoboldCPP... (Use Ctrl+A then D to detach)${NC}"
sleep 1
screen -r "$KOBOLD_SCREEN"
;;
2)
start_sillytavern
;;
3)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
echo
start_koboldcpp
;;
4)
stop_service "KoboldCPP" "$KOBOLD_SCREEN"
;;
5)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
# If no services are running
else
echo -e "${YELLOW}No services are running.${NC}"
echo
echo "1) Start both services"
echo "2) Start SillyTavern only"
echo "3) Start KoboldCPP only"
echo "4) Exit"
read -p "Your choice (1-4): " choice
case $choice in
1)
start_sillytavern
echo
start_koboldcpp
;;
2)
start_sillytavern
;;
3)
start_koboldcpp
;;
4)
exit 0
;;
*)
echo -e "${RED}Invalid choice${NC}"
;;
esac
fi
}
# Main script
echo -e "${BLUE}╔═══════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ LLM Services Manager ║${NC}"
echo -e "${BLUE}╚═══════════════════════════════════════╝${NC}"
echo
# Show status
show_status
# Show separator and manage services
echo -e "${CYAN}═══════════════════════════════════════${NC}"
manage_services
echo
echo -e "${BLUE}Quick reference:${NC}"
echo "• List sessions: screen -ls"
echo "• Attach: screen -r <name>"
echo "• Detach: Ctrl+A then D"

@@ -0,0 +1,119 @@
# SSH Utility - Smart SSH Connection Manager
A transparent SSH wrapper that automatically chooses between local and remote connections based on network connectivity.
## What it does
This utility acts as a drop-in replacement for the `ssh` command that intelligently routes connections:
- When you type `ssh desktop`, it automatically checks if your local network is available
- If local: connects via `desktop-local` (faster local connection)
- If remote: connects via `desktop` (Tailscale/VPN connection)
- All other SSH usage passes through unchanged (`ssh --help`, `ssh user@host`, etc.)
## Installation
The utility is automatically compiled and installed to `~/.local/bin/ssh` via Ansible when you run your dotfiles setup.
## Configuration
1. Copy the example config:
```bash
mkdir -p ~/.config/ssh-util
cp ~/.dotfiles/config/ssh-util/config.yaml ~/.config/ssh-util/
```
2. Edit `~/.config/ssh-util/config.yaml` to match your setup:
```yaml
smart_aliases:
  desktop:
    primary: "desktop-local"     # SSH config entry for local connection
    fallback: "desktop"          # SSH config entry for remote connection
    check_host: "192.168.86.22"  # IP to ping for connectivity test
    timeout: "2s"                # Ping timeout
```
3. Ensure your `~/.ssh/config` contains the referenced host entries:
```
Host desktop
HostName mennos-desktop
User menno
Port 400
ForwardAgent yes
AddKeysToAgent yes
Host desktop-local
HostName 192.168.86.22
User menno
Port 400
ForwardAgent yes
AddKeysToAgent yes
```
## Usage
Once configured, simply use SSH as normal:
```bash
# Smart connection - automatically chooses local vs remote
ssh desktop
# All other SSH usage works exactly the same
ssh --help
ssh --version
ssh user@example.com
ssh -L 8080:localhost:80 server
```
## How it works
1. When you run `ssh <alias>`, the utility checks if `<alias>` is defined in the smart_aliases config
2. If yes, it pings the `check_host` IP address
3. If ping succeeds: executes `ssh <primary>` instead
4. If ping fails: executes `ssh <fallback>` instead
5. If not a smart alias: passes through to real SSH unchanged
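The core of this flow is small enough to sketch in Go (the language the utility is written in). The hardcoded alias map and the `pingOnce` helper below are illustrative assumptions, not the actual implementation, which loads the YAML config:
```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// smartAlias mirrors one entry of the smart_aliases config (illustrative type).
type smartAlias struct {
	Primary   string
	Fallback  string
	CheckHost string
	Timeout   time.Duration
}

// pingOnce sends a single ICMP probe and reports success (assumes GNU ping's -c/-W flags).
func pingOnce(host string, timeout time.Duration) bool {
	secs := int(timeout.Seconds())
	if secs < 1 {
		secs = 1
	}
	return exec.Command("ping", "-c", "1", "-W", fmt.Sprint(secs), host).Run() == nil
}

func main() {
	// In the real utility this map comes from ~/.config/ssh-util/config.yaml.
	aliases := map[string]smartAlias{
		"desktop": {Primary: "desktop-local", Fallback: "desktop", CheckHost: "192.168.86.22", Timeout: 2 * time.Second},
	}
	args := os.Args[1:]
	// Only the bare `ssh <alias>` form is rewritten; everything else passes through.
	if len(args) == 1 {
		if a, ok := aliases[args[0]]; ok {
			if pingOnce(a.CheckHost, a.Timeout) {
				args[0] = a.Primary
			} else {
				args[0] = a.Fallback
			}
		}
	}
	// Replace this process with the real SSH binary so the wrapper adds no runtime overhead.
	if err := syscall.Exec("/usr/bin/ssh", append([]string{"ssh"}, args...), os.Environ()); err != nil {
		fmt.Fprintln(os.Stderr, "exec /usr/bin/ssh:", err)
		os.Exit(1)
	}
}
```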
## Troubleshooting
### SSH utility not found
Make sure `~/.local/bin` is in your PATH:
```bash
echo $PATH | grep -o ~/.local/bin
```
### Config not loading
Check the config file exists and has correct syntax:
```bash
ls -la ~/.config/ssh-util/config.yaml
cat ~/.config/ssh-util/config.yaml
```
### Connectivity test failing
Test manually:
```bash
ping -c 1 -W 2 192.168.86.22
```
### Falls back to real SSH
If there are any errors loading config or parsing, the utility safely falls back to executing the real SSH binary at `/usr/bin/ssh`.
## Adding more aliases
To add more smart aliases, just extend the config:
```yaml
smart_aliases:
  desktop:
    primary: "desktop-local"
    fallback: "desktop"
    check_host: "192.168.86.22"
    timeout: "2s"
  server:
    primary: "server-local"
    fallback: "server-remote"
    check_host: "192.168.1.100"
    timeout: "1s"
```
Remember to create the corresponding entries in your `~/.ssh/config`.
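For the `server` alias in this example, the matching entries follow the same pattern (hostname and IP here are placeholders):
```
Host server-remote
    HostName my-server.example.com
    User menno

Host server-local
    HostName 192.168.1.100
    User menno
```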

@@ -0,0 +1,102 @@
# SSH Utility Configuration
# This file defines smart aliases that automatically choose between local and remote connections
# Logging configuration
logging:
enabled: true
# Levels: debug, info, warn, error
level: "info"
# Formats: console, json
format: "console"
smart_aliases:
desktop:
primary: "desktop-local"
fallback: "desktop"
check_host: "192.168.1.250"
timeout: "2s"
server:
primary: "server-local"
fallback: "server"
check_host: "192.168.1.254"
timeout: "2s"
laptop:
primary: "laptop-local"
fallback: "laptop"
check_host: "192.168.1.253"
timeout: "2s"
rtlsdr:
primary: "rtlsdr-local"
fallback: "rtlsdr"
check_host: "192.168.1.252"
timeout: "2s"
# Background SSH Tunnel Definitions
tunnels:
# Example: Desktop database tunnel
desktop-database:
type: local
local_port: 5432
remote_host: database
remote_port: 5432
ssh_host: desktop # Uses smart alias logic (desktop-local/desktop)
# Example: Development API tunnel
dev-api:
type: local
local_port: 8080
remote_host: api
remote_port: 80
ssh_host: dev-server
# Example: SOCKS proxy tunnel
socks-proxy:
type: dynamic
local_port: 1080
ssh_host: bastion
# Modem web interface tunnel
modem-web:
type: local
local_port: 8443
remote_host: 192.168.1.1
remote_port: 443
ssh_host: desktop
# Tunnel Management Commands:
# ssh --tunnel --open desktop-database (or ssh -TO desktop-database)
# ssh --tunnel --close desktop-database (or ssh -TC desktop-database)
# ssh --tunnel --list (or ssh -TL)
#
# Ad-hoc tunnels (not in config):
# ssh -TO temp-api --local 8080:api:80 --via server
# Logging options:
# - enabled: true/false - whether to show any logs
# - level: debug (verbose), info (normal), warn (warnings only), error (errors only)
# - format: console (human readable), json (structured)
# Logs are written to stderr so they don't interfere with SSH output
# How it works:
# 1. When you run: ssh desktop
# 2. The utility pings the configured check_host (192.168.1.250 for "desktop" above) with a 2s timeout
# 3. If ping succeeds: runs "ssh desktop-local" instead
# 4. If ping fails: runs "ssh desktop" instead
# 5. All other SSH usage (flags, user@host, etc.) passes through unchanged
# Your SSH config should contain the actual host definitions:
# Host desktop
# HostName mennos-desktop
# User menno
# Port 400
# ForwardAgent yes
# AddKeysToAgent yes
#
# Host desktop-local
# HostName 192.168.1.250
# User menno
# Port 400
# ForwardAgent yes
# AddKeysToAgent yes

@@ -0,0 +1,20 @@
module ssh-util
go 1.21
require (
github.com/jedib0t/go-pretty/v6 v6.4.9
github.com/rs/zerolog v1.31.0
github.com/spf13/cobra v1.8.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
golang.org/x/sys v0.12.0 // indirect
)

@@ -0,0 +1,46 @@
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jedib0t/go-pretty/v6 v6.4.9 h1:vZ6bjGg2eBSrJn365qlxGcaWu09Id+LHtrfDWlB2Usc=
github.com/jedib0t/go-pretty/v6 v6.4.9/go.mod h1:Ndk3ase2CkQbXLLNf5QDHoYb6J9WtVfmHZu9n8rk2xs=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.6.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.31.0 h1:FcTR3NnLWW+NnTwwhFWiJSZr4ECLpqCm6QsEnyvbV4A=
github.com/rs/zerolog v1.31.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.4 h1:wZRexSlwd7ZXfKINDLsO4r7WBt3gTKONc6K/VesHvHM=
github.com/stretchr/testify v1.7.4/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

File diff suppressed because it is too large

@@ -0,0 +1,51 @@
---
- name: WSL2 1Password SSH Agent Bridge
block:
- name: Ensure required packages are installed for 1Password sock bridge
ansible.builtin.package:
name:
# 1Password (WSL2 required package for sock bridge)
- socat
state: present
become: true
- name: Ensure .1password directory exists in home
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/.1password"
state: directory
mode: '0700'
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
- name: Create .agent-bridge.sh in home directory
ansible.builtin.copy:
dest: "{{ ansible_env.HOME }}/.agent-bridge.sh"
mode: '0755'
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
content: |
# Code extracted from https://stuartleeks.com/posts/wsl-ssh-key-forward-to-windows/
# (IMPORTANT) Create the folder on your root for the `agent.sock` (How mentioned by @rfay and @Lochnair in the comments)
mkdir -p ~/.1password
# Configure ssh forwarding
export SSH_AUTH_SOCK=$HOME/.1password/agent.sock
# need `ps -ww` to get non-truncated command for matching
# use square brackets to generate a regex match for the process we want but that doesn't match the grep command running it!
ALREADY_RUNNING=$(ps -auxww | grep -q "[n]piperelay.exe -ei -s //./pipe/openssh-ssh-agent"; echo $?)
if [[ $ALREADY_RUNNING != "0" ]]; then
if [[ -S $SSH_AUTH_SOCK ]]; then
# not expecting the socket to exist as the forwarding command isn't running (http://www.tldp.org/LDP/abs/html/fto.html)
echo "removing previous socket..."
rm $SSH_AUTH_SOCK
fi
echo "Starting SSH-Agent relay..."
# setsid to force new session to keep running
# set socat to listen on $SSH_AUTH_SOCK and forward to npiperelay which then forwards to openssh-ssh-agent on windows
(setsid socat UNIX-LISTEN:$SSH_AUTH_SOCK,fork EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork &) >/dev/null 2>&1
fi
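# Note: source this script from your shell rc (e.g. ~/.bashrc) rather than executing it, so the SSH_AUTH_SOCK export takes effect in your interactive shell.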
tags:
- wsl
- wsl2

@@ -0,0 +1,93 @@
---
- name: Borg Backup Installation and Configuration
block:
- name: Check if Borg is already installed
ansible.builtin.command: which borg
register: borg_check
ignore_errors: true
changed_when: false
- name: Ensure Borg is installed
ansible.builtin.package:
name: borg
state: present
become: true
when: borg_check.rc != 0
- name: Set Borg backup facts
ansible.builtin.set_fact:
borg_passphrase: "{{ lookup('community.general.onepassword', 'Borg Backup', vault='Dotfiles', field='password') }}"
borg_config_dir: "{{ ansible_env.HOME }}/.config/borg"
borg_backup_dir: "/mnt/services"
borg_repo_dir: "/mnt/object_storage/borg-repo"
- name: Create Borg directories
ansible.builtin.file:
path: "{{ borg_dir }}"
state: directory
mode: "0755"
loop:
- "{{ borg_config_dir }}"
- "/mnt/object_storage"
loop_control:
loop_var: borg_dir
become: true
- name: Check if Borg repository exists
ansible.builtin.stat:
path: "{{ borg_repo_dir }}/config"
register: borg_repo_check
become: true
- name: Initialize Borg repository
ansible.builtin.command: >
borg init --encryption=repokey {{ borg_repo_dir }}
environment:
BORG_PASSPHRASE: "{{ borg_passphrase }}"
become: true
when: not borg_repo_check.stat.exists
- name: Create Borg backup script
ansible.builtin.template:
src: templates/borg-backup.sh.j2
dest: "{{ borg_config_dir }}/backup.sh"
mode: "0755"
become: true
- name: Create Borg systemd service
ansible.builtin.template:
src: templates/borg-backup.service.j2
dest: /etc/systemd/system/borg-backup.service
mode: "0644"
become: true
register: borg_service
- name: Create Borg systemd timer
ansible.builtin.template:
src: templates/borg-backup.timer.j2
dest: /etc/systemd/system/borg-backup.timer
mode: "0644"
become: true
register: borg_timer
- name: Reload systemd daemon
ansible.builtin.systemd:
daemon_reload: true
become: true
when: borg_service.changed or borg_timer.changed
- name: Enable and start Borg backup timer
ansible.builtin.systemd:
name: borg-backup.timer
enabled: true
state: started
become: true
- name: Display Borg backup status
ansible.builtin.debug:
msg: "Borg backup is configured and will run daily at 2 AM. Logs available at /var/log/borg-backup.log"
tags:
- borg-backup
- borg
- backup

@@ -0,0 +1,95 @@
---
- name: Borg Local Sync Installation and Configuration
block:
- name: Set Borg backup facts
ansible.builtin.set_fact:
borg_passphrase: "{{ lookup('community.general.onepassword', 'Borg Backup', vault='Dotfiles', field='password') }}"
borg_config_dir: "{{ ansible_env.HOME }}/.config/borg"
borg_backup_dir: "/mnt/services"
borg_repo_dir: "/mnt/object_storage/borg-repo"
- name: Create Borg local sync script
ansible.builtin.template:
src: borg-local-sync.sh.j2
dest: /usr/local/bin/borg-local-sync.sh
mode: "0755"
owner: root
group: root
become: true
tags:
- borg-local-sync
- name: Create Borg local sync systemd service
ansible.builtin.template:
src: borg-local-sync.service.j2
dest: /etc/systemd/system/borg-local-sync.service
mode: "0644"
owner: root
group: root
become: true
notify:
- reload systemd
tags:
- borg-local-sync
- name: Create Borg local sync systemd timer
ansible.builtin.template:
src: borg-local-sync.timer.j2
dest: /etc/systemd/system/borg-local-sync.timer
mode: "0644"
owner: root
group: root
become: true
notify:
- reload systemd
- restart borg-local-sync-timer
tags:
- borg-local-sync
- name: Create log file for Borg local sync
ansible.builtin.file:
path: /var/log/borg-local-sync.log
state: touch
owner: root
group: root
mode: "0644"
become: true
tags:
- borg-local-sync
- name: Enable and start Borg local sync timer
ansible.builtin.systemd:
name: borg-local-sync.timer
enabled: true
state: started
daemon_reload: true
become: true
tags:
- borg-local-sync
- name: Add logrotate configuration for Borg local sync
ansible.builtin.copy:
content: |
/var/log/borg-local-sync.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 644 root root
}
dest: /etc/logrotate.d/borg-local-sync
mode: "0644"
owner: root
group: root
become: true
tags:
- borg-local-sync
- borg
- backup
tags:
- borg-local-sync
- borg
- backup

@@ -0,0 +1,88 @@
---
- name: Dynamic DNS setup
block:
- name: Create systemd environment file for dynamic DNS
ansible.builtin.template:
src: "{{ playbook_dir }}/templates/dynamic-dns-systemd.env.j2"
dest: "/etc/dynamic-dns-systemd.env"
mode: "0600"
owner: root
group: root
become: true
- name: Create dynamic DNS wrapper script
ansible.builtin.copy:
dest: "/usr/local/bin/dynamic-dns-update.sh"
mode: "0755"
content: |
#!/bin/bash
# Run dynamic DNS update (binary compiled by utils.yml)
{{ ansible_user_dir }}/.local/bin/dynamic-dns-cf -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl,sathub.de,sathub.nl" 2>&1 | logger -t dynamic-dns
become: true
- name: Create dynamic DNS systemd timer
ansible.builtin.copy:
dest: "/etc/systemd/system/dynamic-dns.timer"
mode: "0644"
content: |
[Unit]
Description=Dynamic DNS Update Timer
Requires=dynamic-dns.service
[Timer]
OnCalendar=*:0/15
Persistent=true
[Install]
WantedBy=timers.target
become: true
register: ddns_timer
- name: Create dynamic DNS systemd service
ansible.builtin.copy:
dest: "/etc/systemd/system/dynamic-dns.service"
mode: "0644"
content: |
[Unit]
Description=Dynamic DNS Update
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/dynamic-dns-update.sh
EnvironmentFile=/etc/dynamic-dns-systemd.env
User={{ ansible_user }}
Group={{ ansible_user }}
[Install]
WantedBy=multi-user.target
become: true
register: ddns_service
- name: Reload systemd daemon
ansible.builtin.systemd:
daemon_reload: true
become: true
when: ddns_timer.changed or ddns_service.changed
- name: Enable and start dynamic DNS timer
ansible.builtin.systemd:
name: dynamic-dns.timer
enabled: true
state: started
become: true
- name: Display setup completion message
ansible.builtin.debug:
msg: |
Dynamic DNS setup complete!
- Systemd timer: sudo systemctl status dynamic-dns.timer
- Check logs: sudo journalctl -u dynamic-dns.service -f
- Manual run: sudo /usr/local/bin/dynamic-dns-update.sh
- Domains: vleeuwen.me, mvl.sh, mennovanleeuwen.nl, sathub.de, sathub.nl
when: inventory_hostname == 'mennos-server' or inventory_hostname == 'mennos-vps'
tags:
- dynamic-dns

@@ -70,7 +70,7 @@
- name: Include JuiceFS Redis tasks
ansible.builtin.include_tasks: services/redis/redis.yml
when: inventory_hostname == 'mennos-cloud-server'
when: inventory_hostname == 'mennos-server'
- name: Enable and start JuiceFS service
ansible.builtin.systemd:

@@ -0,0 +1,165 @@
---
- name: Server setup
block:
- name: Ensure openssh-server is installed on Arch-based systems
ansible.builtin.package:
name: openssh
state: present
when: ansible_pkg_mgr == 'pacman'
- name: Ensure openssh-server is installed on non-Arch systems
ansible.builtin.package:
name: openssh-server
state: present
when: ansible_pkg_mgr != 'pacman'
- name: Ensure Borg is installed on Arch-based systems
ansible.builtin.package:
name: borg
state: present
become: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure Borg is installed on Debian/Ubuntu systems
ansible.builtin.package:
name: borgbackup
state: present
become: true
when: ansible_pkg_mgr != 'pacman'
- name: Include JuiceFS tasks
ansible.builtin.include_tasks: juicefs.yml
tags:
- juicefs
- name: Include Dynamic DNS tasks
ansible.builtin.include_tasks: dynamic-dns.yml
tags:
- dynamic-dns
- name: Include Borg Backup tasks
ansible.builtin.include_tasks: borg-backup.yml
tags:
- borg-backup
- name: Include Borg Local Sync tasks
ansible.builtin.include_tasks: borg-local-sync.yml
tags:
- borg-local-sync
- name: System performance optimizations
ansible.posix.sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: true
become: true
loop:
- { name: "fs.file-max", value: "2097152" } # Max open files for the entire system
- { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have
- { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory
- { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches
- { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket
- { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input
- { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state
- { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections
- name: Include service tasks
ansible.builtin.include_tasks: "services/{{ item.name }}/{{ item.name }}.yml"
loop: "{{ services | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list if specific_service is not defined else services | selectattr('name', 'equalto', specific_service) | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list }}"
loop_control:
label: "{{ item.name }}"
tags:
- services
- always
vars:
services:
- name: dashy
enabled: true
hosts:
- mennos-server
- name: gitea
enabled: true
hosts:
- mennos-server
- name: factorio
enabled: true
hosts:
- mennos-server
- name: dozzle
enabled: true
hosts:
- mennos-server
- name: beszel
enabled: true
hosts:
- mennos-server
- name: caddy
enabled: true
hosts:
- mennos-server
- name: golink
enabled: true
hosts:
- mennos-server
- name: immich
enabled: true
hosts:
- mennos-server
- name: plex
enabled: true
hosts:
- mennos-server
- name: tautulli
enabled: true
hosts:
- mennos-server
- name: downloaders
enabled: true
hosts:
- mennos-server
- name: wireguard
enabled: true
hosts:
- mennos-server
- name: nextcloud
enabled: true
hosts:
- mennos-server
- name: cloudreve
enabled: true
hosts:
- mennos-server
- name: echoip
enabled: true
hosts:
- mennos-server
- name: arr-stack
enabled: true
hosts:
- mennos-server
- name: home-assistant
enabled: true
hosts:
- mennos-server
- name: privatebin
enabled: true
hosts:
- mennos-server
- name: unifi-network-application
enabled: true
hosts:
- mennos-server
- name: avorion
enabled: false
hosts:
- mennos-server
- name: sathub
enabled: true
hosts:
- mennos-server
- name: necesse
enabled: true
hosts:
- mennos-server

@@ -3,8 +3,8 @@
block:
- name: Set ArrStack directories
ansible.builtin.set_fact:
arr_stack_service_dir: "{{ ansible_env.HOME }}/services/arr-stack"
arr_stack_data_dir: "/mnt/object_storage/services/arr-stack"
arr_stack_service_dir: "{{ ansible_env.HOME }}/.services/arr-stack"
arr_stack_data_dir: "/mnt/services/arr-stack"
- name: Create ArrStack directory
ansible.builtin.file:
@@ -35,3 +35,4 @@
tags:
- services
- arr_stack
- arr-stack

@@ -0,0 +1,182 @@
name: arr-stack
services:
radarr:
container_name: radarr
image: lscr.io/linuxserver/radarr:latest
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
ports:
- 7878:7878
extra_hosts:
- host.docker.internal:host-gateway
volumes:
- {{ arr_stack_data_dir }}/radarr-config:/config
- /mnt/data:/mnt/data
restart: "unless-stopped"
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 2G
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
volumes:
- {{ arr_stack_data_dir }}/sonarr-config:/config
- /mnt/data:/mnt/data
ports:
- 8989:8989
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 2G
bazarr:
image: ghcr.io/hotio/bazarr:latest
container_name: bazarr
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
ports:
- 6767:6767
extra_hosts:
- host.docker.internal:host-gateway
volumes:
- {{ arr_stack_data_dir }}/bazarr-config:/config
- /mnt/data:/mnt/data
restart: unless-stopped
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 512M
prowlarr:
container_name: prowlarr
image: linuxserver/prowlarr:latest
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
volumes:
- {{ arr_stack_data_dir }}/prowlarr-config:/config
extra_hosts:
- host.docker.internal:host-gateway
ports:
- 9696:9696
restart: unless-stopped
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 512M
flaresolverr:
image: ghcr.io/flaresolverr/flaresolverr:latest
container_name: flaresolverr
environment:
- LOG_LEVEL=${LOG_LEVEL:-info}
- LOG_HTML=${LOG_HTML:-false}
- CAPTCHA_SOLVER=${CAPTCHA_SOLVER:-none}
- TZ=Europe/Amsterdam
ports:
- "8191:8191"
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 1G
overseerr:
image: sctx/overseerr:latest
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
volumes:
- {{ arr_stack_data_dir }}/overseerr-config:/app/config
ports:
- 5055:5055
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
networks:
- arr_stack_net
- caddy_network
deploy:
resources:
limits:
memory: 512M
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
- serverIP=0.0.0.0
- serverPort=8266
- webUIPort=8265
- internalNode=true
- inContainer=true
- ffmpegVersion=7
- nodeName=MyInternalNode
- auth=false
- openBrowser=true
- maxLogSizeMB=10
- cronPluginUpdate=
- NVIDIA_DRIVER_CAPABILITIES=all
- NVIDIA_VISIBLE_DEVICES=all
volumes:
- {{ arr_stack_data_dir }}/tdarr-server:/app/server
- {{ arr_stack_data_dir }}/tdarr-config:/app/configs
- {{ arr_stack_data_dir }}/tdarr-logs:/app/logs
- /mnt/data:/media
- {{ arr_stack_data_dir }}/tdarr-cache:/temp
ports:
- 8265:8265
- 8266:8266
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
runtime: nvidia
devices:
- /dev/dri:/dev/dri
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 4G
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
networks:
arr_stack_net:
caddy_network:
external: true
name: caddy_default

@@ -0,0 +1,37 @@
---
- name: Deploy Avorion service
block:
- name: Set Avorion directories
ansible.builtin.set_fact:
avorion_service_dir: "{{ ansible_env.HOME }}/.services/avorion"
avorion_data_dir: "/mnt/services/avorion"
- name: Create Avorion directory
ansible.builtin.file:
path: "{{ avorion_service_dir }}"
state: directory
mode: "0755"
- name: Create Avorion data directory
ansible.builtin.file:
path: "{{ avorion_data_dir }}"
state: directory
mode: "0755"
- name: Deploy Avorion docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ avorion_service_dir }}/docker-compose.yml"
mode: "0644"
register: avorion_compose
- name: Stop Avorion service
ansible.builtin.command: docker compose -f "{{ avorion_service_dir }}/docker-compose.yml" down --remove-orphans
when: avorion_compose.changed
- name: Start Avorion service
ansible.builtin.command: docker compose -f "{{ avorion_service_dir }}/docker-compose.yml" up -d
when: avorion_compose.changed
tags:
- services
- avorion

@@ -0,0 +1,15 @@
services:
avorion:
image: rfvgyhn/avorion:latest
volumes:
- {{ avorion_data_dir }}:/home/steam/.avorion/galaxies/avorion_galaxy
ports:
- 27000:27000
- 27000:27000/udp
- 27003:27003/udp
- 27020:27020/udp
- 27021:27021/udp
deploy:
resources:
limits:
memory: 4G

@@ -3,8 +3,8 @@
block:
- name: Set Beszel directories
ansible.builtin.set_fact:
beszel_service_dir: "{{ ansible_env.HOME }}/services/beszel"
beszel_data_dir: "/mnt/object_storage/services/beszel"
beszel_service_dir: "{{ ansible_env.HOME }}/.services/beszel"
beszel_data_dir: "/mnt/services/beszel"
- name: Create Beszel directory
ansible.builtin.file:

@@ -10,6 +10,10 @@ services:
networks:
- beszel-net
- caddy_network
deploy:
resources:
limits:
memory: 256M
beszel-agent:
image: henrygd/beszel-agent:latest
@@ -20,7 +24,11 @@ services:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
LISTEN: /beszel_socket/beszel.sock
KEY: 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2e4Eg8BrcYOVZ5MaEdrxErM/HA4Tc0ANxPQNcCwFwY'
KEY: 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKkSIQDh1vS8lG+2Uw/9dK1eOgCHVCgQfP+Bfk4XPkdn'
deploy:
resources:
limits:
memory: 128M
networks:
beszel-net:

@@ -0,0 +1,354 @@
# Global configuration for country blocking
{
servers {
protocols h1 h2 h3
}
}
# Country allow list snippet using MaxMind GeoLocation - reusable across all sites
{% if enable_country_blocking | default(false) and allowed_countries_codes | default([]) | length > 0 %}
(country_allow) {
@allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ allowed_countries_codes | join(' ') }}
}
}
}
respond @not_allowed_countries "Access denied" 403
}
{% else %}
(country_allow) {
# Country allow list disabled
}
{% endif %}
# European country allow list - allows all European countries only
{% if eu_countries_codes | default([]) | length > 0 %}
(eu_country_allow) {
@eu_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@eu_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ eu_countries_codes | join(' ') }}
}
}
}
respond @eu_not_allowed_countries "Access denied" 403
}
{% else %}
(eu_country_allow) {
# EU country allow list disabled
}
{% endif %}
# Trusted country allow list - allows US, Australia, New Zealand, and Japan
{% if trusted_countries_codes | default([]) | length > 0 %}
(trusted_country_allow) {
@trusted_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@trusted_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ trusted_countries_codes | join(' ') }}
}
}
}
respond @trusted_not_allowed_countries "Access denied" 403
}
{% else %}
(trusted_country_allow) {
# Trusted country allow list disabled
}
{% endif %}
# Sathub country allow list - combines EU and trusted countries
{% if eu_countries_codes | default([]) | length > 0 and trusted_countries_codes | default([]) | length > 0 %}
(sathub_country_allow) {
@sathub_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@sathub_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ (eu_countries_codes + trusted_countries_codes) | join(' ') }}
}
}
}
respond @sathub_not_allowed_countries "Access denied" 403
}
{% else %}
(sathub_country_allow) {
# Sathub country allow list disabled
}
{% endif %}
{% if inventory_hostname == 'mennos-server' %}
git.mvl.sh {
import country_allow
reverse_proxy gitea:3000
tls {{ caddy_email }}
}
git.vleeuwen.me {
import country_allow
redir https://git.mvl.sh{uri}
tls {{ caddy_email }}
}
df.mvl.sh {
import country_allow
redir / https://git.mvl.sh/vleeuwenmenno/dotfiles/raw/branch/master/setup.sh
tls {{ caddy_email }}
}
fsm.mvl.sh {
import country_allow
reverse_proxy factorio-server-manager:80
tls {{ caddy_email }}
}
fsm.vleeuwen.me {
import country_allow
redir https://fsm.mvl.sh{uri}
tls {{ caddy_email }}
}
beszel.mvl.sh {
import country_allow
reverse_proxy beszel:8090
tls {{ caddy_email }}
}
beszel.vleeuwen.me {
import country_allow
redir https://beszel.mvl.sh{uri}
tls {{ caddy_email }}
}
sathub.de {
import sathub_country_allow
handle {
reverse_proxy sathub-frontend:4173
}
# Enable compression
encode gzip
# Security headers
header {
X-Frame-Options "SAMEORIGIN"
X-Content-Type-Options "nosniff"
X-XSS-Protection "1; mode=block"
Referrer-Policy "strict-origin-when-cross-origin"
Strict-Transport-Security "max-age=31536000; includeSubDomains"
}
tls {{ caddy_email }}
}
api.sathub.de {
import sathub_country_allow
reverse_proxy sathub-backend:4001
tls {{ caddy_email }}
}
sathub.nl {
import sathub_country_allow
redir https://sathub.de{uri}
tls {{ caddy_email }}
}
photos.mvl.sh {
import country_allow
reverse_proxy immich:2283
tls {{ caddy_email }}
}
photos.vleeuwen.me {
import country_allow
redir https://photos.mvl.sh{uri}
tls {{ caddy_email }}
}
home.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
home.vleeuwen.me {
import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
unifi.mvl.sh {
reverse_proxy unifi-controller:8443 {
transport http {
tls_insecure_skip_verify
}
header_up Host {host}
}
tls {{ caddy_email }}
}
hotspot.mvl.sh {
reverse_proxy unifi-controller:8843 {
transport http {
tls_insecure_skip_verify
}
header_up Host {host}
}
tls {{ caddy_email }}
}
hotspot.mvl.sh:80 {
redir https://hotspot.mvl.sh{uri} permanent
}
bin.mvl.sh {
import country_allow
reverse_proxy privatebin:8080
tls {{ caddy_email }}
}
ip.mvl.sh ip.vleeuwen.me {
import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
http://ip.mvl.sh http://ip.vleeuwen.me {
import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
}
overseerr.mvl.sh {
import country_allow
reverse_proxy overseerr:5055
tls {{ caddy_email }}
}
overseerr.vleeuwen.me {
import country_allow
redir https://overseerr.mvl.sh{uri}
tls {{ caddy_email }}
}
plex.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:32400 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
plex.vleeuwen.me {
import country_allow
redir https://plex.mvl.sh{uri}
tls {{ caddy_email }}
}
tautulli.mvl.sh {
import country_allow
reverse_proxy host.docker.internal:8181 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
tautulli.vleeuwen.me {
import country_allow
redir https://tautulli.mvl.sh{uri}
tls {{ caddy_email }}
}
cloud.mvl.sh {
import country_allow
reverse_proxy cloudreve:5212 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
cloud.vleeuwen.me {
import country_allow
redir https://cloud.mvl.sh{uri}
tls {{ caddy_email }}
}
collabora.mvl.sh {
import country_allow
reverse_proxy collabora:9980 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
drive.mvl.sh drive.vleeuwen.me {
import country_allow
# CalDAV and CardDAV redirects
redir /.well-known/carddav /remote.php/dav/ 301
redir /.well-known/caldav /remote.php/dav/ 301
# Handle other .well-known requests
handle /.well-known/* {
reverse_proxy nextcloud:80 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
}
# Main reverse proxy configuration with proper headers
reverse_proxy nextcloud:80 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
# Security headers
header {
# HSTS header for enhanced security (required by Nextcloud)
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
# Additional security headers recommended for Nextcloud
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
Referrer-Policy "no-referrer"
X-XSS-Protection "1; mode=block"
X-Permitted-Cross-Domain-Policies "none"
X-Robots-Tag "noindex, nofollow"
}
tls {{ caddy_email }}
}
{% endif %}

@@ -0,0 +1,59 @@
---
- name: Deploy Caddy service
block:
- name: Set Caddy directories
ansible.builtin.set_fact:
caddy_service_dir: "{{ ansible_env.HOME }}/.services/caddy"
caddy_data_dir: "/mnt/services/caddy"
geoip_db_path: "/mnt/services/echoip"
caddy_email: "{{ lookup('community.general.onepassword', 'Caddy (Proxy)', vault='Dotfiles', field='email') }}"
- name: Create Caddy directory
ansible.builtin.file:
path: "{{ caddy_service_dir }}"
state: directory
mode: "0755"
- name: Setup country blocking
ansible.builtin.include_tasks: country-blocking.yml
- name: Copy Dockerfile for custom Caddy build
ansible.builtin.copy:
src: Dockerfile
dest: "{{ caddy_service_dir }}/Dockerfile"
mode: "0644"
register: caddy_dockerfile
- name: Create Caddy network
ansible.builtin.command: docker network create caddy_default
register: create_caddy_network
failed_when:
- create_caddy_network.rc != 0
- "'already exists' not in create_caddy_network.stderr"
changed_when: create_caddy_network.rc == 0
- name: Deploy Caddy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ caddy_service_dir }}/docker-compose.yml"
mode: "0644"
register: caddy_compose
- name: Deploy Caddy Caddyfile
ansible.builtin.template:
src: Caddyfile.j2
dest: "{{ caddy_service_dir }}/Caddyfile"
mode: "0644"
register: caddy_file
- name: Stop Caddy service
ansible.builtin.command: docker compose -f "{{ caddy_service_dir }}/docker-compose.yml" down --remove-orphans
when: caddy_compose.changed or caddy_file.changed
- name: Start Caddy service
ansible.builtin.command: docker compose -f "{{ caddy_service_dir }}/docker-compose.yml" up -d
when: caddy_compose.changed or caddy_file.changed
tags:
- caddy
- services
- reverse-proxy

@@ -21,6 +21,10 @@ services:
- "host.docker.internal:host-gateway"
networks:
- caddy_network
deploy:
resources:
limits:
memory: 512M
networks:
caddy_network:

@@ -0,0 +1,32 @@
- name: Deploy Cloudreve service
tags:
- services
- cloudreve
block:
- name: Set Cloudreve directories
ansible.builtin.set_fact:
cloudreve_service_dir: "{{ ansible_env.HOME }}/.services/cloudreve"
cloudreve_data_dir: "/mnt/services/cloudreve"
- name: Create Cloudreve directory
ansible.builtin.file:
path: "{{ cloudreve_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Cloudreve docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ cloudreve_service_dir }}/docker-compose.yml"
mode: "0644"
register: cloudreve_compose
- name: Stop Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
when: cloudreve_compose.changed
- name: Start Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" up -d
changed_when: false
when: cloudreve_compose.changed

@@ -0,0 +1,67 @@
services:
cloudreve:
image: cloudreve/cloudreve:latest
depends_on:
- postgresql
- redis
restart: always
ports:
- 5212:5212
networks:
- caddy_network
- cloudreve
environment:
- CR_CONF_Database.Type=postgres
- CR_CONF_Database.Host=postgresql
- CR_CONF_Database.User=cloudreve
- CR_CONF_Database.Name=cloudreve
- CR_CONF_Database.Port=5432
- CR_CONF_Redis.Server=redis:6379
volumes:
- {{ cloudreve_data_dir }}/data:/cloudreve/data
postgresql:
image: postgres:17
environment:
- POSTGRES_USER=cloudreve
- POSTGRES_DB=cloudreve
- POSTGRES_HOST_AUTH_METHOD=trust
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/postgres:/var/lib/postgresql/data
collabora:
image: collabora/code
restart: unless-stopped
ports:
- 9980:9980
environment:
- domain=collabora\\.mvl\\.sh
- username=admin
- password=Dt3hgIJOPr3rgh
- dictionaries=en_US
- TZ=Europe/Amsterdam
- extra_params=--o:ssl.enable=false --o:ssl.termination=true
networks:
- cloudreve
- caddy_network
deploy:
resources:
limits:
memory: 1G
redis:
image: redis:latest
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/redis:/data
networks:
cloudreve:
name: cloudreve
driver: bridge
caddy_network:
name: caddy_default
external: true

@@ -0,0 +1,308 @@
pageInfo:
title: Menno's Home
navLinks: []
sections:
- name: Selfhosted
items:
- title: Plex
icon: http://mennos-server:4000/assets/plex.svg
url: https://plex.mvl.sh
statusCheckUrl: https://plex.mvl.sh/identity
statusCheck: true
id: 0_1035_plex
- title: Tautulli
icon: http://mennos-server:4000/assets/tautulli.svg
url: https://tautulli.mvl.sh
id: 1_1035_tautulli
statusCheck: true
- title: Overseerr
icon: http://mennos-server:4000/assets/overseerr.svg
url: https://overseerr.mvl.sh
id: 2_1035_overseerr
statusCheck: true
- title: Immich
icon: http://mennos-server:4000/assets/immich.svg
url: https://photos.mvl.sh
id: 3_1035_immich
statusCheck: true
- title: Nextcloud
icon: http://mennos-server:4000/assets/nextcloud.svg
url: https://drive.mvl.sh
id: 3_1035_nxtcld
statusCheck: true
- title: ComfyUI
icon: http://mennos-server:8188/assets/favicon.ico
url: http://mennos-server:8188
statusCheckUrl: http://host.docker.internal:8188/api/system_stats
id: 3_1035_comfyui
statusCheck: true
displayData:
sortBy: default
rows: 1
cols: 2
collapsed: false
hideForGuests: false
- name: Media Management
items:
- title: Sonarr
icon: http://mennos-server:4000/assets/sonarr.svg
url: http://go/sonarr
id: 0_1533_sonarr
- title: Radarr
icon: http://mennos-server:4000/assets/radarr.svg
url: http://go/radarr
id: 1_1533_radarr
- title: Prowlarr
icon: http://mennos-server:4000/assets/prowlarr.svg
url: http://go/prowlarr
id: 2_1533_prowlarr
- title: Tdarr
icon: http://mennos-server:4000/assets/tdarr.png
url: http://go/tdarr
id: 3_1533_tdarr
- name: Kagi
items:
- title: Kagi Search
icon: favicon
url: https://kagi.com/
id: 0_380_kagisearch
- title: Kagi Translate
icon: favicon
url: https://translate.kagi.com/
id: 1_380_kagitranslate
- title: Kagi Assistant
icon: favicon
url: https://kagi.com/assistant
id: 2_380_kagiassistant
- name: News
items:
- title: Nu.nl
icon: http://mennos-server:4000/assets/nunl.svg
url: https://www.nu.nl/
id: 0_380_nu
- title: Tweakers.net
icon: favicon
url: https://www.tweakers.net/
id: 1_380_tweakers
- title: NL Times
icon: favicon
url: https://www.nltimes.nl/
id: 2_380_nl_times
- name: Downloaders
items:
- title: qBittorrent
icon: http://mennos-server:4000/assets/qbittorrent.svg
url: http://go/qbit
id: 0_1154_qbittorrent
tags:
- download
- torrent
- yarr
- title: Sabnzbd
icon: http://mennos-server:4000/assets/sabnzbd.svg
url: http://go/sabnzbd
id: 1_1154_sabnzbd
tags:
- download
- nzb
- yarr
- name: Git
items:
- title: GitHub
icon: http://mennos-server:4000/assets/github.svg
url: https://github.com/vleeuwenmenno
id: 0_292_github
tags:
- repo
- git
- hub
- title: Gitea
icon: http://mennos-server:4000/assets/gitea.svg
url: http://git.mvl.sh/vleeuwenmenno
id: 1_292_gitea
tags:
- repo
- git
- tea
- name: Server Monitoring
items:
- title: Beszel
icon: http://mennos-server:4000/assets/beszel.svg
url: http://go/beszel
tags:
- monitoring
- logs
id: 0_1725_beszel
- title: Dozzle
icon: http://mennos-server:4000/assets/dozzle.svg
url: http://go/dozzle
id: 1_1725_dozzle
tags:
- monitoring
- logs
- title: UpDown.io Status
icon: far fa-signal
url: http://go/status
id: 2_1725_updowniostatus
tags:
- monitoring
- logs
- name: Tools
items:
- title: Home Assistant
icon: http://mennos-server:4000/assets/home-assistant.svg
url: http://go/homeassistant
id: 0_529_homeassistant
- title: Tailscale
icon: http://mennos-server:4000/assets/tailscale.svg
url: http://go/tailscale
id: 1_529_tailscale
- title: GliNet KVM
icon: http://mennos-server:4000/assets/glinet.svg
url: http://go/glkvm
id: 2_529_glinetkvm
- title: Unifi Network Controller
icon: http://mennos-server:4000/assets/unifi.svg
url: http://go/unifi
id: 3_529_unifinetworkcontroller
- title: Dashboard Icons
icon: favicon
url: https://dashboardicons.com/
id: 4_529_dashboardicons
- name: Weather
items:
- title: Buienradar
icon: favicon
url: https://www.buienradar.nl/weer/Beverwijk/NL/2758998
id: 0_529_buienradar
- title: ClearOutside
icon: favicon
url: https://clearoutside.com/forecast/52.49/4.66
id: 1_529_clearoutside
- title: Windy
icon: favicon
url: https://www.windy.com/
id: 2_529_windy
- title: Meteoblue
icon: favicon
url: https://www.meteoblue.com/en/country/weather/radar/the-netherlands_the-netherlands_2750405
id: 2_529_meteoblue
- name: DiscountOffice
displayData:
sortBy: default
rows: 1
cols: 3
collapsed: false
hideForGuests: false
items:
- title: DiscountOffice.nl
icon: favicon
url: https://discountoffice.nl/
id: 0_1429_discountofficenl
tags:
- do
- discount
- work
- title: DiscountOffice.be
icon: favicon
url: https://discountoffice.be/
id: 1_1429_discountofficebe
tags:
- do
- discount
- work
- title: Admin NL
icon: favicon
url: https://discountoffice.nl/administrator
id: 2_1429_adminnl
tags:
- do
- discount
- work
- title: Admin BE
icon: favicon
url: https://discountoffice.be/administrator
id: 3_1429_adminbe
tags:
- do
- discount
- work
- title: Subsites
icon: favicon
url: https://elastomappen.nl
id: 4_1429_subsites
tags:
- do
- discount
- work
- title: Proxmox
icon: http://mennos-server:4000/assets/proxmox.svg
url: https://www.transip.nl/cp/vps/prm/350680/
id: 5_1429_proxmox
tags:
- do
- discount
- work
- title: Transip
icon: favicon
url: https://www.transip.nl/cp/vps/prm/350680/
id: 6_1429_transip
tags:
- do
- discount
- work
- title: Kibana
icon: http://mennos-server:4000/assets/kibana.svg
url: http://go/kibana
id: 7_1429_kibana
tags:
- do
- discount
- work
appConfig:
layout: auto
iconSize: large
theme: nord
startingView: default
defaultOpeningMethod: sametab
statusCheck: false
statusCheckInterval: 0
routingMode: history
enableMultiTasking: false
widgetsAlwaysUseProxy: false
webSearch:
disableWebSearch: false
searchEngine: https://kagi.com/search?q=
openingMethod: newtab
searchBangs: {}
enableFontAwesome: true
enableMaterialDesignIcons: false
hideComponents:
hideHeading: false
hideNav: true
hideSearch: false
hideSettings: true
hideFooter: false
auth:
enableGuestAccess: false
users: []
enableOidc: false
oidc:
adminRole: "false"
adminGroup: "false"
enableHeaderAuth: false
headerAuth:
userHeader: REMOTE_USER
proxyWhitelist: []
enableKeycloak: false
showSplashScreen: false
preventWriteToDisk: false
preventLocalSave: false
disableConfiguration: false
disableConfigurationForNonAdmin: false
allowConfigEdit: true
enableServiceWorker: false
disableContextMenu: false
disableUpdateChecks: false
disableSmartSort: false
enableErrorReporting: false

@@ -0,0 +1,44 @@
---
- name: Deploy Dashy service
block:
- name: Set Dashy directories
ansible.builtin.set_fact:
dashy_service_dir: "{{ ansible_env.HOME }}/.services/dashy"
dashy_data_dir: "/mnt/services/dashy"
- name: Create Dashy directory
ansible.builtin.file:
path: "{{ dashy_service_dir }}"
state: directory
mode: "0755"
- name: Create Dashy data directory
ansible.builtin.file:
path: "{{ dashy_data_dir }}"
state: directory
mode: "0755"
- name: Deploy Dashy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ dashy_service_dir }}/docker-compose.yml"
mode: "0644"
register: dashy_compose
- name: Deploy Dashy config.yml
ansible.builtin.template:
src: conf.yml.j2
dest: "{{ dashy_data_dir }}/conf.yml"
mode: "0644"
register: dashy_config
- name: Stop Dashy service
ansible.builtin.command: docker compose -f "{{ dashy_service_dir }}/docker-compose.yml" down --remove-orphans
when: dashy_compose.changed
- name: Start Dashy service
ansible.builtin.command: docker compose -f "{{ dashy_service_dir }}/docker-compose.yml" up -d
when: dashy_compose.changed
tags:
- services
- dashy

@@ -0,0 +1,21 @@
services:
dashy:
image: lissy93/dashy:latest
restart: unless-stopped
ports:
- 4000:8080
volumes:
- {{ dashy_data_dir }}:/app/user-data
networks:
- caddy_network
extra_hosts:
- host.docker.internal:host-gateway
deploy:
resources:
limits:
memory: 2G
networks:
caddy_network:
external: true
name: caddy_default

@@ -6,12 +6,11 @@ services:
cap_add:
- NET_ADMIN
networks:
- arr-stack-net
- arr_stack_net
ports:
- 6881:6881
- 6881:6881/udp
- 8085:8085 # Qbittorrent
- 7788:8080 # Sabnzbd
devices:
- /dev/net/tun:/dev/net/tun
volumes:
@@ -24,6 +23,10 @@ services:
- OPENVPN_PASSWORD={{ lookup('community.general.onepassword', 'Gluetun', vault='Dotfiles', field='OPENVPN_PASSWORD') }}
- SERVER_COUNTRIES={{ lookup('community.general.onepassword', 'Gluetun', vault='Dotfiles', field='SERVER_COUNTRIES') }}
restart: always
deploy:
resources:
limits:
memory: 512M
sabnzbd:
image: lscr.io/linuxserver/sabnzbd:latest
@@ -33,15 +36,18 @@ services:
- TZ=Europe/Amsterdam
volumes:
- {{ downloaders_data_dir }}/sabnzbd-config:/config
- {{ object_storage_dir }}:/storage
- {{ local_data_dir }}:{{ local_data_dir }}
restart: unless-stopped
network_mode: "service:gluetun"
depends_on:
gluetun:
condition: service_healthy
ports:
- 7788:8080
deploy:
resources:
limits:
memory: 1G
qbittorrent:
image: lscr.io/linuxserver/qbittorrent
network_mode: "service:gluetun"
environment:
- PUID=1000
- PGID=100
@@ -49,14 +55,17 @@ services:
- TZ=Europe/Amsterdam
volumes:
- {{ downloaders_data_dir }}/qbit-config:/config
- {{ object_storage_dir }}:/storage
restart: always
network_mode: "service:gluetun"
- {{ local_data_dir }}:{{ local_data_dir }}
depends_on:
gluetun:
condition: service_healthy
restart: always
deploy:
resources:
limits:
memory: 1G
networks:
arr-stack-net:
arr_stack_net:
external: true
name: arr-stack_arr-stack-net
name: arr_stack_net

@@ -3,9 +3,9 @@
block:
- name: Set Downloaders directories
ansible.builtin.set_fact:
object_storage_dir: "/mnt/object_storage"
downloaders_service_dir: "{{ ansible_env.HOME }}/services/downloaders"
downloaders_data_dir: "/mnt/object_storage/services/downloaders"
local_data_dir: "/mnt/data"
downloaders_service_dir: "{{ ansible_env.HOME }}/.services/downloaders"
downloaders_data_dir: "/mnt/services/downloaders"
- name: Create Downloaders directory
ansible.builtin.file:
@@ -13,6 +13,15 @@
state: directory
mode: "0755"
- name: Create Downloaders service directory
ansible.builtin.file:
path: "{{ downloaders_service_dir }}"
state: directory
mode: "0755"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
become: true
- name: Deploy Downloaders docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
@@ -20,6 +29,12 @@
mode: "0644"
register: downloaders_compose
- name: Ensure arr_stack_net Docker network exists
community.docker.docker_network:
name: arr_stack_net
driver: bridge
state: present
- name: Stop Downloaders service
ansible.builtin.command: docker compose -f "{{ downloaders_service_dir }}/docker-compose.yml" down --remove-orphans
when: downloaders_compose.changed

@@ -4,13 +4,17 @@ services:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- 8585:8080
- 8800:8080
environment:
- DOZZLE_NO_ANALYTICS=true
restart: unless-stopped
networks:
- dozzle-net
- caddy_network
deploy:
resources:
limits:
memory: 256M
networks:
dozzle-net:

@@ -3,8 +3,8 @@
block:
- name: Set Dozzle directories
ansible.builtin.set_fact:
dozzle_service_dir: "{{ ansible_env.HOME }}/services/dozzle"
dozzle_data_dir: "/mnt/object_storage/services/dozzle"
dozzle_service_dir: "{{ ansible_env.HOME }}/.services/dozzle"
dozzle_data_dir: "/mnt/services/dozzle"
- name: Create Dozzle directory
ansible.builtin.file:

@@ -16,6 +16,10 @@ services:
-a /opt/echoip/GeoLite2-ASN.mmdb
-c /opt/echoip/GeoLite2-City.mmdb
-f /opt/echoip/GeoLite2-Country.mmdb
deploy:
resources:
limits:
memory: 128M
networks:
caddy_network:

@@ -0,0 +1,169 @@
---
- name: Deploy EchoIP service
block:
- name: Set EchoIP directories
ansible.builtin.set_fact:
echoip_service_dir: "{{ ansible_env.HOME }}/.services/echoip"
echoip_data_dir: "/mnt/services/echoip"
maxmind_account_id:
"{{ lookup('community.general.onepassword', 'MaxMind',
vault='Dotfiles', field='account_id') | regex_replace('\\s+', '') }}"
maxmind_license_key:
"{{ lookup('community.general.onepassword', 'MaxMind',
vault='Dotfiles', field='license_key') | regex_replace('\\s+', '') }}"
# Requires: gather_facts: true in playbook
- name: Check last update marker file
ansible.builtin.stat:
path: "{{ echoip_data_dir }}/.last_update"
register: echoip_update_marker
- name: Determine if update is needed (older than 24h or missing)
ansible.builtin.set_fact:
update_needed: "{{ (not echoip_update_marker.stat.exists) or ((ansible_date_time.epoch | int) - (echoip_update_marker.stat.mtime | default(0) | int) > 86400) }}"
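# Illustration with made-up values: if ansible_date_time.epoch is 1730000000
# and the marker mtime is 1729900000, the difference is 100000 s (> 86400 s),
# so update_needed is true and the download block below runs.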
- name: Create EchoIP directory
ansible.builtin.file:
path: "{{ echoip_service_dir }}"
state: directory
mode: "0755"
- name: Create EchoIP data directory
ansible.builtin.file:
path: "{{ echoip_data_dir }}"
state: directory
mode: "0755"
# Only update databases if needed (max once per 24h)
- block:
# Touch the marker file BEFORE attempting download to prevent repeated attempts on failure
- name: Update last update marker file (pre-download)
ansible.builtin.file:
path: "{{ echoip_data_dir }}/.last_update"
state: touch
# Create directories for extracted databases
- name: Create directory for ASN database extraction
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-ASN"
state: directory
mode: "0755"
- name: Create directory for City database extraction
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-City"
state: directory
mode: "0755"
- name: Create directory for Country database extraction
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-Country"
state: directory
mode: "0755"
# Download all databases
- name: Download GeoLite2 ASN database
ansible.builtin.get_url:
url: "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-ASN&license_key={{ maxmind_license_key }}&suffix=tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-ASN.tar.gz"
mode: "0644"
- name: Download GeoLite2 City database
ansible.builtin.get_url:
url: "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key={{ maxmind_license_key }}&suffix=tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-City.tar.gz"
mode: "0644"
- name: Download GeoLite2 Country database
ansible.builtin.get_url:
url: "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-Country&license_key={{ maxmind_license_key }}&suffix=tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-Country.tar.gz"
mode: "0644"
# Extract all databases
- name: Extract GeoLite2 ASN database
ansible.builtin.unarchive:
src: "{{ echoip_data_dir }}/GeoLite2-ASN.tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-ASN"
remote_src: true
register: asn_extracted
- name: Extract GeoLite2 City database
ansible.builtin.unarchive:
src: "{{ echoip_data_dir }}/GeoLite2-City.tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-City"
remote_src: true
register: city_extracted
- name: Extract GeoLite2 Country database
ansible.builtin.unarchive:
src: "{{ echoip_data_dir }}/GeoLite2-Country.tar.gz"
dest: "{{ echoip_data_dir }}/GeoLite2-Country"
remote_src: true
register: country_extracted
# Move all databases to the correct locations
- name: Move ASN database to correct location
ansible.builtin.command:
cmd: "find {{ echoip_data_dir }}/GeoLite2-ASN -name GeoLite2-ASN.mmdb -exec mv {} {{ echoip_data_dir }}/GeoLite2-ASN.mmdb \\;"
when: asn_extracted.changed
- name: Move City database to correct location
ansible.builtin.command:
cmd: "find {{ echoip_data_dir }}/GeoLite2-City -name GeoLite2-City.mmdb -exec mv {} {{ echoip_data_dir }}/GeoLite2-City.mmdb \\;"
when: city_extracted.changed
- name: Move Country database to correct location
ansible.builtin.command:
cmd: "find {{ echoip_data_dir }}/GeoLite2-Country -name GeoLite2-Country.mmdb -exec mv {} {{ echoip_data_dir }}/GeoLite2-Country.mmdb \\;"
when: country_extracted.changed
# Clean up unnecessary files
- name: Remove downloaded tar.gz files
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-ASN.tar.gz"
state: absent
- name: Remove extracted ASN folder
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-ASN"
state: absent
- name: Remove downloaded City tar.gz file
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-City.tar.gz"
state: absent
- name: Remove extracted City folder
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-City"
state: absent
- name: Remove downloaded Country tar.gz file
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-Country.tar.gz"
state: absent
- name: Remove extracted Country folder
ansible.builtin.file:
path: "{{ echoip_data_dir }}/GeoLite2-Country"
state: absent
# Marker file was already touched before the downloads, so nothing further to update here
when: update_needed
# Deploy and restart the EchoIP service
- name: Deploy EchoIP docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ echoip_service_dir }}/docker-compose.yml"
mode: "0644"
register: echoip_compose
- name: Stop EchoIP service
ansible.builtin.command: docker compose -f "{{ echoip_service_dir }}/docker-compose.yml" down --remove-orphans
when: echoip_compose.changed
- name: Start EchoIP service
ansible.builtin.command: docker compose -f "{{ echoip_service_dir }}/docker-compose.yml" up -d
when: echoip_compose.changed
tags:
- services
- echoip


@@ -19,6 +19,10 @@ services:
networks:
- factorio
- caddy_network
deploy:
resources:
limits:
memory: 2G
networks:
factorio:


@@ -3,8 +3,8 @@
block:
- name: Set Factorio directories
ansible.builtin.set_fact:
factorio_service_dir: "{{ ansible_env.HOME }}/services/factorio"
factorio_data_dir: "/mnt/object_storage/services/factorio"
factorio_service_dir: "{{ ansible_env.HOME }}/.services/factorio"
factorio_data_dir: "/mnt/services/factorio"
- name: Create Factorio directory
ansible.builtin.file:


@@ -15,6 +15,10 @@ services:
networks:
- gitea
- caddy_network
deploy:
resources:
limits:
memory: 1G
postgres:
image: postgres:15-alpine
@@ -29,6 +33,10 @@ services:
- {{gitea_data_dir}}/postgres:/var/lib/postgresql/data
networks:
- gitea
deploy:
resources:
limits:
memory: 1G
act_runner:
image: gitea/act_runner:latest
@@ -46,6 +54,10 @@ services:
restart: always
networks:
- gitea
deploy:
resources:
limits:
memory: 2G
networks:
gitea:


@@ -3,8 +3,8 @@
block:
- name: Set Gitea directories
ansible.builtin.set_fact:
gitea_data_dir: "/mnt/object_storage/services/gitea"
gitea_service_dir: "{{ ansible_env.HOME }}/services/gitea"
gitea_data_dir: "/mnt/services/gitea"
gitea_service_dir: "{{ ansible_env.HOME }}/.services/gitea"
- name: Create Gitea directories
ansible.builtin.file:


@@ -8,3 +8,7 @@ services:
volumes:
- {{ golink_data_dir }}:/home/nonroot
restart: "unless-stopped"
deploy:
resources:
limits:
memory: 256M


@@ -3,8 +3,8 @@
block:
- name: Set GoLink directories
ansible.builtin.set_fact:
golink_data_dir: "/mnt/object_storage/services/golink"
golink_service_dir: "{{ ansible_env.HOME }}/services/golink"
golink_data_dir: "/mnt/services/golink"
golink_service_dir: "{{ ansible_env.HOME }}/.services/golink"
- name: Create GoLink directories
ansible.builtin.file:


@@ -15,3 +15,7 @@ services:
network_mode: host
devices:
- /dev/ttyUSB0:/dev/ttyUSB0
deploy:
resources:
limits:
memory: 2G


@@ -4,7 +4,7 @@
- name: Set Home Assistant directories
ansible.builtin.set_fact:
homeassistant_data_dir: "/mnt/services/homeassistant"
homeassistant_service_dir: "{{ ansible_env.HOME }}/services/homeassistant"
homeassistant_service_dir: "{{ ansible_env.HOME }}/.services/homeassistant"
- name: Create Home Assistant directories
ansible.builtin.file:


@@ -15,24 +15,49 @@ services:
- TZ=Europe/Amsterdam
- PUID=1000
- PGID=100
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
restart: unless-stopped
healthcheck:
disable: false
networks:
- immich
- caddy_network
runtime: nvidia
deploy:
resources:
limits:
memory: 4G
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
volumes:
- model-cache:/cache
env_file:
- .env
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
restart: unless-stopped
healthcheck:
disable: false
networks:
- immich
runtime: nvidia
deploy:
resources:
limits:
memory: 8G
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
redis:
container_name: immich_redis
@@ -42,6 +67,10 @@ services:
restart: unless-stopped
networks:
- immich
deploy:
resources:
limits:
memory: 1G
database:
container_name: immich_postgres
@@ -54,7 +83,6 @@ services:
POSTGRES_DB: ${DB_DATABASE_NAME}
POSTGRES_INITDB_ARGS: '--data-checksums'
volumes:
# Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
- {{ immich_database_dir }}:/var/lib/postgresql/data
healthcheck:
test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1; Chksum="$$(psql --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
@@ -80,6 +108,10 @@ services:
restart: unless-stopped
networks:
- immich
deploy:
resources:
limits:
memory: 2G
volumes:
model-cache:


@@ -3,9 +3,9 @@
block:
- name: Set Immich directories
ansible.builtin.set_fact:
immich_data_dir: "/mnt/object_storage/photos/immich-library"
immich_database_dir: "/mnt/object_storage/services/immich/postgres"
immich_service_dir: "{{ ansible_env.HOME }}/services/immich"
immich_data_dir: "/mnt/data/photos/immich-library"
immich_database_dir: "/mnt/services/immich/postgres"
immich_service_dir: "{{ ansible_env.HOME }}/.services/immich"
- name: Create Immich directories
ansible.builtin.file:


@@ -0,0 +1,15 @@
services:
necesse:
image: brammys/necesse-server
container_name: necesse
restart: unless-stopped
ports:
- "14159:14159/udp"
environment:
- MOTD=StarDebris' Server!
- PASSWORD=2142
- SLOTS=4
- PAUSE=1
volumes:
- {{ necesse_data_dir }}/saves:/necesse/saves
- {{ necesse_data_dir }}/logs:/necesse/logs


@@ -0,0 +1,41 @@
---
- name: Deploy Necesse service
block:
- name: Set Necesse directories
ansible.builtin.set_fact:
necesse_service_dir: "{{ ansible_env.HOME }}/.services/necesse"
necesse_data_dir: "/mnt/services/necesse"
- name: Create Necesse service directory
ansible.builtin.file:
path: "{{ necesse_service_dir }}"
state: directory
mode: "0755"
- name: Create Necesse data directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ necesse_data_dir }}"
- "{{ necesse_data_dir }}/saves"
- "{{ necesse_data_dir }}/logs"
- name: Deploy Necesse docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ necesse_service_dir }}/docker-compose.yml"
mode: "0644"
register: necesse_compose
- name: Stop Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" down --remove-orphans
when: necesse_compose.changed
- name: Start Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d
when: necesse_compose.changed
tags:
- services
- necesse


@@ -0,0 +1,73 @@
services:
nextcloud:
image: nextcloud
container_name: nextcloud
restart: unless-stopped
networks:
- nextcloud
- caddy_network
depends_on:
- nextclouddb
- redis
ports:
- 8081:80
volumes:
- {{ nextcloud_data_dir }}/nextcloud/html:/var/www/html
- {{ nextcloud_data_dir }}/nextcloud/custom_apps:/var/www/html/custom_apps
- {{ nextcloud_data_dir }}/nextcloud/config:/var/www/html/config
- {{ nextcloud_data_dir }}/nextcloud/data:/var/www/html/data
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD={{ lookup('community.general.onepassword', 'Nextcloud', vault='Dotfiles', field='MYSQL_NEXTCLOUD_PASSWORD') }}
- MYSQL_HOST=nextclouddb
- REDIS_HOST=redis
deploy:
resources:
limits:
memory: 2G
nextclouddb:
image: mariadb:11.4.7
container_name: nextcloud-db
restart: unless-stopped
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
networks:
- nextcloud
volumes:
- {{ nextcloud_data_dir }}/database:/var/lib/mysql
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
- MYSQL_RANDOM_ROOT_PASSWORD=true
- MYSQL_PASSWORD={{ lookup('community.general.onepassword', 'Nextcloud', vault='Dotfiles', field='MYSQL_NEXTCLOUD_PASSWORD') }}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
deploy:
resources:
limits:
memory: 1G
redis:
image: redis:alpine
container_name: redis
volumes:
- {{ nextcloud_data_dir }}/redis:/data
networks:
- nextcloud
deploy:
resources:
limits:
memory: 512M
networks:
nextcloud:
name: nextcloud
driver: bridge
caddy_network:
name: caddy_default
external: true


@@ -0,0 +1,31 @@
---
- name: Deploy Nextcloud service
block:
- name: Set Nextcloud directories
ansible.builtin.set_fact:
nextcloud_service_dir: "{{ ansible_env.HOME }}/.services/nextcloud"
nextcloud_data_dir: "/mnt/services/nextcloud"
- name: Create Nextcloud directory
ansible.builtin.file:
path: "{{ nextcloud_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Nextcloud docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ nextcloud_service_dir }}/docker-compose.yml"
mode: "0644"
register: nextcloud_compose
- name: Stop Nextcloud service
ansible.builtin.command: docker compose -f "{{ nextcloud_service_dir }}/docker-compose.yml" down --remove-orphans
when: nextcloud_compose.changed
- name: Start Nextcloud service
ansible.builtin.command: docker compose -f "{{ nextcloud_service_dir }}/docker-compose.yml" up -d
when: nextcloud_compose.changed
tags:
- services
- nextcloud


@@ -0,0 +1,29 @@
services:
plex:
image: lscr.io/linuxserver/plex:latest
network_mode: host
restart: unless-stopped
runtime: nvidia
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
- VERSION=docker
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
volumes:
- {{ plex_data_dir }}/config:/config
- {{ plex_data_dir }}/transcode:/transcode
- /mnt/data/movies:/movies
- /mnt/data/tvshows:/tvshows
- /mnt/object_storage/tvshows:/tvshows_slow
- /mnt/data/music:/music
deploy:
resources:
limits:
memory: 4G
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
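This template declares GPU access twice: the legacy runtime: nvidia and the newer deploy.resources.reservations.devices form. With a current Docker Engine and the NVIDIA Container Toolkit installed, the reservations block alone is usually sufficient; a minimal sketch of that form:

services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]   # expose all GPUs, e.g. for hardware transcoding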


@@ -0,0 +1,36 @@
---
- name: Deploy Plex service
block:
- name: Set Plex directories
ansible.builtin.set_fact:
plex_data_dir: "/mnt/services/plex"
plex_service_dir: "{{ ansible_env.HOME }}/.services/plex"
- name: Create Plex directories
ansible.builtin.file:
path: "{{ plex_dir }}"
state: directory
mode: "0755"
loop:
- "{{ plex_data_dir }}"
- "{{ plex_service_dir }}"
loop_control:
loop_var: plex_dir
- name: Deploy Plex docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ plex_service_dir }}/docker-compose.yml"
mode: "0644"
register: plex_compose
- name: Stop Plex service
ansible.builtin.command: docker compose -f "{{ plex_service_dir }}/docker-compose.yml" down --remove-orphans
when: plex_compose.changed
- name: Start Plex service
ansible.builtin.command: docker compose -f "{{ plex_service_dir }}/docker-compose.yml" up -d
when: plex_compose.changed
tags:
- services
- plex


@@ -0,0 +1,300 @@
;<?php http_response_code(403); /*
; config file for PrivateBin
;
; An explanation of each setting can be found online at https://github.com/PrivateBin/PrivateBin/wiki/Configuration.
[main]
; (optional) set a project name to be displayed on the website
; name = "PrivateBin"
; The full URL, with the domain name and directories that point to the
; PrivateBin files, including an ending slash (/). This URL is essential to
; allow Opengraph images to be displayed on social networks.
basepath = "https://bin.mvl.sh/"
; enable or disable the discussion feature, defaults to true
discussion = false
; preselect the discussion feature, defaults to false
opendiscussion = false
; enable or disable the display of dates & times in the comments, defaults to true
; Note that internally the creation time will still get tracked in order to sort
; the comments by creation time, but you can choose not to display them.
; discussiondatedisplay = false
; enable or disable the password feature, defaults to true
password = true
; enable or disable the file upload feature, defaults to false
fileupload = false
; preselect the burn-after-reading feature, defaults to false
burnafterreadingselected = false
; which display mode to preselect by default, defaults to "plaintext"
; make sure the value exists in [formatter_options]
defaultformatter = "plaintext"
; (optional) set a syntax highlighting theme, as found in css/prettify/
; syntaxhighlightingtheme = "sons-of-obsidian"
; size limit per paste or comment in bytes, defaults to 10 Mebibytes
sizelimit = 10485760
; by default PrivateBin use "bootstrap" template (tpl/bootstrap.php).
; Optionally you can enable the template selection menu, which uses
; a session cookie to store the choice until the browser is closed.
templateselection = false
; List of available for selection templates when "templateselection" option is enabled
availabletemplates[] = "bootstrap5"
availabletemplates[] = "bootstrap"
availabletemplates[] = "bootstrap-page"
availabletemplates[] = "bootstrap-dark"
availabletemplates[] = "bootstrap-dark-page"
availabletemplates[] = "bootstrap-compact"
availabletemplates[] = "bootstrap-compact-page"
; set the template your installs defaults to, defaults to "bootstrap" (tpl/bootstrap.php), also
; bootstrap variants: "bootstrap-dark", "bootstrap-compact", "bootstrap-page",
; which can be combined with "-dark" and "-compact" for "bootstrap-dark-page",
; "bootstrap-compact-page" and finally "bootstrap5" (tpl/bootstrap5.php) - previews at:
; https://privatebin.info/screenshots.html
; template = "bootstrap"
; (optional) info text to display
; use single, instead of double quotes for HTML attributes
;info = "More information on the <a href='https://privatebin.info/'>project page</a>."
; (optional) notice to display
; notice = "Note: This is a test service: Data may be deleted anytime. Kittens will die if you abuse this service."
; by default PrivateBin will guess the visitors language based on the browsers
; settings. Optionally you can enable the language selection menu, which uses
; a session cookie to store the choice until the browser is closed.
languageselection = false
; set the language your installs defaults to, defaults to English
; if this is set and language selection is disabled, this will be the only language
; languagedefault = "en"
; (optional) URL shortener address to offer after a new paste is created.
; It is suggested to only use this with self-hosted shorteners as this will leak
; the pastes encryption key.
; urlshortener = "https://shortener.example.com/api?link="
; (optional) Let users create a QR code for sharing the paste URL with one click.
; It works both when a new paste is created and when you view a paste.
qrcode = true
; (optional) Let users send an email sharing the paste URL with one click.
; It works both when a new paste is created and when you view a paste.
; email = true
; (optional) IP based icons are a weak mechanism to detect if a comment was from
; a different user when the same username was used in a comment. It might get
; used to get the IP of a comment poster if the server salt is leaked and a
; SHA512 HMAC rainbow table is generated for all (relevant) IPs.
; Can be set to one of these values:
; "none" / "identicon" / "jdenticon" (default) / "vizhash".
; icon = "none"
; Content Security Policy headers allow a website to restrict what sources are
; allowed to be accessed in its context. You need to change this if you added
; custom scripts from third-party domains to your templates, e.g. tracking
; scripts or run your site behind certain DDoS-protection services.
; Check the documentation at https://content-security-policy.com/
; Notes:
; - If you use the bootstrap5 theme, you must change default-src to 'self' to
; enable display of the svg icons
; - By default this disallows loading images from third-party servers, e.g. when
; they are embedded in pastes. If you wish to allow that, you can adjust the
; policy here. See https://github.com/PrivateBin/PrivateBin/wiki/FAQ#why-does-not-it-load-embedded-images
; for details.
; - The 'wasm-unsafe-eval' is used to enable webassembly support (used for zlib
; compression). You can remove it if compression doesn't need to be supported.
; - The 'unsafe-inline' style-src is used by Chrome when displaying PDF previews
; and can be omitted if attachment upload is disabled (which is the default).
; See https://issues.chromium.org/issues/343754409
; - To allow displaying PDF previews in Firefox or Chrome, sandboxing must also
; get turned off. The following CSP allows PDF previews:
; cspheader = "default-src 'none'; base-uri 'self'; form-action 'none'; manifest-src 'self'; connect-src * blob:; script-src 'self' 'wasm-unsafe-eval'; style-src 'self' 'unsafe-inline'; font-src 'self'; frame-ancestors 'none'; frame-src blob:; img-src 'self' data: blob:; media-src blob:; object-src blob:"
;
; The recommended and default used CSP is:
; cspheader = "default-src 'none'; base-uri 'self'; form-action 'none'; manifest-src 'self'; connect-src * blob:; script-src 'self' 'wasm-unsafe-eval'; style-src 'self'; font-src 'self'; frame-ancestors 'none'; frame-src blob:; img-src 'self' data: blob:; media-src blob:; object-src blob:; sandbox allow-same-origin allow-scripts allow-forms allow-modals allow-downloads"
; stay compatible with PrivateBin Alpha 0.19, less secure
; if enabled will use base64.js version 1.7 instead of 2.1.9 and sha1 instead of
; sha256 in HMAC for the deletion token
; zerobincompatibility = false
; Enable or disable the warning message when the site is served over an insecure
; connection (insecure HTTP instead of HTTPS), defaults to true.
; Secure transport methods like Tor and I2P domains are automatically whitelisted.
; It is **strongly discouraged** to disable this.
; See https://github.com/PrivateBin/PrivateBin/wiki/FAQ#why-does-it-show-me-an-error-about-an-insecure-connection for more information.
; httpwarning = true
; Pick compression algorithm or disable it. Only applies to pastes/comments
; created after changing the setting.
; Can be set to one of these values: "none" / "zlib" (default).
; compression = "zlib"
[expire]
; expire value that is selected per default
; make sure the value exists in [expire_options]
default = "1week"
[expire_options]
; Set each one of these to the number of seconds in the expiration period,
; or 0 if it should never expire
5min = 300
10min = 600
1hour = 3600
1day = 86400
1week = 604800
; Well this is not *exactly* one month, it's 30 days:
1month = 2592000
1year = 31536000
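; for reference: 30 * 86400 = 2592000 and 365 * 86400 = 31536000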
never = 0
[formatter_options]
; Set available formatters, their order and their labels
plaintext = "Plain Text"
syntaxhighlighting = "Source Code"
markdown = "Markdown"
[traffic]
; time limit between calls from the same IP address in seconds
; Set this to 0 to disable rate limiting.
limit = 10
; (optional) Set IPs addresses (v4 or v6) or subnets (CIDR) which are exempted
; from the rate-limit. Invalid IPs will be ignored. If multiple values are to
; be exempted, the list needs to be comma separated. Leave unset to disable
; exemptions.
; exempted = "1.2.3.4,10.10.10/24"
; (optional) If you want only some source IP addresses (v4 or v6) or subnets
; (CIDR) to be allowed to create pastes, set these here. Invalid IPs will be
; ignored. If multiple values are to be exempted, the list needs to be comma
; separated. Leave unset to allow anyone to create pastes.
; creators = "1.2.3.4,10.10.10/24"
; (optional) if your website runs behind a reverse proxy or load balancer,
; set the HTTP header containing the visitors IP address, i.e. X_FORWARDED_FOR
; header = "X_FORWARDED_FOR"
[purge]
; minimum time limit between two purgings of expired pastes, it is only
; triggered when pastes are created
; Set this to 0 to run a purge every time a paste is created.
limit = 300
; maximum amount of expired pastes to delete in one purge
; Set this to 0 to disable purging. Set it higher, if you are running a large
; site
batchsize = 10
[model]
; name of data model class to load and directory for storage
; the default model "Filesystem" stores everything in the filesystem
class = Filesystem
[model_options]
dir = PATH "data"
;[model]
; example of a Google Cloud Storage configuration
;class = GoogleCloudStorage
;[model_options]
;bucket = "my-private-bin"
;prefix = "pastes"
;uniformacl = false
;[model]
; example of DB configuration for MySQL
;class = Database
;[model_options]
;dsn = "mysql:host=localhost;dbname=privatebin;charset=UTF8"
;tbl = "privatebin_" ; table prefix
;usr = "privatebin"
;pwd = "Z3r0P4ss"
;opt[12] = true ; PDO::ATTR_PERSISTENT
;[model]
; example of DB configuration for SQLite
;class = Database
;[model_options]
;dsn = "sqlite:" PATH "data/db.sq3"
;usr = null
;pwd = null
;opt[12] = true ; PDO::ATTR_PERSISTENT
;[model]
; example of DB configuration for PostgreSQL
;class = Database
;[model_options]
;dsn = "pgsql:host=localhost;dbname=privatebin"
;tbl = "privatebin_" ; table prefix
;usr = "privatebin"
;pwd = "Z3r0P4ss"
;opt[12] = true ; PDO::ATTR_PERSISTENT
;[model]
; example of S3 configuration for Rados gateway / CEPH
;class = S3Storage
;[model_options]
;region = ""
;version = "2006-03-01"
;endpoint = "https://s3.my-ceph.invalid"
;use_path_style_endpoint = true
;bucket = "my-bucket"
;accesskey = "my-rados-user"
;secretkey = "my-rados-pass"
;[model]
; example of S3 configuration for AWS
;class = S3Storage
;[model_options]
;region = "eu-central-1"
;version = "latest"
;bucket = "my-bucket"
;accesskey = "access key id"
;secretkey = "secret access key"
;[model]
; example of S3 configuration for AWS using its SDK default credential provider chain
; if relying on environment variables, the AWS SDK will look for the following:
; - AWS_ACCESS_KEY_ID
; - AWS_SECRET_ACCESS_KEY
; - AWS_SESSION_TOKEN (if needed)
; for more details, see https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials.html#default-credential-chain
;class = S3Storage
;[model_options]
;region = "eu-central-1"
;version = "latest"
;bucket = "my-bucket"
;[yourls]
; When using YOURLS as a "urlshortener" config item:
; - By default, "urlshortener" will point to the YOURLS API URL, with or without
; credentials, and will be visible in public on the PrivateBin web page.
; Only use this if you allow short URL creation without credentials.
; - Alternatively, using the parameters in this section ("signature" and
; "apiurl"), "urlshortener" needs to point to the base URL of your PrivateBin
; instance with "?shortenviayourls&link=" appended. For example:
; urlshortener = "${basepath}?shortenviayourls&link="
; This URL will in turn call YOURLS on the server side, using the URL from
; "apiurl" and the "access signature" from the "signature" parameters below.
; (optional) the "signature" (access key) issued by YOURLS for the using account
; signature = ""
; (optional) the URL of the YOURLS API, called to shorten a PrivateBin URL
; apiurl = "https://yourls.example.com/yourls-api.php"
;[sri]
; Subresource integrity (SRI) hashes used in template files. Uncomment and set
; these for all js files used. See:
; https://github.com/PrivateBin/PrivateBin/wiki/FAQ#user-content-how-to-make-privatebin-work-when-i-have-changed-some-javascript-files
;js/privatebin.js = "sha512-[…]"


@@ -0,0 +1,33 @@
services:
privatebin:
image: privatebin/nginx-fpm-alpine:latest
container_name: privatebin
restart: always
read_only: true
user: "1000:1000"
ports:
- "8585:8080"
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/Amsterdam
volumes:
- {{ privatebin_data_dir }}:/srv/data
- {{ privatebin_service_dir }}/conf.php:/srv/cfg/conf.php:ro
healthcheck:
test: ["CMD-SHELL", "nc -z 127.0.0.1 8080 || exit 1"]
interval: 10s
timeout: 5s
retries: 3
start_period: 90s
networks:
- caddy_network
deploy:
resources:
limits:
memory: 256M
networks:
caddy_network:
external: true
name: caddy_default


@@ -0,0 +1,42 @@
---
- name: Deploy PrivateBin service
block:
- name: Set PrivateBin directories
ansible.builtin.set_fact:
privatebin_data_dir: "/mnt/services/privatebin"
privatebin_service_dir: "{{ ansible_env.HOME }}/.services/privatebin"
- name: Create PrivateBin directories
ansible.builtin.file:
path: "{{ privatebin_dir }}"
state: directory
mode: "0755"
loop:
- "{{ privatebin_data_dir }}"
- "{{ privatebin_service_dir }}"
loop_control:
loop_var: privatebin_dir
- name: Deploy PrivateBin docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ privatebin_service_dir }}/docker-compose.yml"
mode: "0644"
register: privatebin_compose
- name: Deploy PrivateBin conf.php
ansible.builtin.template:
src: conf.php.j2
dest: "{{ privatebin_service_dir }}/conf.php"
mode: "0644"
- name: Stop PrivateBin service
ansible.builtin.command: docker compose -f "{{ privatebin_service_dir }}/docker-compose.yml" down --remove-orphans
when: privatebin_compose.changed
- name: Start PrivateBin service
ansible.builtin.command: docker compose -f "{{ privatebin_service_dir }}/docker-compose.yml" up -d
when: privatebin_compose.changed
tags:
- services
- privatebin


@@ -0,0 +1,17 @@
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
ports:
- 6333:6333
- 6334:6334
expose:
- 6333
- 6334
- 6335
volumes:
- /mnt/services/qdrant:/qdrant/storage
deploy:
resources:
limits:
memory: 2G


@@ -0,0 +1,32 @@
- name: Deploy Qdrant service
tags:
- services
- qdrant
block:
- name: Set Qdrant directories
ansible.builtin.set_fact:
qdrant_service_dir: "{{ ansible_env.HOME }}/.services/qdrant"
qdrant_data_dir: "/mnt/services/qdrant"
- name: Create Qdrant directory
ansible.builtin.file:
path: "{{ qdrant_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Qdrant docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ qdrant_service_dir }}/docker-compose.yml"
mode: "0644"
notify: restart_qdrant
- name: Stop Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
listen: restart_qdrant
- name: Start Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
changed_when: false
listen: restart_qdrant
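One caveat on the Qdrant tasks above: notify: and listen: only take effect on handlers, so the stop/start pair will never fire if it sits in the same tasks file as the template task. A minimal sketch of the handler layout this implies (the file placement is an assumption, not shown in this diff):

# roles/qdrant/handlers/main.yml (assumed location)
- name: Stop Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
  listen: restart_qdrant

- name: Start Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
  listen: restart_qdrant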


@@ -5,7 +5,7 @@ services:
ports:
- "6379:6379"
volumes:
- /mnt/services/redis-data:/data
- /mnt/services/redis:/data
command: ["redis-server", "--appendonly", "yes", "--requirepass", "{{ REDIS_PASSWORD }}"]
environment:
- TZ=Europe/Amsterdam
@@ -17,6 +17,10 @@ services:
start_period: 5s
networks:
- juicefs-network
deploy:
resources:
limits:
memory: 256M
networks:
juicefs-network:


@@ -3,7 +3,7 @@
block:
- name: Set Redis facts
ansible.builtin.set_fact:
redis_service_dir: "{{ ansible_env.HOME }}/services/juicefs-redis"
redis_service_dir: "{{ ansible_env.HOME }}/.services/juicefs-redis"
redis_password: "{{ lookup('community.general.onepassword', 'JuiceFS (Redis)', vault='Dotfiles', field='password') }}"
- name: Create Redis service directory
@@ -34,6 +34,7 @@
register: juicefs_stop
changed_when: juicefs_stop.changed
when: redis_compose.changed and juicefs_service_stat.stat.exists
become: true
- name: List containers that are running
ansible.builtin.command: docker ps -q
@@ -68,6 +69,7 @@
register: juicefs_start
changed_when: juicefs_start.changed
when: juicefs_service_stat.stat.exists
become: true
- name: Restart containers that were stopped
ansible.builtin.command: docker start {{ item }}
@@ -76,5 +78,5 @@
changed_when: docker_restart.rc == 0
when: redis_compose.changed
tags:
- services
- redis
- services
- redis
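The become: true additions above attach to the JuiceFS stop/start steps that bracket the Redis restart, since JuiceFS keeps its metadata in this Redis instance. A condensed sketch of that ordering, assuming JuiceFS runs as a systemd unit named juicefs (the unit name is an assumption, not shown in this diff):

- name: Stop JuiceFS before recreating its Redis metadata store
  ansible.builtin.systemd:
    name: juicefs
    state: stopped
  become: true
  when: redis_compose.changed and juicefs_service_stat.stat.exists

# ...recreate the Redis container here...

- name: Start JuiceFS once Redis is back up
  ansible.builtin.systemd:
    name: juicefs
    state: started
  become: true
  when: juicefs_service_stat.stat.exists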


@@ -0,0 +1,53 @@
# Production Environment Variables
# Copy this to .env and fill in your values
# Database configuration (PostgreSQL)
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USER=sathub
DB_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DB_PASSWORD') }}
DB_NAME=sathub
# Required: JWT secret for token signing
JWT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='JWT_SECRET') }}
# Required: Two-factor authentication encryption key
TWO_FA_ENCRYPTION_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='TWO_FA_ENCRYPTION_KEY') }}
# Email configuration (required for password resets)
SMTP_HOST={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_HOST') }}
SMTP_PORT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PORT') }}
SMTP_USERNAME={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_USERNAME') }}
SMTP_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PASSWORD') }}
SMTP_FROM_EMAIL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_FROM_EMAIL') }}
# MinIO Object Storage configuration
MINIO_ROOT_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_ROOT_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# Basically the same as the above
MINIO_ACCESS_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_SECRET_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# GitHub credentials for Watchtower (auto-updates)
GITHUB_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
GITHUB_PAT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
REPO_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
REPO_PASS={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
# Optional: Override defaults if needed
# GIN_MODE=release (set automatically)
FRONTEND_URL=https://sathub.de
# CORS configuration (optional - additional allowed origins)
CORS_ALLOWED_ORIGINS=https://sathub.de,https://sathub.nl,https://api.sathub.de
# Frontend configuration (optional - defaults are provided)
VITE_API_BASE_URL=https://api.sathub.de
VITE_ALLOWED_HOSTS=sathub.de,sathub.nl
# Discord-related messaging
DISCORD_CLIENT_ID={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_ID') }}
DISCORD_CLIENT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_SECRET') }}
DISCORD_REDIRECT_URI={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_REDIRECT_URL') }}
DISCORD_WEBHOOK_URL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_WEBHOOK_URL') }}


@@ -0,0 +1,182 @@
services:
# Migration service - runs once on stack startup
migrate:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-migrate
restart: "no"
command: ["./main", "auto-migrate"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
networks:
- sathub
depends_on:
- postgres
backend:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-backend
restart: unless-stopped
command: ["./main", "api"]
environment:
- GIN_MODE=release
- FRONTEND_URL=${FRONTEND_URL:-https://sathub.de}
- CORS_ALLOWED_ORIGINS=${CORS_ALLOWED_ORIGINS:-https://sathub.de}
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# Security settings
- JWT_SECRET=${JWT_SECRET}
- TWO_FA_ENCRYPTION_KEY=${TWO_FA_ENCRYPTION_KEY}
# SMTP settings
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
- caddy_network
depends_on:
migrate:
condition: service_completed_successfully
worker:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-worker
restart: unless-stopped
command: ["./main", "worker"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# SMTP settings (needed for notifications)
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
depends_on:
migrate:
condition: service_completed_successfully
postgres:
image: postgres:15-alpine
container_name: sathub-postgres
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER:-sathub}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME:-sathub}
volumes:
- {{ sathub_data_dir }}/postgres_data:/var/lib/postgresql/data
networks:
- sathub
frontend:
image: ghcr.io/vleeuwenmenno/sathub-frontend/frontend:latest
container_name: sathub-frontend
restart: unless-stopped
environment:
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-https://api.sathub.de}
- VITE_ALLOWED_HOSTS=${VITE_ALLOWED_HOSTS:-sathub.de,sathub.nl}
networks:
- sathub
- caddy_network
minio:
image: minio/minio
container_name: sathub-minio
restart: unless-stopped
environment:
- MINIO_ROOT_USER=${MINIO_ROOT_USER}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
volumes:
- {{ sathub_data_dir }}/minio_data:/data
command: server /data --console-address :9001
networks:
- sathub
depends_on:
- postgres
watchtower:
image: containrrr/watchtower:latest
container_name: sathub-watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_STOPPED=false
- REPO_USER=${REPO_USER}
- REPO_PASS=${REPO_PASS}
command: --interval 30 --cleanup --include-stopped=false sathub-backend sathub-worker sathub-frontend
networks:
- sathub
networks:
sathub:
driver: bridge
# We assume you're running a Caddy instance in a separate compose file with this network
# If not, you can remove this network and the related depends_on in the services above
# But the stack is designed to run behind a Caddy reverse proxy for SSL termination and routing
caddy_network:
external: true
name: caddy_default


@@ -0,0 +1,50 @@
---
- name: Deploy SatHub service
block:
- name: Set SatHub directories
ansible.builtin.set_fact:
sathub_service_dir: "{{ ansible_env.HOME }}/.services/sathub"
sathub_data_dir: "/mnt/services/sathub"
- name: Set SatHub frontend configuration
ansible.builtin.set_fact:
frontend_api_base_url: "https://api.sathub.de"
frontend_allowed_hosts: "sathub.de,sathub.nl"
cors_allowed_origins: "https://sathub.nl,https://api.sathub.de,https://obj.sathub.de"
- name: Create SatHub directory
ansible.builtin.file:
path: "{{ sathub_service_dir }}"
state: directory
mode: "0755"
- name: Create SatHub data directory
ansible.builtin.file:
path: "{{ sathub_data_dir }}"
state: directory
mode: "0755"
- name: Deploy SatHub .env
ansible.builtin.template:
src: .env.j2
dest: "{{ sathub_service_dir }}/.env"
mode: "0644"
register: sathub_env
- name: Deploy SatHub docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ sathub_service_dir }}/docker-compose.yml"
mode: "0644"
register: sathub_compose
- name: Stop SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" down --remove-orphans
when: sathub_compose.changed or sathub_env.changed
- name: Start SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" up -d
when: sathub_compose.changed or sathub_env.changed
tags:
- services
- sathub


@@ -7,7 +7,7 @@
- name: Check service directories existence for disabled services
ansible.builtin.stat:
path: "{{ ansible_env.HOME }}/services/{{ item.name }}"
path: "{{ ansible_env.HOME }}/.services/{{ item.name }}"
register: service_dir_results
loop: "{{ services_to_cleanup }}"
loop_control:
@@ -19,14 +19,14 @@
- name: Check if docker-compose file exists for services to cleanup
ansible.builtin.stat:
path: "{{ ansible_env.HOME }}/services/{{ item.name }}/docker-compose.yml"
path: "{{ ansible_env.HOME }}/.services/{{ item.name }}/docker-compose.yml"
register: compose_file_results
loop: "{{ services_with_dirs }}"
loop_control:
label: "{{ item.name }}"
- name: Stop disabled services with docker-compose files
ansible.builtin.command: docker compose -f "{{ ansible_env.HOME }}/services/{{ item.item.name }}/docker-compose.yml" down --remove-orphans
ansible.builtin.command: docker compose -f "{{ ansible_env.HOME }}/.services/{{ item.item.name }}/docker-compose.yml" down --remove-orphans
loop: "{{ compose_file_results.results | selectattr('stat.exists', 'equalto', true) }}"
loop_control:
label: "{{ item.item.name }}"
@@ -36,7 +36,7 @@
- name: Remove service directories for disabled services
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/services/{{ item.name }}"
path: "{{ ansible_env.HOME }}/.services/{{ item.name }}"
state: absent
loop: "{{ services_with_dirs }}"
loop_control:


@@ -0,0 +1,41 @@
services:
stash:
image: stashapp/stash:latest
container_name: stash
restart: unless-stopped
ports:
- "9999:9999"
environment:
- PUID=1000
- PGID=1000
- STASH_STASH=/data/
- STASH_GENERATED=/generated/
- STASH_METADATA=/metadata/
- STASH_CACHE=/cache/
- STASH_PORT=9999
volumes:
- /etc/localtime:/etc/localtime:ro
## Point this at your collection.
- {{ stash_data_dir }}:/data
## Keep configs, scrapers, and plugins here.
- {{ stash_config_dir }}/config:/root/.stash
## This is where your stash's metadata lives
- {{ stash_config_dir }}/metadata:/metadata
## Any other cache content.
- {{ stash_config_dir }}/cache:/cache
## Where to store binary blob data (scene covers, images)
- {{ stash_config_dir }}/blobs:/blobs
## Where to store generated content (screenshots, previews, transcodes, sprites)
- {{ stash_config_dir }}/generated:/generated
networks:
- caddy_network
deploy:
resources:
limits:
memory: 2G
networks:
caddy_network:
external: true
name: caddy_default


@@ -0,0 +1,25 @@
---
services:
tautulli:
image: lscr.io/linuxserver/tautulli:latest
container_name: tautulli
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
volumes:
- {{ tautulli_data_dir }}:/config
ports:
- 8181:8181
restart: unless-stopped
networks:
- caddy_network
deploy:
resources:
limits:
memory: 512M
networks:
caddy_network:
external: true
name: caddy_default


@@ -0,0 +1,36 @@
---
- name: Deploy Tautulli service
block:
- name: Set Tautulli directories
ansible.builtin.set_fact:
tautulli_data_dir: "/mnt/services/tautulli"
tautulli_service_dir: "{{ ansible_env.HOME }}/.services/tautulli"
- name: Create Tautulli directories
ansible.builtin.file:
path: "{{ tautulli_dir }}"
state: directory
mode: "0755"
loop:
- "{{ tautulli_data_dir }}"
- "{{ tautulli_service_dir }}"
loop_control:
loop_var: tautulli_dir
- name: Deploy Tautulli docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ tautulli_service_dir }}/docker-compose.yml"
mode: "0644"
register: tautulli_compose
- name: Stop Tautulli service
ansible.builtin.command: docker compose -f "{{ tautulli_service_dir }}/docker-compose.yml" down --remove-orphans
when: tautulli_compose.changed
- name: Start Tautulli service
ansible.builtin.command: docker compose -f "{{ tautulli_service_dir }}/docker-compose.yml" up -d
when: tautulli_compose.changed
tags:
- services
- tautulli


@@ -3,13 +3,16 @@ services:
image: linuxserver/unifi-network-application:latest
restart: unless-stopped
ports:
- "8080:8080" # Device communication
- "8443:8443" # Controller GUI / API
- "3478:3478/udp" # STUN
- "8080:8080" # Device communication
- "8443:8443" # Controller GUI / API
- "3478:3478/udp" # STUN
- "10001:10001/udp" # AP discovery
- "8880:8880" # HTTP portal redirect (guest hotspot)
- "8843:8843" # HTTPS portal redirect (guest hotspot)
- "6789:6789" # Mobile speed test (optional)
environment:
- PUID=1000
- PGID=1000
- PGID=100
- TZ=Europe/Amsterdam
- MONGO_USER=unifi
- MONGO_PASS=unifi
@@ -26,6 +29,10 @@ services:
- caddy_network
sysctls:
- net.ipv6.conf.all.disable_ipv6=1
deploy:
resources:
limits:
memory: 1G
unifi-db:
image: mongo:6.0
@@ -45,6 +52,10 @@ services:
- unifi-network
sysctls:
- net.ipv6.conf.all.disable_ipv6=1
deploy:
resources:
limits:
memory: 1G
networks:
unifi-network:


@@ -3,8 +3,8 @@
block:
- name: Set Unifi Network App directories
ansible.builtin.set_fact:
unifi_network_application_data_dir: "/mnt/object_storage/services/unifi_network_application"
unifi_network_application_service_dir: "{{ ansible_env.HOME }}/services/unifi_network_application"
unifi_network_application_data_dir: "/mnt/services/unifi_network_application"
unifi_network_application_service_dir: "{{ ansible_env.HOME }}/.services/unifi_network_application"
- name: Create Unifi Network App directories
ansible.builtin.file:


@@ -17,3 +17,7 @@ services:
sysctls:
- net.ipv4.conf.all.src_valid_mark=1
restart: unless-stopped
deploy:
resources:
limits:
memory: 512M


@@ -3,8 +3,8 @@
block:
- name: Set WireGuard directories
ansible.builtin.set_fact:
wireguard_service_dir: "{{ ansible_env.HOME }}/services/wireguard"
wireguard_data_dir: "/mnt/object_storage/services/wireguard"
wireguard_service_dir: "{{ ansible_env.HOME }}/.services/wireguard"
wireguard_data_dir: "/mnt/services/wireguard"
- name: Create WireGuard directory
ansible.builtin.file:


@@ -0,0 +1,51 @@
---
- name: Process 1Password custom allowed browsers
block:
- name: Check if 1Password is installed
ansible.builtin.command: 1password --version
register: onepassword_check
changed_when: false
failed_when: false
- name: Check if 1Password is running anywhere
ansible.builtin.command: pgrep 1password
register: onepassword_running
changed_when: false
failed_when: false
- name: Ensure 1Password custom allowed browsers directory exists
ansible.builtin.file:
path: /etc/1password
state: directory
mode: "0755"
become: true
- name: Add Browsers to 1Password custom allowed browsers
ansible.builtin.copy:
content: |
ZenBrowser
zen-browser
app.zen_browser.zen
zen
Firefox
firefox
opera
zen-x86_64
dest: /etc/1password/custom_allowed_browsers
owner: root
group: root
mode: "0755"
become: true
register: custom_browsers_file
- name: Kill any running 1Password instances if configuration changed
ansible.builtin.command: pkill 1password
when: custom_browsers_file.changed and onepassword_running.stdout != ""
changed_when: custom_browsers_file.changed and onepassword_running.stdout != ""
- name: If 1Password was killed, restart it...
ansible.builtin.command: screen -dmS 1password 1password
when: custom_browsers_file.changed and onepassword_running.stdout != ""
changed_when: custom_browsers_file.changed and onepassword_running.stdout != ""
tags:
- custom_allowed_browsers


@@ -31,31 +31,30 @@
- name: Define system desired Flatpaks
ansible.builtin.set_fact:
desired_system_flatpaks:
# GNOME Software
- "{{ 'org.gnome.Extensions' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
- "{{ 'org.gnome.Weather' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
- "{{ 'org.gnome.Sudoku' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
# Games
- io.github.openhv.OpenHV
- net.lutris.Lutris
- info.beyondallreason.bar
- org.godotengine.Godot
- dev.bragefuglseth.Keypunch
- org.prismlauncher.PrismLauncher
# Multimedia
- com.spotify.Client
- com.plexamp.Plexamp
- tv.plex.PlexDesktop
- com.spotify.Client
# Messaging
- com.rtosta.zapzap
- org.telegram.desktop
- org.signal.Signal
- com.rtosta.zapzap
- io.github.equicord.equibop
- com.discordapp.Discord
# 3D Printing
- com.bambulab.BambuStudio
- io.mango3d.LycheeSlicer
# Utilities
- com.fastmail.Fastmail
- com.ranfdev.DistroShelf
- io.missioncenter.MissionCenter
- io.gitlab.elescoute.spacelaunch
@@ -63,7 +62,6 @@
- com.usebottles.bottles
- com.github.tchx84.Flatseal
- com.github.wwmm.easyeffects
- org.onlyoffice.desktopeditors
- io.gitlab.adhami3310.Impression
- io.ente.auth
- io.github.fastrizwaan.WineZGUI
@@ -76,6 +74,8 @@
- io.github.flattool.Ignition
- io.github.bytezz.IPLookup
- org.gaphor.Gaphor
- io.dbeaver.DBeaverCommunity
- com.jetpackduba.Gitnuro
- name: Define system desired Flatpak remotes
ansible.builtin.set_fact:

Some files were not shown because too many files have changed in this diff.