Compare commits


81 Commits

fd6e7d7a86  Update flake.lock (2025-10-30 16:22:07 +01:00) [all checks passed]
b23536ecc7  chore: adds discord and gitnuro flatpaks (2025-10-30 16:22:03 +01:00) [checks cancelled]
14e9c8d51c  chore: remove old stuff (2025-10-30 16:21:17 +01:00) [some checks cancelled]
c1c98fa007  Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles (2025-10-28 08:36:44 +01:00) [all checks passed]
9c6e6fdf47  Add Vicinae installation and assets Ansible task (2025-10-28 08:36:26 +01:00)
    Include Vicinae setup in workstation playbook for non-WSL2 systems.
    Update flake.lock to newer nixpkgs revision.
a11376fe96  Add monitoring countries to allowed_countries_codes list (2025-10-26 00:24:17 +00:00) [all checks passed]
e14dd1d224  Add EU and trusted country lists for Caddy access control (2025-10-26 00:21:27 +00:00) [all checks passed]
    Define separate lists for EU and trusted countries in group vars. Update
    Caddyfile template to support EU, trusted, and combined allow lists.
    Switch SatHub domains to use combined country allow list.
5353981555  Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles (2025-10-26 00:09:31 +00:00) [all checks passed]
f9ce652dfc  flake lock (2025-10-26 00:09:15 +00:00)
    Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
fe9dbca2db  Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles (2025-10-26 02:08:31 +02:00) [all checks passed]
987166420a  Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles (2025-10-26 00:06:13 +00:00) [all checks passed]
8ba47c2ebf  Fix indentation in server.yml and add necesse service (2025-10-26 00:04:51 +00:00)
    Add become: true to JuiceFS stop/start tasks in redis.yml
8bfd8395f5  Add Discord environment variables and update data volumes paths (2025-10-26 00:04:41 +00:00)
f0b15f77a1  Update nixpkgs input to latest commit (2025-10-26 00:04:19 +00:00)
461d251356  Add Ansible role to deploy Necesse server with Docker (2025-10-26 00:04:14 +00:00)
e57e9ee67c  chore: update country allow list and add European allow option (2025-10-26 02:02:46 +02:00)
f67b16f593  update flake locvk (2025-10-26 02:02:28 +02:00)
5edd7c413e  Update bash.nix to improve WSL Windows alias handling (2025-10-26 02:02:21 +02:00)
cfc1188b5f  Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles (2025-10-23 13:43:38 +02:00) [all checks passed]
e2701dcdf4  Set executable permission for equibop.desktop and update bash.nix (2025-10-23 13:43:26 +02:00)
    Add BUN_INSTALL env var and include Bun bin in PATH
11af7f16e5  Set formatter to prettier and update format_on_save option (2025-10-23 13:38:16 +02:00)
310fb92ec9  Add WSL aliases for Windows SSH and Zed (2025-10-23 04:20:15 +02:00) [all checks passed]
fb1661386b  chore: add Bun install path and prepend to PATH (2025-10-22 17:57:12 +02:00) [all checks passed]
e1b07a6edf  Add WSL support and fix config formatting (2025-10-22 16:18:08 +02:00) [all checks passed]
f6a3f6d379  Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles (2025-10-21 10:06:20 +02:00)
77424506d6  Update Nextcloud config and flake.lock dependencies (2025-10-20 11:27:21 +02:00) [all checks failed]
1856b2fb9e  adds fastmail app as flatpak (2025-10-20 11:27:00 +02:00)
2173e37c0a  refactor: update configuration for mennos-server and adjust related tasks (2025-10-16 14:53:32 +02:00) [some checks failed]
ba2faf114d  chore: update sathub config (2025-10-08 15:04:46 +02:00) [all checks passed]
    Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
22b308803c  fixes (2025-10-08 13:10:15 +02:00) [all checks passed]
    Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2dfde555dd  sathub fixes (2025-10-08 13:10:15 +02:00)
    Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
436deb267e  Add smart alias configuration for rtlsdr (2025-10-08 13:01:37 +02:00)
e490405dc5  Update mennos-rtlsdr-pc home configuration to enable service (2025-10-08 12:54:34 +02:00) [all checks passed]
1485f6c430  Add home configuration for mennos-rtlsdr-pc (2025-10-08 12:38:12 +02:00) [all checks passed]
4c83707a03  Update Ansible inventory and playbook for new workstation; modify Git configuration for rebase settings (2025-10-08 12:37:59 +02:00) [checks cancelled]
f9f37f5819  Update flatpaks.yml (2025-09-30 12:02:26 +02:00) [all checks passed]
44c4521cbe  Remove unnecessary blank line before sathub.nl configuration in Caddyfile (2025-09-29 02:53:35 +02:00) [all checks passed]
6c37372bc0  Remove unused obj.sathub.de configuration and caddy_network from MinIO service in Docker Compose (2025-09-29 02:40:25 +02:00) [all checks passed]
3a22417315  Add CORS configuration to SatHub service for improved API access (2025-09-29 01:29:55 +02:00) [all checks passed]
95bc4540db  Add SatHub service deployment with Docker Compose and configuration (2025-09-29 01:21:41 +02:00) [all checks passed]
902d797480  Refactor Cloudreve restart logic and update configs (2025-09-25 22:33:57 +02:00) [all checks passed]
    - Refactor Cloudreve tasks to use conditional restart
    - Remove unused displayData from Dashy config
    - Add NVM and Japanese input setup to bash.nix
e494369d11  Refactor formatting in update.py for improved readability (2025-09-24 18:40:25 +02:00) [all checks passed]
78f3133a1d  Fix formatting in Python workflow and update .gitignore to include Ansible files (2025-09-24 18:35:53 +02:00) [some checks failed]
d28c0fce66  Refactor shell aliases to move folder navigation aliases to the utility section (2025-09-24 18:32:05 +02:00) [some checks failed]
c6449affcc  Rename zed.jsonc.j2 to zed.jsonc and fix trailing commas (2025-09-24 16:12:34 +02:00) [some checks failed]
d33f367c5f  Move Zed config to Ansible template with 1Password secrets (2025-09-24 16:10:44 +02:00) [some checks failed]
e5723e0964  Update zed.jsonc (2025-09-24 16:04:45 +02:00) [some checks failed]
0bc609760c  change zed settings to use jsonc (2025-09-24 13:36:10 +02:00) [some checks failed]
edd8e90fec  Add JetBrains Toolbox autostart and update Zed config (2025-09-24 13:24:43 +02:00) [some checks failed]
ee0c73f6de  chore: add ssh config (2025-09-24 11:55:46 +02:00) [some checks failed]
60dd31fd1c  Add --system flag to update system packages in update.py (2025-09-23 17:26:44 +02:00) [some checks failed]
cc917eb375  Refactor bash config and env vars, set Zed as git editor (2025-09-23 17:13:24 +02:00) [some checks failed]
    - Move environment variable exports from sessionVariables to bashrc
    - Add more robust sourcing of .profile and .bashrc.local
    - Improve SSH_AUTH_SOCK override logic for 1Password
    - Remove redundant path and pyenv logic from profileExtra
    - Set git core.editor to "zed" instead of "nvim"
    - Add DOTFILES_PATH to global session variables
df0775f3b2  Update symlinks.yml (2025-09-23 16:39:31 +02:00) [some checks failed]
5f312d3128  wtf (2025-09-23 16:36:08 +02:00)
497fca49d9  linting (2025-09-23 14:29:47 +00:00) [some checks failed]
e3ea18c9da  updated file (2025-09-23 16:20:57 +02:00) [some checks failed]
6fcabcd1f3  Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles (2025-09-23 16:16:09 +02:00) [some checks failed]
3e25210f4c  remove stash, add bazarr, add cloudreve (2025-09-23 16:13:09 +02:00)
5ff84a4c0d  Remove GNOME extension management from workstation setup (2025-09-23 14:09:30 +00:00) [some checks failed]
29a439d095  Add isServer option and conditionally enable Git signing (2025-09-23 14:07:10 +00:00) [some checks failed]
cfb80bd819  linting (2025-09-23 14:06:26 +00:00)
8971d087a3  Remove secrets and auto-start actions and update imports (2025-09-23 13:59:48 +00:00) [some checks failed]
40063cfe6b  Refactor for consistent string quoting and formatting (2025-09-23 13:53:29 +00:00) [some checks failed]
2e5a06e9d5  Remove mennos-vm from inventory and playbook tasks (2025-09-23 13:51:42 +00:00) [some checks failed]
80ea4cd51b  Remove VSCode config and update Zed symlink and settings (2025-09-23 13:39:09 +00:00) [some checks failed]
    - Delete VSCode settings and argv files
    - Rename Zed settings file and update symlink destination
    - Add new Zed context servers and projects
    - Change icon and theme settings for Zed
    - Add .gitkeep to autostart directory
c659c599f4  fixed formatting (2025-09-23 13:35:37 +00:00) [some checks failed]
54fc080ef2  Remove debug tasks from global.yml and update git signing config (2025-09-23 13:32:48 +00:00) [some checks failed]
3d5ae84a25  Add SSH insteadOf rule for git.mvl.sh (2025-09-23 13:21:16 +00:00) [some checks failed]
dd3753fab4  refactor (2025-09-23 13:20:00 +00:00)
a04a4abef6  chore: replace prusaslicer for bambulab slicer since it supports the same printers and works better (2025-09-10 12:15:06 +02:00) [all checks failed]
fd5cb7f163  feat: add 3D printing applications to desired Flatpaks (2025-09-10 12:02:41 +02:00)
2e5d7d39ef  chore: move scrcpy package to Home Manager (2025-09-09 15:51:49 +02:00) [all checks failed]
422509eecc  Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles (2025-09-09 10:41:37 +02:00) [all checks failed]
c79142e117  Update editor settings and add new Zed projects (2025-09-09 10:41:03 +02:00)
2834c1c34e  Change VSCode theme to Catppuccin Latte and add new commands (2025-09-04 14:10:21 +02:00) [all checks failed]
fe73569e0b  Add Tdarr and Weather sections to Dashy config (2025-09-04 14:10:00 +02:00) [some checks failed]
08d233cae5  Add object storage volume for slow TV shows in Plex config (2025-09-04 14:09:43 +02:00) [checks cancelled]
91c11b0283  Update flake.lock for home-manager and nixpkgs revisions (2025-09-04 14:09:33 +02:00) [checks cancelled]
50b0844db8  Move Sabnzbd to its own network and expose port 7788 (2025-09-02 11:07:54 +02:00)
    Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
ad8cb0702d  fix: increase memory limit to 2G for arr-stack services (2025-08-31 01:43:00 +02:00) [some checks failed]
216d215663  fix: set dashy default to sametab and add extra hosts for status resolving of local services and add comfyui to dashy (2025-08-31 01:42:22 +02:00)
202 changed files with 2944 additions and 2818 deletions

.bashrc (deleted; 202 lines removed)

@@ -1,202 +0,0 @@
# HISTFILE Configuration (Bash equivalent)
HISTFILE=~/.bash_history
HISTSIZE=1000
HISTFILESIZE=2000 # Adjusted to match both histfile and size criteria
if [ -f /etc/os-release ]; then
distro=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')
if [[ "$distro" == *"Pop!_OS"* ]]; then
export CGO_CFLAGS="-I/usr/include"
fi
fi
# For microsoft-standard-WSL2 in uname -a
if [[ "$(uname -a)" == *"microsoft-standard-WSL2"* ]]; then
source $HOME/.agent-bridge.sh
alias winget='winget.exe'
fi
# Set SSH_AUTH_SOCK to ~/.1password/agent.sock, but only if we don't already have a SSH_AUTH_SOCK
if [ -z "$SSH_AUTH_SOCK" ]; then
export SSH_AUTH_SOCK=~/.1password/agent.sock
fi
# If brave is available as browser set CHROME_EXECUTABLE to that.
if command -v brave-browser &> /dev/null; then
export CHROME_EXECUTABLE=/usr/bin/brave-browser
fi
# Docker Compose Alias (Mostly for old shell scripts)
alias docker-compose='docker compose'
# Modern tools aliases
alias l="eza --header --long --git --group-directories-first --group --icons --color=always --sort=name --hyperlink -o --no-permissions"
alias ll='l'
alias la='l -a'
alias cat='bat'
alias du='dust'
alias df='duf'
alias augp='sudo apt update && sudo apt upgrade -y && sudo apt autopurge -y && sudo apt autoclean'
# Docker Aliases
alias d='docker'
alias dc='docker compose'
alias dce='docker compose exec'
alias dcl='docker compose logs'
alias dcd='docker compose down'
alias dcu='docker compose up'
alias dcp='docker compose ps'
alias dcps='docker compose ps'
alias dcpr='dcp && dcd && dcu -d && dcl -f'
alias dcr='dcd && dcu -d && dcl -f'
alias ddpul='docker compose down && docker compose pull && docker compose up -d && docker compose logs -f'
alias docker-nuke='docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker system prune --all --volumes --force && docker volume prune --force'
# Git aliases
alias g='git'
alias gg='git pull'
alias gl='git log --stat'
alias gp='git push'
alias gs='git status -s'
alias gst='git status'
alias ga='git add'
alias gc='git commit'
alias gcm='git commit -m'
alias gco='git checkout'
alias gcb='git checkout -b'
# Kubernetes aliases (Minikube)
alias kubectl="minikube kubectl --"
alias zeditor=~/.local/bin/zed
alias zed=~/.local/bin/zed
alias ssh="~/.local/bin/smart-ssh"
# random string (Syntax: random <length>)
alias random='openssl rand -base64'
# Alias for ls to l but only if it's an interactive shell because we don't want to override ls in scripts which could blow up in our face
if [ -t 1 ]; then
alias ls='l'
fi
# PATH Manipulation
export DOTFILES_PATH=$HOME/.dotfiles
export PATH=$PATH:$HOME/.local/bin
export PATH=$PATH:$HOME/.cargo/bin
export PATH=$PATH:$DOTFILES_PATH/bin
export PATH="/usr/bin:$PATH"
if [ -d /usr/lib/pkgconfig ]; then
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/share/pkgconfig:$PKG_CONFIG_PATH
fi
# Include spicetify if it exists
if [ -d "$HOME/.spicetify" ]; then
export PATH=$PATH:$HOME/.spicetify
fi
# Include pyenv if it exists
if [ -d "$HOME/.pyenv" ]; then
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"
fi
# Include pnpm if it exists
if [ -d "$HOME/.local/share/pnpm" ]; then
export PATH=$PATH:$HOME/.local/share/pnpm
fi
# Miniconda
export PATH="$HOME/miniconda3/bin:$PATH"
# In case $HOME/.flutter/flutter/bin is found, we can add it to the PATH
if [ -d "$HOME/.flutter/flutter/bin" ]; then
export PATH=$PATH:$HOME/.flutter/flutter/bin
export PATH="$PATH":"$HOME/.pub-cache/bin"
# Flutter linux fixes:
export CPPFLAGS="-I/usr/include"
export LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lbz2"
export PKG_CONFIG_PATH=/usr/lib/x86_64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH
fi
# Add flatpak to XDG_DATA_DIRS
export XDG_DATA_DIRS=$XDG_DATA_DIRS:/usr/share:/var/lib/flatpak/exports/share:$HOME/.local/share/flatpak/exports/share
# Allow unfree nixos
export NIXPKGS_ALLOW_UNFREE=1
# Allow insecure nixpkgs
export NIXPKGS_ALLOW_INSECURE=1
# Tradaware / DiscountOffice Configuration
if [ -d "/home/menno/Projects/Work" ]; then
export TRADAWARE_DEVOPS=true
fi
# 1Password Source Plugin (Assuming bash compatibility)
if [ -f /home/menno/.config/op/plugins.sh ]; then
source /home/menno/.config/op/plugins.sh
fi
# Initialize starship if available
if ! command -v starship &> /dev/null; then
echo "FYI, starship not found"
else
export STARSHIP_ENABLE_RIGHT_PROMPT=true
export STARSHIP_ENABLE_BASH_CONTINUATION=true
eval "$(starship init bash)"
fi
# Read .op_sat
if [ -f ~/.op_sat ]; then
export OP_SERVICE_ACCOUNT_TOKEN=$(cat ~/.op_sat)
# Ensure .op_sat is 0600 and only readable by the owner
if [ "$(stat -c %a ~/.op_sat)" != "600" ]; then
echo "WARNING: ~/.op_sat is not 0600, please fix this!"
fi
if [ "$(stat -c %U ~/.op_sat)" != "$(whoami)" ]; then
echo "WARNING: ~/.op_sat is not owned by the current user, please fix this!"
fi
fi
# Source nix home-manager
if [ -f "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh" ]; then
. "$HOME/.nix-profile/etc/profile.d/hm-session-vars.sh"
fi
# Source ble.sh if it exists
if [[ -f "${HOME}/.nix-profile/share/blesh/ble.sh" ]]; then
source "${HOME}/.nix-profile/share/blesh/ble.sh"
# Custom function for fzf history search
function fzf_history_search() {
local selected
selected=$(history | fzf --tac --height=40% --layout=reverse --border --info=inline \
--query="$READLINE_LINE" \
--color 'fg:#ebdbb2,bg:#282828,hl:#fabd2f,fg+:#ebdbb2,bg+:#3c3836,hl+:#fabd2f' \
--color 'info:#83a598,prompt:#bdae93,spinner:#fabd2f,pointer:#83a598,marker:#fe8019,header:#665c54' \
| sed 's/^ *[0-9]* *//')
if [[ -n "$selected" ]]; then
READLINE_LINE="$selected"
READLINE_POINT=${#selected}
fi
ble-redraw-prompt
}
# Bind Ctrl+R to our custom function
bind -x '"\C-r": fzf_history_search'
fi
# In case a .bashrc.local exists, source it
if [ -f $HOME/.bashrc.local ]; then
source $HOME/.bashrc.local
fi
# Display a welcome message for interactive shells
if [ -t 1 ]; then
helloworld
fi
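The `~/.op_sat` block above pairs a secret file with an ownership and mode check. That pattern generalizes to any secret file; here is a standalone sketch (the temp file is a stand-in for illustration, not a real dotfiles path):

```shell
#!/bin/sh
# Illustrative: detect and repair overly broad permissions on a secret file.
SECRET=$(mktemp)
chmod 644 "$SECRET"            # simulate a file created with loose permissions
perms=$(stat -c %a "$SECRET")
if [ "$perms" != "600" ]; then
    echo "WARNING: $SECRET is mode $perms, expected 600; fixing"
    chmod 600 "$SECRET"
fi
stat -c %a "$SECRET"
rm -f "$SECRET"
```

Unlike the original, this variant repairs the mode instead of only warning; which behavior is right depends on whether a wrong mode should be treated as a compromise indicator.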

@@ -3,7 +3,7 @@ name: Python Lint Check
 on:
   pull_request:
   push:
-    branches: [ master ]
+    branches: [master]
 jobs:
   check-python:
@@ -29,7 +29,7 @@ jobs:
       exit 0
     fi
-    pylint $python_files
+    pylint --exit-zero $python_files
 - name: Check Black formatting
   run: |
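For context on the `--exit-zero` change: the flag makes pylint print its full report but always exit 0, so findings no longer fail the workflow step. The effect on CI is analogous to this illustrative snippet (no pylint needed to see it):

```shell
#!/bin/sh
# A linter that finds issues exits nonzero, which fails a CI step:
sh -c 'exit 4'
echo "without --exit-zero: status=$?"

# Forcing a zero exit (which is what --exit-zero does for pylint's own
# exit code) keeps the step green while the report is still printed:
sh -c 'exit 4' || true
echo "with --exit-zero:    status=$?"
```

The trade-off is that lint regressions stop blocking merges, so the report has to be read by a human instead.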

.gitignore (vendored; 2 lines added)

@@ -1,2 +1,4 @@
 logs/*
 **/__pycache__/
+.ansible/
+.ansible/.lock

@@ -1,16 +1,13 @@
 # Setup
 This dotfiles is intended to be used with either Fedora 40>, Ubuntu 20.04> or Arch Linux.
-Please install a clean version of either distro with GNOME and then follow the steps below.
+Please install a clean version of either distro and then follow the steps below.
 ## Installation
 ### 0. Install distro
 Download the latest ISO from your desired distro and write it to a USB stick.
-I'd recommend getting the GNOME version as it's easier to setup unless you're planning on setting up a server, in that case I recommend getting the server ISO for the specific distro.
-#### Note: If you intend on using a desktop environment you should select the GNOME version as this dotfiles repository expects the GNOME desktop environment for various configurations
 ### 1. Clone dotfiles to home directory
@@ -44,15 +41,6 @@ Run the `dotf update` command, although the setup script did most of the work so
 ```
 dotf update
 ```
-### 5. Decrypt secrets
-Either using 1Password or by manualling providing the decryption key you should decrypt the secrets.
-Various configurations depend on the secrets to be decrypted such as the SSH keys, yubikey pam configuration and more.
-```bash
-dotf secrets decrypt
-```
 ### 6. Profit
 You should now have a fully setup system with all the configurations applied.
@@ -65,12 +53,13 @@ Here are some paths that contain files named after the hostname of the system.
 If you add a new system you should add the relevant files to these paths.
 - `config/ssh/authorized_keys`: Contains the public keys per hostname that will be symlinked to the `~/.ssh/authorized_keys` file.
-- `config/home-manager/flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
+- `flake.nix`: Contains an array `homeConfigurations` where you should be adding the new system hostname and relevant configuration.
 ### Server reboots
 In case you reboot a server, it's likely that this runs JuiceFS.
 To be sure that every service is properly accessing JuiceFS mounted files you should probably restart the services once when the server comes online.
 ```bash
 dotf service stop --all
 df # confirm JuiceFS is mounted
@@ -81,16 +70,19 @@ dotf service start --all
 In case you need to adjust anything regarding the /mnt/object_storage JuiceFS.
 Ensure to shut down all services:
 ```bash
 dotf service stop --all
 ```
 Unmount the volume:
 ```bash
 sudo systemctl stop juicefs
 ```
 And optionally if you're going to do something with metadata you might need to stop redis too.
 ```bash
 cd ~/services/juicefs-redis/
 docker compose down --remove-orphans
@@ -103,6 +95,7 @@ To add a new system you should follow these steps:
 1. Add the relevant files shown in the section above.
 2. Ensure you've either updated or added the `$HOME/.hostname` file with the hostname of the system.
 3. Run `dotf update` to ensure the symlinks are properly updated/created.
 ---
 ## Using 1Password SSH Agent with WSL2 (Windows 11)
@@ -132,5 +125,6 @@ This setup allows you to use your 1Password-managed SSH keys inside WSL2. The WS
 - If your 1Password keys are listed, the setup is complete.
 #### References
 - [Using 1Password's SSH Agent with WSL2](https://dev.to/d4vsanchez/use-1password-ssh-agent-in-wsl-2j6m)
 - [How to change the PATH environment variable in Windows](https://www.wikihow.com/Change-the-PATH-Environment-Variable-on-Windows)
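The server-reboot procedure in the README diff above hinges on confirming the JuiceFS mount before restarting services (the `df` step). A small guard along those lines, using the `/mnt/object_storage` path the README mentions, might look like this sketch:

```shell
#!/bin/sh
# Refuse to restart services unless the JuiceFS volume is actually mounted;
# otherwise services would silently write into the bare mount directory.
MOUNT=/mnt/object_storage
if mountpoint -q "$MOUNT"; then
    echo "OK: $MOUNT is mounted; safe to run 'dotf service start --all'"
else
    echo "WARNING: $MOUNT is not a mountpoint; do not start services yet"
fi
```

`mountpoint -q` (util-linux) is a more precise check than eyeballing `df` output, since it tests the exact path rather than the whole mount table.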


@@ -0,0 +1,82 @@
---
flatpaks: false
install_ui_apps: false
# European countries for EU-specific access control
eu_countries_codes:
- AL # Albania
- AD # Andorra
- AM # Armenia
- AT # Austria
- AZ # Azerbaijan
# - BY # Belarus (Belarus is disabled due to geopolitical reasons)
- BE # Belgium
- BA # Bosnia and Herzegovina
- BG # Bulgaria
- HR # Croatia
- CY # Cyprus
- CZ # Czech Republic
- DK # Denmark
- EE # Estonia
- FI # Finland
- FR # France
- GE # Georgia
- DE # Germany
- GR # Greece
- HU # Hungary
- IS # Iceland
- IE # Ireland
- IT # Italy
- XK # Kosovo
- LV # Latvia
- LI # Liechtenstein
- LT # Lithuania
- LU # Luxembourg
- MK # North Macedonia
- MT # Malta
- MD # Moldova
- MC # Monaco
- ME # Montenegro
- NL # Netherlands
- NO # Norway
- PL # Poland
- PT # Portugal
- RO # Romania
# - RU # Russia (disabled due to geopolitical reasons)
- SM # San Marino
- RS # Serbia
- SK # Slovakia
- SI # Slovenia
- ES # Spain
- SE # Sweden
- CH # Switzerland
- TR # Turkey
- UA # Ukraine
- GB # United Kingdom
- VA # Vatican City
# Trusted non-EU countries for extended access control
trusted_countries_codes:
- US # United States
- AU # Australia
- NZ # New Zealand
- JP # Japan
# Countries that are allowed to access the server Caddy reverse proxy
allowed_countries_codes:
- US # United States
- GB # United Kingdom
- DE # Germany
- FR # France
- IT # Italy
- NL # Netherlands
- JP # Japan
- KR # South Korea
- CH # Switzerland
- AU # Australia (Added for UpDown.io to monitor server uptime)
- CA # Canada (Added for UpDown.io to monitor server uptime)
- FI # Finland (Added for UpDown.io to monitor server uptime)
- SG # Singapore (Added for UpDown.io to monitor server uptime)
# Enable/disable country blocking globally
enable_country_blocking: true


@@ -3,6 +3,9 @@ mennos-laptop ansible_connection=local
mennos-desktop ansible_connection=local
[servers]
mennos-vps ansible_connection=local
mennos-server ansible_connection=local
-mennos-vm ansible_connection=local
+mennos-rtlsdr-pc ansible_connection=local
mennos-desktop ansible_connection=local
[wsl]
mennos-desktopw ansible_connection=local

ansible/playbook.yml Normal file

@@ -0,0 +1,19 @@
---
- name: Configure all hosts
hosts: all
handlers:
- name: Import handler tasks
ansible.builtin.import_tasks: handlers/main.yml
gather_facts: true
tasks:
- name: Include global tasks
ansible.builtin.import_tasks: tasks/global/global.yml
- name: Include workstation tasks
ansible.builtin.import_tasks: tasks/workstations/workstation.yml
when: inventory_hostname in ['mennos-laptop', 'mennos-desktop']
- name: Include server tasks
ansible.builtin.import_tasks: tasks/servers/server.yml
when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc', 'mennos-desktopw']
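A hedged sketch of how a playbook like this is typically invoked; the `ansible/hosts` inventory path is taken from the files in this diff, and the guard keeps it safe to paste on a machine without Ansible:

```bash
# Validate the playbook only when both Ansible and the repo files are present
if command -v ansible-playbook >/dev/null && [ -f ansible/playbook.yml ]; then
  ansible-playbook -i ansible/hosts ansible/playbook.yml --syntax-check
else
  echo "skipping: ansible-playbook or ansible/playbook.yml not available"
fi
```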


@@ -1,21 +1,9 @@
---
- name: Include global symlinks tasks
ansible.builtin.import_tasks: tasks/global/symlinks.yml
- name: Gather package facts
ansible.builtin.package_facts:
manager: auto
become: true
- name: Debug ansible_facts for troubleshooting
ansible.builtin.debug:
msg: |
OS Family: {{ ansible_facts['os_family'] }}
Distribution: {{ ansible_facts['distribution'] }}
Package Manager: {{ ansible_pkg_mgr }}
Kernel: {{ ansible_kernel }}
tags: debug
- name: Include Tailscale tasks
ansible.builtin.import_tasks: tasks/global/tailscale.yml
become: true
@@ -131,7 +119,7 @@
ansible.builtin.replace:
path: /etc/sudoers
regexp: '^Defaults\s+env_reset(?!.*pwfeedback)'
-replace: 'Defaults env_reset,pwfeedback'
+replace: "Defaults env_reset,pwfeedback"
-validate: 'visudo -cf %s'
+validate: "visudo -cf %s"
become: true
tags: sudoers


@@ -15,14 +15,14 @@
- name: Scan utils folder for files
ansible.builtin.find:
-paths: "{{ dotfiles_path }}/config/ansible/tasks/global/utils"
+paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: file
register: utils_files
become: false
- name: Scan utils folder for Go projects (directories with go.mod)
ansible.builtin.find:
-paths: "{{ dotfiles_path }}/config/ansible/tasks/global/utils"
+paths: "{{ dotfiles_path }}/ansible/tasks/global/utils"
file_type: directory
recurse: true
register: utils_dirs


@@ -13,6 +13,12 @@ smart_aliases:
desktop:
primary: "desktop-local"
fallback: "desktop"
check_host: "192.168.1.250"
timeout: "2s"
server:
primary: "server-local"
fallback: "server"
check_host: "192.168.1.254"
timeout: "2s"
@@ -22,6 +28,12 @@ smart_aliases:
check_host: "192.168.1.253"
timeout: "2s"
rtlsdr:
primary: "rtlsdr-local"
fallback: "rtlsdr"
check_host: "192.168.1.252"
timeout: "2s"
# Background SSH Tunnel Definitions
tunnels:
# Example: Desktop database tunnel


@@ -70,13 +70,16 @@ type Config struct {
}
const (
-realSSHPath = "/usr/bin/ssh"
+defaultSSHPath = "/usr/bin/ssh"
wslSSHPath = "ssh.exe"
wslDetectPath = "/mnt/c/Windows/System32/cmd.exe"
)
var (
configDir string
tunnelsDir string
config *Config
sshPath string // Will be set based on WSL2 detection
// Global flags
tunnelMode bool
@@ -110,6 +113,9 @@ var tunnelCmd = &cobra.Command{
}
func init() {
// Detect and set SSH path based on environment (WSL2 vs native Linux)
sshPath = detectSSHPath()
// Initialize config directory
homeDir, err := os.UserHomeDir()
if err != nil {
@@ -141,6 +147,13 @@ func init() {
// Initialize logging
initLogging(config.Logging)
// Log SSH path detection (after logging is initialized)
if isWSL2() {
log.Debug().Str("ssh_path", sshPath).Msg("WSL2 detected, using Windows SSH")
} else {
log.Debug().Str("ssh_path", sshPath).Msg("Native Linux environment, using Linux SSH")
}
// Global flags
rootCmd.PersistentFlags().BoolVarP(&tunnelMode, "tunnel", "T", false, "Enable tunnel mode")
rootCmd.Flags().BoolVarP(&tunnelOpen, "open", "O", false, "Open a tunnel")
@@ -169,6 +182,22 @@ func init() {
}
}
// detectSSHPath determines the correct SSH binary path based on the environment
func detectSSHPath() string {
if isWSL2() {
// In WSL2, use Windows SSH
return wslSSHPath
}
// Default to Linux SSH
return defaultSSHPath
}
// isWSL2 checks if we're running in WSL2 by looking for Windows System32
func isWSL2() bool {
_, err := os.Stat(wslDetectPath)
return err == nil
}
func main() {
// Check if this is a tunnel command first
args := os.Args[1:]
@@ -563,7 +592,7 @@ func openTunnel(name string) error {
log.Debug().Strs("command", cmdArgs).Msg("Starting SSH tunnel")
// Start SSH process
-cmd := exec.Command(realSSHPath, cmdArgs[1:]...)
+cmd := exec.Command(sshPath, cmdArgs[1:]...)
// Capture stderr to see any SSH errors
var stderr bytes.Buffer
@@ -708,7 +737,9 @@ func createAdhocTunnel() (TunnelDefinition, error) {
}
func buildSSHCommand(tunnel TunnelDefinition, sshHost string) []string {
-args := []string{"ssh", "-f", "-N"}
+// Use the detected SSH path basename for the command
sshBinary := filepath.Base(sshPath)
args := []string{sshBinary, "-f", "-N"}
switch tunnel.Type {
case "local":
@@ -1056,18 +1087,37 @@ func findSSHProcessByPort(port int) int {
// executeRealSSH executes the real SSH binary with given arguments
func executeRealSSH(args []string) {
-// Check if real SSH exists
-if _, err := os.Stat(realSSHPath); os.IsNotExist(err) {
-log.Error().Str("path", realSSHPath).Msg("Real SSH binary not found")
-fmt.Fprintf(os.Stderr, "Error: Real SSH binary not found at %s\n", realSSHPath)
+log.Debug().Str("ssh_path", sshPath).Strs("args", args).Msg("Executing real SSH")
+// In WSL2, we need to use exec.Command instead of syscall.Exec for Windows binaries
+if isWSL2() {
cmd := exec.Command(sshPath, args...)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err := cmd.Run()
if err != nil {
if exitErr, ok := err.(*exec.ExitError); ok {
os.Exit(exitErr.ExitCode())
}
log.Error().Err(err).Msg("Failed to execute SSH")
fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)
os.Exit(1)
}
os.Exit(0)
}
// For native Linux, check if SSH exists
if _, err := os.Stat(sshPath); os.IsNotExist(err) {
log.Error().Str("path", sshPath).Msg("Real SSH binary not found")
fmt.Fprintf(os.Stderr, "Error: Real SSH binary not found at %s\n", sshPath)
os.Exit(1)
}
-log.Debug().Str("ssh_path", realSSHPath).Strs("args", args).Msg("Executing real SSH")
-// Execute the real SSH binary
-// Using syscall.Exec to replace current process (like exec in shell)
-err := syscall.Exec(realSSHPath, append([]string{"ssh"}, args...), os.Environ())
+// Execute the real SSH binary using syscall.Exec (Linux only)
+// This replaces the current process (like exec in shell)
+err := syscall.Exec(sshPath, append([]string{"ssh"}, args...), os.Environ())
if err != nil {
log.Error().Err(err).Msg("Failed to execute SSH")
fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)


@@ -18,7 +18,7 @@
#!/bin/bash
# Run dynamic DNS update (binary compiled by utils.yml)
-{{ ansible_user_dir }}/.local/bin/dynamic-dns-cf -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl" 2>&1 | logger -t dynamic-dns
+{{ ansible_user_dir }}/.local/bin/dynamic-dns-cf -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl,sathub.de,sathub.nl" 2>&1 | logger -t dynamic-dns
become: true
- name: Create dynamic DNS systemd timer
@@ -83,6 +83,6 @@
- Manual run: sudo /usr/local/bin/dynamic-dns-update.sh
- Domains: vleeuwen.me, mvl.sh, mennovanleeuwen.nl
-when: inventory_hostname == 'mennos-desktop'
+when: inventory_hostname == 'mennos-server' or inventory_hostname == 'mennos-vps'
tags:
- dynamic-dns


@@ -70,7 +70,7 @@
- name: Include JuiceFS Redis tasks
ansible.builtin.include_tasks: services/redis/redis.yml
-when: inventory_hostname == 'mennos-desktop'
+when: inventory_hostname == 'mennos-server'
- name: Enable and start JuiceFS service
ansible.builtin.systemd:


@@ -0,0 +1,165 @@
---
- name: Server setup
block:
- name: Ensure openssh-server is installed on Arch-based systems
ansible.builtin.package:
name: openssh
state: present
when: ansible_pkg_mgr == 'pacman'
- name: Ensure openssh-server is installed on non-Arch systems
ansible.builtin.package:
name: openssh-server
state: present
when: ansible_pkg_mgr != 'pacman'
- name: Ensure Borg is installed on Arch-based systems
ansible.builtin.package:
name: borg
state: present
become: true
when: ansible_pkg_mgr == 'pacman'
- name: Ensure Borg is installed on Debian/Ubuntu systems
ansible.builtin.package:
name: borgbackup
state: present
become: true
when: ansible_pkg_mgr != 'pacman'
- name: Include JuiceFS tasks
ansible.builtin.include_tasks: juicefs.yml
tags:
- juicefs
- name: Include Dynamic DNS tasks
ansible.builtin.include_tasks: dynamic-dns.yml
tags:
- dynamic-dns
- name: Include Borg Backup tasks
ansible.builtin.include_tasks: borg-backup.yml
tags:
- borg-backup
- name: Include Borg Local Sync tasks
ansible.builtin.include_tasks: borg-local-sync.yml
tags:
- borg-local-sync
- name: System performance optimizations
ansible.posix.sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: true
become: true
loop:
- { name: "fs.file-max", value: "2097152" } # Max open files for the entire system
- { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have
- { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory
- { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches
- { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket
- { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input
- { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state
- { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections
- name: Include service tasks
ansible.builtin.include_tasks: "services/{{ item.name }}/{{ item.name }}.yml"
loop: "{{ services | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list if specific_service is not defined else services | selectattr('name', 'equalto', specific_service) | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list }}"
loop_control:
label: "{{ item.name }}"
tags:
- services
- always
vars:
services:
- name: dashy
enabled: true
hosts:
- mennos-server
- name: gitea
enabled: true
hosts:
- mennos-server
- name: factorio
enabled: true
hosts:
- mennos-server
- name: dozzle
enabled: true
hosts:
- mennos-server
- name: beszel
enabled: true
hosts:
- mennos-server
- name: caddy
enabled: true
hosts:
- mennos-server
- name: golink
enabled: true
hosts:
- mennos-server
- name: immich
enabled: true
hosts:
- mennos-server
- name: plex
enabled: true
hosts:
- mennos-server
- name: tautulli
enabled: true
hosts:
- mennos-server
- name: downloaders
enabled: true
hosts:
- mennos-server
- name: wireguard
enabled: true
hosts:
- mennos-server
- name: nextcloud
enabled: true
hosts:
- mennos-server
- name: cloudreve
enabled: true
hosts:
- mennos-server
- name: echoip
enabled: true
hosts:
- mennos-server
- name: arr-stack
enabled: true
hosts:
- mennos-server
- name: home-assistant
enabled: true
hosts:
- mennos-server
- name: privatebin
enabled: true
hosts:
- mennos-server
- name: unifi-network-application
enabled: true
hosts:
- mennos-server
- name: avorion
enabled: false
hosts:
- mennos-server
- name: sathub
enabled: true
hosts:
- mennos-server
- name: necesse
enabled: true
hosts:
- mennos-server


@@ -35,3 +35,4 @@
tags:
- services
- arr_stack
- arr-stack


@@ -20,7 +20,7 @@ services:
deploy:
resources:
limits:
-memory: 1G
+memory: 2G
sonarr:
image: linuxserver/sonarr:latest
@@ -42,20 +42,21 @@ services:
deploy:
resources:
limits:
-memory: 1G
+memory: 2G
-whisparr:
+bazarr:
-image: ghcr.io/hotio/whisparr:latest
+image: ghcr.io/hotio/bazarr:latest
+container_name: bazarr
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
ports:
-- 6969:6969
+- 6767:6767
extra_hosts:
- host.docker.internal:host-gateway
volumes:
-- {{ arr_stack_data_dir }}/whisparr-config:/config
+- {{ arr_stack_data_dir }}/bazarr-config:/config
- /mnt/data:/mnt/data
restart: unless-stopped
networks:
@@ -63,7 +64,7 @@ services:
deploy:
resources:
limits:
-memory: 1G
+memory: 512M
prowlarr:
container_name: prowlarr
@@ -127,6 +128,53 @@ services:
limits:
memory: 512M
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr
environment:
- PUID=1000
- PGID=100
- TZ=Europe/Amsterdam
- serverIP=0.0.0.0
- serverPort=8266
- webUIPort=8265
- internalNode=true
- inContainer=true
- ffmpegVersion=7
- nodeName=MyInternalNode
- auth=false
- openBrowser=true
- maxLogSizeMB=10
- cronPluginUpdate=
- NVIDIA_DRIVER_CAPABILITIES=all
- NVIDIA_VISIBLE_DEVICES=all
volumes:
- {{ arr_stack_data_dir }}/tdarr-server:/app/server
- {{ arr_stack_data_dir }}/tdarr-config:/app/configs
- {{ arr_stack_data_dir }}/tdarr-logs:/app/logs
- /mnt/data:/media
- {{ arr_stack_data_dir }}/tdarr-cache:/temp
ports:
- 8265:8265
- 8266:8266
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
runtime: nvidia
devices:
- /dev/dri:/dev/dri
networks:
- arr_stack_net
deploy:
resources:
limits:
memory: 4G
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
networks:
arr_stack_net:
caddy_network:


@@ -5,9 +5,9 @@
}
}
-# Country blocking snippet using MaxMind GeoLocation - reusable across all sites
+# Country allow list snippet using MaxMind GeoLocation - reusable across all sites
{% if enable_country_blocking | default(false) and allowed_countries_codes | default([]) | length > 0 %}
-(country_block) {
+(country_allow) {
@allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@@ -23,68 +23,170 @@
respond @not_allowed_countries "Access denied" 403
}
{% else %}
-(country_block) {
-# Country blocking disabled
+(country_allow) {
+# Country allow list disabled
}
{% endif %}
-{% if inventory_hostname == 'mennos-desktop' %}
+# European country allow list - allows all European countries only
{% if eu_countries_codes | default([]) | length > 0 %}
(eu_country_allow) {
@eu_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@eu_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ eu_countries_codes | join(' ') }}
}
}
}
respond @eu_not_allowed_countries "Access denied" 403
}
{% else %}
(eu_country_allow) {
# EU country allow list disabled
}
{% endif %}
# Trusted country allow list - allows US, Australia, New Zealand, and Japan
{% if trusted_countries_codes | default([]) | length > 0 %}
(trusted_country_allow) {
@trusted_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@trusted_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ trusted_countries_codes | join(' ') }}
}
}
}
respond @trusted_not_allowed_countries "Access denied" 403
}
{% else %}
(trusted_country_allow) {
# Trusted country allow list disabled
}
{% endif %}
# Sathub country allow list - combines EU and trusted countries
{% if eu_countries_codes | default([]) | length > 0 and trusted_countries_codes | default([]) | length > 0 %}
(sathub_country_allow) {
@sathub_allowed_local {
remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
}
@sathub_not_allowed_countries {
not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
not {
maxmind_geolocation {
db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
allow_countries {{ (eu_countries_codes + trusted_countries_codes) | join(' ') }}
}
}
}
respond @sathub_not_allowed_countries "Access denied" 403
}
{% else %}
(sathub_country_allow) {
# Sathub country allow list disabled
}
{% endif %}
{% if inventory_hostname == 'mennos-server' %}
git.mvl.sh {
-import country_block
+import country_allow
reverse_proxy gitea:3000
tls {{ caddy_email }}
}
git.vleeuwen.me {
-import country_block
+import country_allow
redir https://git.mvl.sh{uri}
tls {{ caddy_email }}
}
df.mvl.sh {
-import country_block
+import country_allow
redir / https://git.mvl.sh/vleeuwenmenno/dotfiles/raw/branch/master/setup.sh
tls {{ caddy_email }}
}
fsm.mvl.sh {
-import country_block
+import country_allow
reverse_proxy factorio-server-manager:80
tls {{ caddy_email }}
}
fsm.vleeuwen.me {
-import country_block
+import country_allow
redir https://fsm.mvl.sh{uri}
tls {{ caddy_email }}
}
beszel.mvl.sh {
-import country_block
+import country_allow
reverse_proxy beszel:8090
tls {{ caddy_email }}
}
beszel.vleeuwen.me {
-import country_block
+import country_allow
redir https://beszel.mvl.sh{uri}
tls {{ caddy_email }}
}
sathub.de {
import sathub_country_allow
handle {
reverse_proxy sathub-frontend:4173
}
# Enable compression
encode gzip
# Security headers
header {
X-Frame-Options "SAMEORIGIN"
X-Content-Type-Options "nosniff"
X-XSS-Protection "1; mode=block"
Referrer-Policy "strict-origin-when-cross-origin"
Strict-Transport-Security "max-age=31536000; includeSubDomains"
}
tls {{ caddy_email }}
}
api.sathub.de {
import sathub_country_allow
reverse_proxy sathub-backend:4001
tls {{ caddy_email }}
}
sathub.nl {
import sathub_country_allow
redir https://sathub.de{uri}
tls {{ caddy_email }}
}
photos.mvl.sh {
-import country_block
+import country_allow
reverse_proxy immich:2283
tls {{ caddy_email }}
}
photos.vleeuwen.me {
-import country_block
+import country_allow
redir https://photos.mvl.sh{uri}
tls {{ caddy_email }}
}
home.mvl.sh {
-import country_block
+import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
@@ -93,7 +195,7 @@ home.mvl.sh {
}
home.vleeuwen.me {
-import country_block
+import country_allow
reverse_proxy host.docker.internal:8123 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
@@ -127,13 +229,13 @@ hotspot.mvl.sh:80 {
}
bin.mvl.sh {
-import country_block
+import country_allow
reverse_proxy privatebin:8080
tls {{ caddy_email }}
}
ip.mvl.sh ip.vleeuwen.me {
-import country_block
+import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
@@ -141,26 +243,26 @@ ip.mvl.sh ip.vleeuwen.me {
}
http://ip.mvl.sh http://ip.vleeuwen.me {
-import country_block
+import country_allow
reverse_proxy echoip:8080 {
header_up X-Real-IP {http.request.remote.host}
}
}
overseerr.mvl.sh {
-import country_block
+import country_allow
reverse_proxy overseerr:5055
tls {{ caddy_email }}
}
overseerr.vleeuwen.me {
-import country_block
+import country_allow
redir https://overseerr.mvl.sh{uri}
tls {{ caddy_email }}
}
plex.mvl.sh {
-import country_block
+import country_allow
reverse_proxy host.docker.internal:32400 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
@@ -169,13 +271,13 @@ plex.mvl.sh {
}
plex.vleeuwen.me {
-import country_block
+import country_allow
redir https://plex.mvl.sh{uri}
tls {{ caddy_email }}
}
tautulli.mvl.sh {
-import country_block
+import country_allow
reverse_proxy host.docker.internal:8181 {
header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host}
@@ -184,13 +286,37 @@ tautulli.mvl.sh {
}
tautulli.vleeuwen.me {
-import country_block
+import country_allow
redir https://tautulli.mvl.sh{uri}
tls {{ caddy_email }}
}
cloud.mvl.sh {
import country_allow
reverse_proxy cloudreve:5212 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
cloud.vleeuwen.me {
import country_allow
redir https://cloud.mvl.sh{uri}
tls {{ caddy_email }}
}
collabora.mvl.sh {
import country_allow
reverse_proxy collabora:9980 {
header_up Host {host}
header_up X-Real-IP {http.request.remote.host}
}
tls {{ caddy_email }}
}
drive.mvl.sh drive.vleeuwen.me {
-import country_block
+import country_allow
# CalDAV and CardDAV redirects
redir /.well-known/carddav /remote.php/dav/ 301


@@ -0,0 +1,32 @@
- name: Deploy Cloudreve service
tags:
- services
- cloudreve
block:
- name: Set Cloudreve directories
ansible.builtin.set_fact:
cloudreve_service_dir: "{{ ansible_env.HOME }}/.services/cloudreve"
cloudreve_data_dir: "/mnt/services/cloudreve"
- name: Create Cloudreve directory
ansible.builtin.file:
path: "{{ cloudreve_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Cloudreve docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ cloudreve_service_dir }}/docker-compose.yml"
mode: "0644"
register: cloudreve_compose
- name: Stop Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
when: cloudreve_compose.changed
- name: Start Cloudreve service
ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" up -d
changed_when: false
when: cloudreve_compose.changed


@@ -0,0 +1,67 @@
services:
cloudreve:
image: cloudreve/cloudreve:latest
depends_on:
- postgresql
- redis
restart: always
ports:
- 5212:5212
networks:
- caddy_network
- cloudreve
environment:
- CR_CONF_Database.Type=postgres
- CR_CONF_Database.Host=postgresql
- CR_CONF_Database.User=cloudreve
- CR_CONF_Database.Name=cloudreve
- CR_CONF_Database.Port=5432
- CR_CONF_Redis.Server=redis:6379
volumes:
- {{ cloudreve_data_dir }}/data:/cloudreve/data
postgresql:
image: postgres:17
environment:
- POSTGRES_USER=cloudreve
- POSTGRES_DB=cloudreve
- POSTGRES_HOST_AUTH_METHOD=trust
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/postgres:/var/lib/postgresql/data
collabora:
image: collabora/code
restart: unless-stopped
ports:
- 9980:9980
environment:
- domain=collabora\\.mvl\\.sh
- username=admin
- password=Dt3hgIJOPr3rgh
- dictionaries=en_US
- TZ=Europe/Amsterdam
- extra_params=--o:ssl.enable=false --o:ssl.termination=true
networks:
- cloudreve
- caddy_network
deploy:
resources:
limits:
memory: 1G
redis:
image: redis:latest
networks:
- cloudreve
volumes:
- {{ cloudreve_data_dir }}/redis:/data
networks:
cloudreve:
name: cloudreve
driver: bridge
caddy_network:
name: caddy_default
external: true


@@ -5,30 +5,37 @@ sections:
- name: Selfhosted
items:
- title: Plex
-icon: http://mennos-desktop:4000/assets/plex.svg
+icon: http://mennos-server:4000/assets/plex.svg
-url: https://plex.mvl.sh/identity
+url: https://plex.mvl.sh
statusCheckUrl: https://plex.mvl.sh/identity
statusCheck: true
id: 0_1035_plex
- title: Tautulli
-icon: http://mennos-desktop:4000/assets/tautulli.svg
+icon: http://mennos-server:4000/assets/tautulli.svg
url: https://tautulli.mvl.sh
id: 1_1035_tautulli
statusCheck: true
- title: Overseerr
-icon: http://mennos-desktop:4000/assets/overseerr.svg
+icon: http://mennos-server:4000/assets/overseerr.svg
url: https://overseerr.mvl.sh
id: 2_1035_overseerr
statusCheck: true
- title: Immich
-icon: http://mennos-desktop:4000/assets/immich.svg
+icon: http://mennos-server:4000/assets/immich.svg
url: https://photos.mvl.sh
id: 3_1035_immich
statusCheck: true
- title: Nextcloud
-icon: http://mennos-desktop:4000/assets/nextcloud.svg
+icon: http://mennos-server:4000/assets/nextcloud.svg
url: https://drive.mvl.sh
id: 3_1035_nxtcld
statusCheck: true
- title: ComfyUI
icon: http://mennos-server:8188/assets/favicon.ico
url: http://mennos-server:8188
statusCheckUrl: http://host.docker.internal:8188/api/system_stats
id: 3_1035_comfyui
statusCheck: true
displayData:
sortBy: default
rows: 1
@@ -38,17 +45,21 @@ sections:
- name: Media Management
items:
- title: Sonarr
-icon: http://mennos-desktop:4000/assets/sonarr.svg
+icon: http://mennos-server:4000/assets/sonarr.svg
url: http://go/sonarr
id: 0_1533_sonarr
- title: Radarr
-icon: http://mennos-desktop:4000/assets/radarr.svg
+icon: http://mennos-server:4000/assets/radarr.svg
url: http://go/radarr
id: 1_1533_radarr
- title: Prowlarr
-icon: http://mennos-desktop:4000/assets/prowlarr.svg
+icon: http://mennos-server:4000/assets/prowlarr.svg
url: http://go/prowlarr
id: 2_1533_prowlarr
- title: Tdarr
icon: http://mennos-server:4000/assets/tdarr.png
url: http://go/tdarr
id: 3_1533_tdarr
- name: Kagi
items:
- title: Kagi Search
@@ -66,7 +77,7 @@ sections:
- name: News
items:
- title: Nu.nl
-icon: http://mennos-desktop:4000/assets/nunl.svg
+icon: http://mennos-server:4000/assets/nunl.svg
url: https://www.nu.nl/
id: 0_380_nu
- title: Tweakers.net
@@ -80,7 +91,7 @@ sections:
 - name: Downloaders
 items:
 - title: qBittorrent
-icon: http://mennos-desktop:4000/assets/qbittorrent.svg
+icon: http://mennos-server:4000/assets/qbittorrent.svg
 url: http://go/qbit
 id: 0_1154_qbittorrent
 tags:
@@ -88,7 +99,7 @@ sections:
 - torrent
 - yarr
 - title: Sabnzbd
-icon: http://mennos-desktop:4000/assets/sabnzbd.svg
+icon: http://mennos-server:4000/assets/sabnzbd.svg
 url: http://go/sabnzbd
 id: 1_1154_sabnzbd
 tags:
@@ -98,7 +109,7 @@ sections:
 - name: Git
 items:
 - title: GitHub
-icon: http://mennos-desktop:4000/assets/github.svg
+icon: http://mennos-server:4000/assets/github.svg
 url: https://github.com/vleeuwenmenno
 id: 0_292_github
 tags:
@@ -106,7 +117,7 @@ sections:
 - git
 - hub
 - title: Gitea
-icon: http://mennos-desktop:4000/assets/gitea.svg
+icon: http://mennos-server:4000/assets/gitea.svg
 url: http://git.mvl.sh/vleeuwenmenno
 id: 1_292_gitea
 tags:
@@ -116,14 +127,14 @@ sections:
 - name: Server Monitoring
 items:
 - title: Beszel
-icon: http://mennos-desktop:4000/assets/beszel.svg
+icon: http://mennos-server:4000/assets/beszel.svg
 url: http://go/beszel
 tags:
 - monitoring
 - logs
 id: 0_1725_beszel
 - title: Dozzle
-icon: http://mennos-desktop:4000/assets/dozzle.svg
+icon: http://mennos-server:4000/assets/dozzle.svg
 url: http://go/dozzle
 id: 1_1725_dozzle
 tags:
@@ -139,25 +150,43 @@ sections:
 - name: Tools
 items:
 - title: Home Assistant
-icon: http://mennos-desktop:4000/assets/home-assistant.svg
+icon: http://mennos-server:4000/assets/home-assistant.svg
 url: http://go/homeassistant
 id: 0_529_homeassistant
 - title: Tailscale
-icon: http://mennos-desktop:4000/assets/tailscale.svg
+icon: http://mennos-server:4000/assets/tailscale.svg
 url: http://go/tailscale
 id: 1_529_tailscale
 - title: GliNet KVM
-icon: http://mennos-desktop:4000/assets/glinet.svg
+icon: http://mennos-server:4000/assets/glinet.svg
 url: http://go/glkvm
 id: 2_529_glinetkvm
 - title: Unifi Network Controller
-icon: http://mennos-desktop:4000/assets/unifi.svg
+icon: http://mennos-server:4000/assets/unifi.svg
 url: http://go/unifi
 id: 3_529_unifinetworkcontroller
 - title: Dashboard Icons
 icon: favicon
 url: https://dashboardicons.com/
 id: 4_529_dashboardicons
+- name: Weather
+items:
+- title: Buienradar
+icon: favicon
+url: https://www.buienradar.nl/weer/Beverwijk/NL/2758998
+id: 0_529_buienradar
+- title: ClearOutside
+icon: favicon
+url: https://clearoutside.com/forecast/52.49/4.66
+id: 1_529_clearoutside
+- title: Windy
+icon: favicon
+url: https://www.windy.com/
+id: 2_529_windy
+- title: Meteoblue
+icon: favicon
+url: https://www.meteoblue.com/en/country/weather/radar/the-netherlands_the-netherlands_2750405
+id: 2_529_meteoblue
 - name: DiscountOffice
 displayData:
 sortBy: default
@@ -207,7 +236,7 @@ sections:
 - discount
 - work
 - title: Proxmox
-icon: http://mennos-desktop:4000/assets/proxmox.svg
+icon: http://mennos-server:4000/assets/proxmox.svg
 url: https://www.transip.nl/cp/vps/prm/350680/
 id: 5_1429_proxmox
 tags:
@@ -223,38 +252,21 @@ sections:
 - discount
 - work
 - title: Kibana
-icon: http://mennos-desktop:4000/assets/kibana.svg
+icon: http://mennos-server:4000/assets/kibana.svg
 url: http://go/kibana
 id: 7_1429_kibana
 tags:
 - do
 - discount
 - work
-- name: Other
-items:
-- title: Whisparr
-icon: http://mennos-desktop:4000/assets/whisparr.svg
-url: http://go/whisparr
-id: 0_514_whisparr
-- title: Stash
-icon: http://mennos-desktop:4000/assets/stash.svg
-url: http://go/stash
-id: 1_514_stash
-displayData:
-sortBy: default
-rows: 1
-cols: 1
-collapsed: true
-hideForGuests: true
 appConfig:
 layout: auto
 iconSize: large
 theme: nord
 startingView: default
-defaultOpeningMethod: newtab
+defaultOpeningMethod: sametab
 statusCheck: false
 statusCheckInterval: 0
-faviconApi: local
 routingMode: history
 enableMultiTasking: false
 widgetsAlwaysUseProxy: false


@@ -8,6 +8,8 @@ services:
 - {{dashy_data_dir}}/:/app/user-data
 networks:
 - caddy_network
+extra_hosts:
+- host.docker.internal:host-gateway
 deploy:
 resources:
 limits:


@@ -11,7 +11,6 @@ services:
 - 6881:6881
 - 6881:6881/udp
 - 8085:8085 # Qbittorrent
-- 7788:8080 # Sabnzbd
 devices:
 - /dev/net/tun:/dev/net/tun
 volumes:
@@ -39,10 +38,8 @@ services:
 - {{ downloaders_data_dir }}/sabnzbd-config:/config
 - {{ local_data_dir }}:{{ local_data_dir }}
 restart: unless-stopped
-network_mode: "service:gluetun"
-depends_on:
-gluetun:
-condition: service_healthy
+ports:
+- 7788:8080
 deploy:
 resources:
 limits:


@@ -0,0 +1,15 @@
services:
necesse:
image: brammys/necesse-server
container_name: necesse
restart: unless-stopped
ports:
- "14159:14159/udp"
environment:
- MOTD=StarDebris' Server!
- PASSWORD=2142
- SLOTS=4
- PAUSE=1
volumes:
- {{ necesse_data_dir }}/saves:/necesse/saves
- {{ necesse_data_dir }}/logs:/necesse/logs


@@ -0,0 +1,41 @@
---
- name: Deploy Necesse service
block:
- name: Set Necesse directories
ansible.builtin.set_fact:
necesse_service_dir: "{{ ansible_env.HOME }}/.services/necesse"
necesse_data_dir: "/mnt/services/necesse"
- name: Create Necesse service directory
ansible.builtin.file:
path: "{{ necesse_service_dir }}"
state: directory
mode: "0755"
- name: Create Necesse data directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ necesse_data_dir }}"
- "{{ necesse_data_dir }}/saves"
- "{{ necesse_data_dir }}/logs"
- name: Deploy Necesse docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ necesse_service_dir }}/docker-compose.yml"
mode: "0644"
register: necesse_compose
- name: Stop Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" down --remove-orphans
when: necesse_compose.changed
- name: Start Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d
when: necesse_compose.changed
tags:
- services
- necesse


@@ -14,9 +14,10 @@ services:
 volumes:
 - {{ plex_data_dir }}/config:/config
 - {{ plex_data_dir }}/transcode:/transcode
-- {{ '/mnt/data/movies' }}:/movies
-- {{ '/mnt/data/tvshows' }}:/tvshows
-- {{ '/mnt/data/music' }}:/music
+- /mnt/data/movies:/movies
+- /mnt/data/tvshows:/tvshows
+- /mnt/object_storage/tvshows:/tvshows_slow
+- /mnt/data/music:/music
 deploy:
 resources:
 limits:


@@ -0,0 +1,17 @@
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
ports:
- 6333:6333
- 6334:6334
expose:
- 6333
- 6334
- 6335
volumes:
- /mnt/services/qdrant:/qdrant/storage
deploy:
resources:
limits:
memory: 2G


@@ -0,0 +1,32 @@
- name: Deploy Qdrant service
tags:
- services
- qdrant
block:
- name: Set Qdrant directories
ansible.builtin.set_fact:
qdrant_service_dir: "{{ ansible_env.HOME }}/.services/qdrant"
qdrant_data_dir: "/mnt/services/qdrant"
- name: Create Qdrant directory
ansible.builtin.file:
path: "{{ qdrant_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Qdrant docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ qdrant_service_dir }}/docker-compose.yml"
mode: "0644"
notify: restart_qdrant
- name: Stop Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
listen: restart_qdrant
- name: Start Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
changed_when: false
listen: restart_qdrant
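The Qdrant tasks above wire a `notify: restart_qdrant` to stop/start commands carrying `listen:`. In standard Ansible, that topic-based wiring normally lives in a handlers section: handlers subscribed to a topic run once at the end of the play if anything notified it. A minimal sketch of the usual layout (role paths and task names hypothetical):

```yaml
# roles/qdrant/tasks/main.yml (sketch)
- name: Deploy Qdrant docker-compose.yml
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ qdrant_service_dir }}/docker-compose.yml"
    mode: "0644"
  notify: restart_qdrant

# roles/qdrant/handlers/main.yml (sketch): every handler whose `listen`
# matches the notified topic runs once, after all tasks have finished
- name: Recreate Qdrant containers
  ansible.builtin.command: >-
    docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d --force-recreate
  listen: restart_qdrant
```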


@@ -34,6 +34,7 @@
 register: juicefs_stop
 changed_when: juicefs_stop.changed
 when: redis_compose.changed and juicefs_service_stat.stat.exists
+become: true
 - name: List containers that are running
 ansible.builtin.command: docker ps -q
@@ -68,6 +69,7 @@
 register: juicefs_start
 changed_when: juicefs_start.changed
 when: juicefs_service_stat.stat.exists
+become: true
 - name: Restart containers that were stopped
 ansible.builtin.command: docker start {{ item }}


@@ -0,0 +1,53 @@
# Production Environment Variables
# Copy this to .env and fill in your values
# Database configuration (PostgreSQL)
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USER=sathub
DB_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DB_PASSWORD') }}
DB_NAME=sathub
# Required: JWT secret for token signing
JWT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='JWT_SECRET') }}
# Required: Two-factor authentication encryption key
TWO_FA_ENCRYPTION_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='TWO_FA_ENCRYPTION_KEY') }}
# Email configuration (required for password resets)
SMTP_HOST={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_HOST') }}
SMTP_PORT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PORT') }}
SMTP_USERNAME={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_USERNAME') }}
SMTP_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PASSWORD') }}
SMTP_FROM_EMAIL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_FROM_EMAIL') }}
# MinIO Object Storage configuration
MINIO_ROOT_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_ROOT_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# Basically the same as the above
MINIO_ACCESS_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_SECRET_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# GitHub credentials for Watchtower (auto-updates)
GITHUB_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
GITHUB_PAT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
REPO_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
REPO_PASS={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
# Optional: Override defaults if needed
# GIN_MODE=release (set automatically)
FRONTEND_URL=https://sathub.de
# CORS configuration (optional - additional allowed origins)
CORS_ALLOWED_ORIGINS=https://sathub.de,https://sathub.nl,https://api.sathub.de
# Frontend configuration (optional - defaults are provided)
VITE_API_BASE_URL=https://api.sathub.de
VITE_ALLOWED_HOSTS=sathub.de,sathub.nl
# Discord related messaging
DISCORD_CLIENT_ID={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_ID') }}
DISCORD_CLIENT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_SECRET') }}
DISCORD_REDIRECT_URI={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_REDIRECT_URL') }}
DISCORD_WEBHOOK_URL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_WEBHOOK_URL') }}


@@ -0,0 +1,182 @@
services:
# Migration service - runs once on stack startup
migrate:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-migrate
restart: "no"
command: ["./main", "auto-migrate"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
networks:
- sathub
depends_on:
- postgres
backend:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-backend
restart: unless-stopped
command: ["./main", "api"]
environment:
- GIN_MODE=release
- FRONTEND_URL=${FRONTEND_URL:-https://sathub.de}
- CORS_ALLOWED_ORIGINS=${CORS_ALLOWED_ORIGINS:-https://sathub.de}
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# Security settings
- JWT_SECRET=${JWT_SECRET}
- TWO_FA_ENCRYPTION_KEY=${TWO_FA_ENCRYPTION_KEY}
# SMTP settings
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
- caddy_network
depends_on:
migrate:
condition: service_completed_successfully
worker:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-worker
restart: unless-stopped
command: ["./main", "worker"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# SMTP settings (needed for notifications)
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
depends_on:
migrate:
condition: service_completed_successfully
postgres:
image: postgres:15-alpine
container_name: sathub-postgres
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER:-sathub}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME:-sathub}
volumes:
- {{ sathub_data_dir }}/postgres_data:/var/lib/postgresql/data
networks:
- sathub
frontend:
image: ghcr.io/vleeuwenmenno/sathub-frontend/frontend:latest
container_name: sathub-frontend
restart: unless-stopped
environment:
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-https://api.sathub.de}
- VITE_ALLOWED_HOSTS=${VITE_ALLOWED_HOSTS:-sathub.de,sathub.nl}
networks:
- sathub
- caddy_network
minio:
image: minio/minio
container_name: sathub-minio
restart: unless-stopped
environment:
- MINIO_ROOT_USER=${MINIO_ROOT_USER}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
volumes:
- {{ sathub_data_dir }}/minio_data:/data
command: server /data --console-address :9001
networks:
- sathub
depends_on:
- postgres
watchtower:
image: containrrr/watchtower:latest
container_name: sathub-watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_STOPPED=false
- REPO_USER=${REPO_USER}
- REPO_PASS=${REPO_PASS}
command: --interval 30 --cleanup --include-stopped=false sathub-backend sathub-worker sathub-frontend
networks:
- sathub
networks:
sathub:
driver: bridge
# We assume you're running a Caddy instance in a separate compose file with this network
# If not, you can remove this network and the related depends_on in the services above
# But the stack is designed to run behind a Caddy reverse proxy for SSL termination and routing
caddy_network:
external: true
name: caddy_default
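The compose file above leans on Compose's `${VAR:-default}` interpolation for values like `DB_USER` and `FRONTEND_URL`: the `.env` value wins when set and non-empty, otherwise the inline fallback applies. A rough illustration of that substitution rule (a simplified sketch, not Compose's actual parser; it ignores `${VAR-default}`, `${VAR:?err}`, escaping, and nesting):

```python
import re

# Matches ${NAME} and ${NAME:-default}; group 2 holds the fallback, if any.
_VAR = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def interpolate(text: str, env: dict) -> str:
    """Substitute ${VAR} / ${VAR:-default}: use the environment value
    when set and non-empty, otherwise the inline default (or "")."""
    def repl(m):
        name, default = m.group(1), m.group(2)
        value = env.get(name, "")
        return value if value else (default or "")
    return _VAR.sub(repl, text)

# With DB_USER unset, the fallback from the compose file applies.
print(interpolate("DB_USER=${DB_USER:-sathub}", {}))                    # DB_USER=sathub
print(interpolate("DB_USER=${DB_USER:-sathub}", {"DB_USER": "menno"}))  # DB_USER=menno
```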


@@ -0,0 +1,50 @@
---
- name: Deploy SatHub service
block:
- name: Set SatHub directories
ansible.builtin.set_fact:
sathub_service_dir: "{{ ansible_env.HOME }}/.services/sathub"
sathub_data_dir: "/mnt/services/sathub"
- name: Set SatHub frontend configuration
ansible.builtin.set_fact:
frontend_api_base_url: "https://api.sathub.de"
frontend_allowed_hosts: "sathub.de,sathub.nl"
cors_allowed_origins: "https://sathub.nl,https://api.sathub.de,https://obj.sathub.de"
- name: Create SatHub directory
ansible.builtin.file:
path: "{{ sathub_service_dir }}"
state: directory
mode: "0755"
- name: Create SatHub data directory
ansible.builtin.file:
path: "{{ sathub_data_dir }}"
state: directory
mode: "0755"
- name: Deploy SatHub .env
ansible.builtin.template:
src: .env.j2
dest: "{{ sathub_service_dir }}/.env"
mode: "0644"
register: sathub_env
- name: Deploy SatHub docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ sathub_service_dir }}/docker-compose.yml"
mode: "0644"
register: sathub_compose
- name: Stop SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" down --remove-orphans
when: sathub_compose.changed or sathub_env.changed
- name: Start SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" up -d
when: sathub_compose.changed or sathub_env.changed
tags:
- services
- sathub


@@ -31,11 +31,6 @@
 - name: Define system desired Flatpaks
 ansible.builtin.set_fact:
 desired_system_flatpaks:
-# GNOME Software
-- "{{ 'org.gnome.Extensions' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-- "{{ 'org.gnome.Weather' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-- "{{ 'org.gnome.Sudoku' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
 # Games
 - io.github.openhv.OpenHV
 - info.beyondallreason.bar
@@ -46,18 +41,20 @@
 # Multimedia
 - com.plexamp.Plexamp
 - tv.plex.PlexDesktop
+- com.spotify.Client
 # Messaging
 - com.rtosta.zapzap
 - org.telegram.desktop
 - org.signal.Signal
-- com.spotify.Client
-# Nextcloud Compatible Utilities
-- io.github.mrvladus.List
-- org.gnome.World.Iotas
+- com.discordapp.Discord
+# 3D Printing
+- com.bambulab.BambuStudio
+- io.mango3d.LycheeSlicer
 # Utilities
+- com.fastmail.Fastmail
 - com.ranfdev.DistroShelf
 - io.missioncenter.MissionCenter
 - io.gitlab.elescoute.spacelaunch
@@ -77,6 +74,8 @@
 - io.github.flattool.Ignition
 - io.github.bytezz.IPLookup
 - org.gaphor.Gaphor
+- io.dbeaver.DBeaverCommunity
+- com.jetpackduba.Gitnuro
 - name: Define system desired Flatpak remotes
 ansible.builtin.set_fact:
