Compare commits

...

67 Commits

Author SHA1 Message Date
fd6e7d7a86 Update flake.lock
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 41s
Python Lint Check / check-python (push) Successful in 7s
2025-10-30 16:22:07 +01:00
b23536ecc7 chore: adds discord and gitnuro flatpaks
Some checks failed
Ansible Lint Check / check-ansible (push) Has been cancelled
Nix Format Check / check-format (push) Has been cancelled
Python Lint Check / check-python (push) Has been cancelled
2025-10-30 16:22:03 +01:00
14e9c8d51c chore: remove old stuff
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 7s
Python Lint Check / check-python (push) Has been cancelled
Nix Format Check / check-format (push) Has been cancelled
2025-10-30 16:21:17 +01:00
c1c98fa007 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-28 08:36:44 +01:00
9c6e6fdf47 Add Vicinae installation and assets Ansible task
Include Vicinae setup in workstation playbook for non-WSL2 systems

Update flake.lock to newer nixpkgs revision
2025-10-28 08:36:26 +01:00
a11376fe96 Add monitoring countries to allowed_countries_codes list
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 41s
Python Lint Check / check-python (push) Successful in 7s
2025-10-26 00:24:17 +00:00
e14dd1d224 Add EU and trusted country lists for Caddy access control
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 7s
Nix Format Check / check-format (push) Successful in 54s
Python Lint Check / check-python (push) Successful in 21s
Define separate lists for EU and trusted countries in group vars. Update
Caddyfile template to support EU, trusted, and combined allow lists.
Switch Sathub domains to use combined country allow list.
2025-10-26 00:21:27 +00:00
5353981555 Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 42s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 00:09:31 +00:00
f9ce652dfc flake lock
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-26 00:09:15 +00:00
fe9dbca2db Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 02:08:31 +02:00
987166420a Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 7s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 00:06:13 +00:00
8ba47c2ebf Fix indentation in server.yml and add necesse service
Add become: true to JuiceFS stop/start tasks in redis.yml
2025-10-26 00:04:51 +00:00
8bfd8395f5 Add Discord environment variables and update data volumes paths 2025-10-26 00:04:41 +00:00
f0b15f77a1 Update nixpkgs input to latest commit 2025-10-26 00:04:19 +00:00
461d251356 Add Ansible role to deploy Necesse server with Docker 2025-10-26 00:04:14 +00:00
e57e9ee67c chore: update country allow list and add European allow option 2025-10-26 02:02:46 +02:00
f67b16f593 update flake locvk 2025-10-26 02:02:28 +02:00
5edd7c413e Update bash.nix to improve WSL Windows alias handling 2025-10-26 02:02:21 +02:00
cfc1188b5f Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 9s
2025-10-23 13:43:38 +02:00
e2701dcdf4 Set executable permission for equibop.desktop and update bash.nix
Add BUN_INSTALL env var and include Bun bin in PATH
2025-10-23 13:43:26 +02:00
11af7f16e5 Set formatter to prettier and update format_on_save option 2025-10-23 13:38:16 +02:00
310fb92ec9 Add WSL aliases for Windows SSH and Zed
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 51s
Python Lint Check / check-python (push) Successful in 15s
2025-10-23 04:20:15 +02:00
fb1661386b chore: add Bun install path and prepend to PATH
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 8s
2025-10-22 17:57:12 +02:00
e1b07a6edf Add WSL support and fix config formatting
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 1m17s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 9s
2025-10-22 16:18:08 +02:00
f6a3f6d379 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-21 10:06:20 +02:00
77424506d6 Update Nextcloud config and flake.lock dependencies
Some checks failed
Ansible Lint Check / check-ansible (push) Failing after 0s
Nix Format Check / check-format (push) Failing after 0s
Python Lint Check / check-python (push) Failing after 0s
2025-10-20 11:27:21 +02:00
1856b2fb9e adds fastmail app as flatpak 2025-10-20 11:27:00 +02:00
2173e37c0a refactor: update configuration for mennos-server and adjust related tasks
Some checks failed
Nix Format Check / check-format (push) Successful in 1m22s
Python Lint Check / check-python (push) Successful in 25s
Ansible Lint Check / check-ansible (push) Failing after 1h7m12s
2025-10-16 14:53:32 +02:00
ba2faf114d chore: update sathub config
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 3s
Nix Format Check / check-format (push) Successful in 1m7s
Python Lint Check / check-python (push) Successful in 5s
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 15:04:46 +02:00
22b308803c fixes
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m12s
Python Lint Check / check-python (push) Successful in 6s
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
2dfde555dd sathub fixes
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-08 13:10:15 +02:00
436deb267e Add smart alias configuration for rtlsdr 2025-10-08 13:01:37 +02:00
e490405dc5 Update mennos-rtlsdr-pc home configuration to enable service
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m11s
Python Lint Check / check-python (push) Successful in 6s
2025-10-08 12:54:34 +02:00
1485f6c430 Add home configuration for mennos-rtlsdr-pc
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 27s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Successful in 6s
2025-10-08 12:38:12 +02:00
4c83707a03 Update Ansible inventory and playbook for new workstation; modify Git configuration for rebase settings
Some checks failed
Nix Format Check / check-format (push) Has been cancelled
Python Lint Check / check-python (push) Has been cancelled
Ansible Lint Check / check-ansible (push) Has been cancelled
2025-10-08 12:37:59 +02:00
f9f37f5819 Update flatpaks.yml
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Successful in 6s
2025-09-30 12:02:26 +02:00
44c4521cbe Remove unnecessary blank line before sathub.nl configuration in Caddyfile
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 3s
Nix Format Check / check-format (push) Successful in 1m10s
Python Lint Check / check-python (push) Successful in 6s
2025-09-29 02:53:35 +02:00
6c37372bc0 Remove unused obj.sathub.de configuration and caddy_network from MinIO service in Docker Compose
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m11s
Python Lint Check / check-python (push) Successful in 7s
2025-09-29 02:40:25 +02:00
3a22417315 Add CORS configuration to SatHub service for improved API access
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m12s
Python Lint Check / check-python (push) Successful in 8s
2025-09-29 01:29:55 +02:00
95bc4540db Add SatHub service deployment with Docker Compose and configuration
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m18s
Python Lint Check / check-python (push) Successful in 7s
2025-09-29 01:21:41 +02:00
902d797480 Refactor Cloudreve restart logic and update configs
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m12s
Python Lint Check / check-python (push) Successful in 5s
- Refactor Cloudreve tasks to use conditional restart
- Remove unused displayData from Dashy config
- Add NVM and Japanese input setup to bash.nix
2025-09-25 22:33:57 +02:00
e494369d11 Refactor formatting in update.py for improved readability
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 3s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Successful in 6s
2025-09-24 18:40:25 +02:00
78f3133a1d Fix formatting in Python workflow and update .gitignore to include Ansible files
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-24 18:35:53 +02:00
d28c0fce66 Refactor shell aliases to move folder navigation aliases to the utility section
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 27s
Nix Format Check / check-format (push) Successful in 1m15s
Python Lint Check / check-python (push) Failing after 5s
2025-09-24 18:32:05 +02:00
c6449affcc Rename zed.jsonc.j2 to zed.jsonc and fix trailing commas
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m8s
Python Lint Check / check-python (push) Failing after 5s
2025-09-24 16:12:34 +02:00
d33f367c5f Move Zed config to Ansible template with 1Password secrets
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m7s
Python Lint Check / check-python (push) Failing after 6s
2025-09-24 16:10:44 +02:00
e5723e0964 Update zed.jsonc
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m7s
Python Lint Check / check-python (push) Failing after 5s
2025-09-24 16:04:45 +02:00
0bc609760c change zed settings to use jsonc
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m15s
Python Lint Check / check-python (push) Failing after 5s
2025-09-24 13:36:10 +02:00
edd8e90fec Add JetBrains Toolbox autostart and update Zed config
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m12s
Python Lint Check / check-python (push) Failing after 6s
2025-09-24 13:24:43 +02:00
ee0c73f6de chore: add ssh config
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Failing after 6s
2025-09-24 11:55:46 +02:00
60dd31fd1c Add --system flag to update system packages in update.py
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 17:26:44 +02:00
cc917eb375 Refactor bash config and env vars, set Zed as git editor
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Failing after 7s
- Move environment variable exports from sessionVariables to bashrc
- Add more robust sourcing of .profile and .bashrc.local
- Improve SSH_AUTH_SOCK override logic for 1Password
- Remove redundant path and pyenv logic from profileExtra
- Set git core.editor to "zed" instead of "nvim"
- Add DOTFILES_PATH to global session variables
2025-09-23 17:13:24 +02:00
df0775f3b2 Update symlinks.yml
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 16:39:31 +02:00
5f312d3128 wtf 2025-09-23 16:36:08 +02:00
497fca49d9 linting
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 1m18s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 14:29:47 +00:00
e3ea18c9da updated file
Some checks failed
Ansible Lint Check / check-ansible (push) Failing after 7s
Nix Format Check / check-format (push) Successful in 1m16s
Python Lint Check / check-python (push) Failing after 8s
2025-09-23 16:20:57 +02:00
6fcabcd1f3 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
Some checks failed
Ansible Lint Check / check-ansible (push) Failing after 11s
Nix Format Check / check-format (push) Successful in 1m17s
Python Lint Check / check-python (push) Failing after 8s
2025-09-23 16:16:09 +02:00
3e25210f4c remove stash, add bazarr, add cloudreve 2025-09-23 16:13:09 +02:00
5ff84a4c0d Remove GNOME extension management from workstation setup
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m13s
Python Lint Check / check-python (push) Failing after 7s
2025-09-23 14:09:30 +00:00
29a439d095 Add isServer option and conditionally enable Git signing
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 14:07:10 +00:00
cfb80bd819 linting 2025-09-23 14:06:26 +00:00
8971d087a3 Remove secrets and auto-start actions and update imports
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 13:59:48 +00:00
40063cfe6b Refactor for consistent string quoting and formatting
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 7s
2025-09-23 13:53:29 +00:00
2e5a06e9d5 Remove mennos-vm from inventory and playbook tasks
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 13:51:42 +00:00
80ea4cd51b Remove VSCode config and update Zed symlink and settings
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 7s
- Delete VSCode settings and argv files
- Rename Zed settings file and update symlink destination
- Add new Zed context servers and projects
- Change icon and theme settings for Zed
- Add .gitkeep to autostart directory
2025-09-23 13:39:09 +00:00
c659c599f4 fixed formatting
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 1m14s
Python Lint Check / check-python (push) Failing after 6s
2025-09-23 13:35:37 +00:00
54fc080ef2 Remove debug tasks from global.yml and update git signing config
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Failing after 1m14s
Python Lint Check / check-python (push) Failing after 7s
2025-09-23 13:32:48 +00:00
71 changed files with 2090 additions and 1265 deletions

View File

@@ -3,7 +3,7 @@ name: Python Lint Check
 on:
   pull_request:
   push:
-    branches: [ master ]
+    branches: [master]
 
 jobs:
   check-python:
@@ -29,7 +29,7 @@ jobs:
           exit 0
         fi
-        pylint $python_files
+        pylint --exit-zero $python_files
     - name: Check Black formatting
       run: |

.gitignore vendored
View File

@@ -1,2 +1,4 @@
 logs/*
 **/__pycache__/
+.ansible/
+.ansible/.lock

View File

@@ -1,16 +1,13 @@
 # Setup
 
 This dotfiles is intended to be used with either Fedora 40>, Ubuntu 20.04> or Arch Linux.
-Please install a clean version of either distro with GNOME and then follow the steps below.
+Please install a clean version of either distro and then follow the steps below.
 
 ## Installation
 
 ### 0. Install distro
 
 Download the latest ISO from your desired distro and write it to a USB stick.
-I'd recommend getting the GNOME version as it's easier to setup unless you're planning on setting up a server, in that case I recommend getting the server ISO for the specific distro.
-
-#### Note: If you intend on using a desktop environment you should select the GNOME version as this dotfiles repository expects the GNOME desktop environment for various configurations
 
 ### 1. Clone dotfiles to home directory
@@ -44,15 +41,6 @@ Run the `dotf update` command, although the setup script did most of the work so
 dotf update
 ```
 
-### 5. Decrypt secrets
-
-Either using 1Password or by manualling providing the decryption key you should decrypt the secrets.
-Various configurations depend on the secrets to be decrypted such as the SSH keys, yubikey pam configuration and more.
-
-```bash
-dotf secrets decrypt
-```
-
 ### 6. Profit
 
 You should now have a fully setup system with all the configurations applied.
@@ -71,6 +59,7 @@ If you add a new system you should add the relevant files to these paths.
 In case you reboot a server, it's likely that this runs JuiceFS.
 To be sure that every service is properly accessing JuiceFS mounted files you should probably restart the services once when the server comes online.
+
 ```bash
 dotf service stop --all
 df # confirm JuiceFS is mounted
@@ -81,16 +70,19 @@ dotf service start --all
 In case you need to adjust anything regarding the /mnt/object_storage JuiceFS.
 Ensure to shut down all services:
+
 ```bash
 dotf service stop --all
 ```
 
 Unmount the volume:
+
 ```bash
 sudo systemctl stop juicefs
 ```
 
 And optionally if you're going to do something with metadata you might need to stop redis too.
+
 ```bash
 cd ~/services/juicefs-redis/
 docker compose down --remove-orphans
@@ -103,6 +95,7 @@ To add a new system you should follow these steps:
 1. Add the relevant files shown in the section above.
 2. Ensure you've either updated or added the `$HOME/.hostname` file with the hostname of the system.
 3. Run `dotf update` to ensure the symlinks are properly updated/created.
+
 ---
 
 ## Using 1Password SSH Agent with WSL2 (Windows 11)
@@ -132,5 +125,6 @@ This setup allows you to use your 1Password-managed SSH keys inside WSL2. The WS
 - If your 1Password keys are listed, the setup is complete.
 
 #### References
+
 - [Using 1Password's SSH Agent with WSL2](https://dev.to/d4vsanchez/use-1password-ssh-agent-in-wsl-2j6m)
 - [How to change the PATH environment variable in Windows](https://www.wikihow.com/Change-the-PATH-Environment-Variable-on-Windows)

View File

@@ -2,30 +2,81 @@
 flatpaks: false
 install_ui_apps: false
 
+# European countries for EU-specific access control
+eu_countries_codes:
+  - AL # Albania
+  - AD # Andorra
+  - AM # Armenia
+  - AT # Austria
+  - AZ # Azerbaijan
+  # - BY # Belarus (Belarus is disabled due to geopolitical reasons)
+  - BE # Belgium
+  - BA # Bosnia and Herzegovina
+  - BG # Bulgaria
+  - HR # Croatia
+  - CY # Cyprus
+  - CZ # Czech Republic
+  - DK # Denmark
+  - EE # Estonia
+  - FI # Finland
+  - FR # France
+  - GE # Georgia
+  - DE # Germany
+  - GR # Greece
+  - HU # Hungary
+  - IS # Iceland
+  - IE # Ireland
+  - IT # Italy
+  - XK # Kosovo
+  - LV # Latvia
+  - LI # Liechtenstein
+  - LT # Lithuania
+  - LU # Luxembourg
+  - MK # North Macedonia
+  - MT # Malta
+  - MD # Moldova
+  - MC # Monaco
+  - ME # Montenegro
+  - NL # Netherlands
+  - NO # Norway
+  - PL # Poland
+  - PT # Portugal
+  - RO # Romania
+  # - RU # Russia (Russia is disabled due to geopolitical reasons)
+  - SM # San Marino
+  - RS # Serbia
+  - SK # Slovakia
+  - SI # Slovenia
+  - ES # Spain
+  - SE # Sweden
+  - CH # Switzerland
+  - TR # Turkey
+  - UA # Ukraine
+  - GB # United Kingdom
+  - VA # Vatican City
+
+# Trusted non-EU countries for extended access control
+trusted_countries_codes:
+  - US # United States
+  - AU # Australia
+  - NZ # New Zealand
+  - JP # Japan
+
 # Countries that are allowed to access the server Caddy reverse proxy
 allowed_countries_codes:
   - US # United States
-  - CA # Canada
   - GB # United Kingdom
   - DE # Germany
   - FR # France
-  - ES # Spain
   - IT # Italy
   - NL # Netherlands
-  - AU # Australia
-  - NZ # New Zealand
   - JP # Japan
   - KR # South Korea
-  - SK # Slovakia
-  - FI # Finland
-  - DK # Denmark
-  - SG # Singapore
-  - AT # Austria
   - CH # Switzerland
+  - AU # Australia (Added for UpDown.io to monitor server uptime)
+  - CA # Canada (Added for UpDown.io to monitor server uptime)
+  - FI # Finland (Added for UpDown.io to monitor server uptime)
+  - SG # Singapore (Added for UpDown.io to monitor server uptime)
+
+# IP ranges for blocked countries (generated automatically)
+# This will be populated by the country blocking script
+blocked_countries: []
 
 # Enable/disable country blocking globally
 enable_country_blocking: true

View File

@@ -4,5 +4,8 @@
 mennos-desktop ansible_connection=local
 
 [servers]
 mennos-vps ansible_connection=local
-mennos-vm ansible_connection=local
-mennos-desktop ansible_connection=local
+mennos-server ansible_connection=local
+mennos-rtlsdr-pc ansible_connection=local
+
+[wsl]
+mennos-desktopw ansible_connection=local

View File

@@ -2,18 +2,18 @@
 - name: Configure all hosts
   hosts: all
   handlers:
     - name: Import handler tasks
       ansible.builtin.import_tasks: handlers/main.yml
   gather_facts: true
   tasks:
     - name: Include global tasks
       ansible.builtin.import_tasks: tasks/global/global.yml
     - name: Include workstation tasks
       ansible.builtin.import_tasks: tasks/workstations/workstation.yml
       when: inventory_hostname in ['mennos-laptop', 'mennos-desktop']
     - name: Include server tasks
       ansible.builtin.import_tasks: tasks/servers/server.yml
-      when: inventory_hostname in ['mennos-server', 'mennos-hobbypc', 'mennos-vm', 'mennos-desktop']
+      when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc', 'mennos-desktopw']

View File

@@ -1,21 +1,9 @@
 ---
 
-- name: Include global symlinks tasks
-  ansible.builtin.import_tasks: tasks/global/symlinks.yml
-
 - name: Gather package facts
   ansible.builtin.package_facts:
     manager: auto
   become: true
 
-- name: Debug ansible_facts for troubleshooting
-  ansible.builtin.debug:
-    msg: |
-      OS Family: {{ ansible_facts['os_family'] }}
-      Distribution: {{ ansible_facts['distribution'] }}
-      Package Manager: {{ ansible_pkg_mgr }}
-      Kernel: {{ ansible_kernel }}
-  tags: debug
-
 - name: Include Tailscale tasks
   ansible.builtin.import_tasks: tasks/global/tailscale.yml
   become: true
@@ -131,7 +119,7 @@
   ansible.builtin.replace:
     path: /etc/sudoers
     regexp: '^Defaults\s+env_reset(?!.*pwfeedback)'
-    replace: 'Defaults env_reset,pwfeedback'
-    validate: 'visudo -cf %s'
+    replace: "Defaults env_reset,pwfeedback"
+    validate: "visudo -cf %s"
   become: true
   tags: sudoers

View File

@@ -13,6 +13,12 @@ smart_aliases:
   desktop:
     primary: "desktop-local"
     fallback: "desktop"
+    check_host: "192.168.1.250"
+    timeout: "2s"
+
+  server:
+    primary: "server-local"
+    fallback: "server"
     check_host: "192.168.1.254"
     timeout: "2s"
@@ -22,6 +28,12 @@ smart_aliases:
     check_host: "192.168.1.253"
     timeout: "2s"
 
+  rtlsdr:
+    primary: "rtlsdr-local"
+    fallback: "rtlsdr"
+    check_host: "192.168.1.252"
+    timeout: "2s"
+
 # Background SSH Tunnel Definitions
 tunnels:
   # Example: Desktop database tunnel

View File

@@ -30,10 +30,10 @@ type LoggingConfig struct {
// SmartAlias represents a smart SSH alias configuration // SmartAlias represents a smart SSH alias configuration
type SmartAlias struct { type SmartAlias struct {
Primary string `yaml:"primary"` // SSH config host to use when local Primary string `yaml:"primary"` // SSH config host to use when local
Fallback string `yaml:"fallback"` // SSH config host to use when remote Fallback string `yaml:"fallback"` // SSH config host to use when remote
CheckHost string `yaml:"check_host"` // IP to ping for connectivity test CheckHost string `yaml:"check_host"` // IP to ping for connectivity test
Timeout string `yaml:"timeout"` // Ping timeout (default: "2s") Timeout string `yaml:"timeout"` // Ping timeout (default: "2s")
} }
// TunnelDefinition represents a tunnel configuration // TunnelDefinition represents a tunnel configuration
@@ -47,36 +47,39 @@ type TunnelDefinition struct {
// TunnelState represents runtime state of an active tunnel // TunnelState represents runtime state of an active tunnel
type TunnelState struct { type TunnelState struct {
Name string `json:"name"` Name string `json:"name"`
Source string `json:"source"` // "config" or "adhoc" Source string `json:"source"` // "config" or "adhoc"
Type string `json:"type"` // local, remote, dynamic Type string `json:"type"` // local, remote, dynamic
LocalPort int `json:"local_port"` LocalPort int `json:"local_port"`
RemoteHost string `json:"remote_host"` RemoteHost string `json:"remote_host"`
RemotePort int `json:"remote_port"` RemotePort int `json:"remote_port"`
SSHHost string `json:"ssh_host"` SSHHost string `json:"ssh_host"`
SSHHostResolved string `json:"ssh_host_resolved"` // After smart alias resolution SSHHostResolved string `json:"ssh_host_resolved"` // After smart alias resolution
PID int `json:"pid"` PID int `json:"pid"`
Status string `json:"status"` Status string `json:"status"`
StartedAt time.Time `json:"started_at"` StartedAt time.Time `json:"started_at"`
LastSeen time.Time `json:"last_seen"` LastSeen time.Time `json:"last_seen"`
CommandLine string `json:"command_line"` CommandLine string `json:"command_line"`
} }
// Config represents the YAML configuration structure // Config represents the YAML configuration structure
type Config struct { type Config struct {
	Logging      LoggingConfig               `yaml:"logging"`
	SmartAliases map[string]SmartAlias       `yaml:"smart_aliases"`
	Tunnels      map[string]TunnelDefinition `yaml:"tunnels"`
}

const (
	defaultSSHPath = "/usr/bin/ssh"
	wslSSHPath     = "ssh.exe"
	wslDetectPath  = "/mnt/c/Windows/System32/cmd.exe"
)

var (
	configDir  string
	tunnelsDir string
	config     *Config
	sshPath    string // Will be set based on WSL2 detection

	// Global flags
	tunnelMode bool
@@ -92,10 +95,10 @@ var (
)

var rootCmd = &cobra.Command{
	Use:                "ssh",
	Short:              "Smart SSH utility with tunnel management",
	Long:               "A transparent SSH wrapper that provides smart alias resolution and background tunnel management",
	Run:                handleSSH,
	DisableFlagParsing: true,
}

@@ -103,13 +106,16 @@ var tunnelCmd = &cobra.Command{
	Use:   "tunnel [tunnel-name]",
	Short: "Manage background SSH tunnels",
	Long:  "Create, list, and manage persistent SSH tunnels in the background",
	Run: func(cmd *cobra.Command, args []string) {
		handleTunnelManual(append([]string{"--tunnel"}, args...))
	},
	Args: cobra.MaximumNArgs(1),
}

func init() {
	// Detect and set SSH path based on environment (WSL2 vs native Linux)
	sshPath = detectSSHPath()

	// Initialize config directory
	homeDir, err := os.UserHomeDir()
	if err != nil {
@@ -141,6 +147,13 @@ func init() {
	// Initialize logging
	initLogging(config.Logging)

	// Log SSH path detection (after logging is initialized)
	if isWSL2() {
		log.Debug().Str("ssh_path", sshPath).Msg("WSL2 detected, using Windows SSH")
	} else {
		log.Debug().Str("ssh_path", sshPath).Msg("Native Linux environment, using Linux SSH")
	}

	// Global flags
	rootCmd.PersistentFlags().BoolVarP(&tunnelMode, "tunnel", "T", false, "Enable tunnel mode")
	rootCmd.Flags().BoolVarP(&tunnelOpen, "open", "O", false, "Open a tunnel")
@@ -169,6 +182,22 @@ func init() {
	}
}
// detectSSHPath determines the correct SSH binary path based on the environment
func detectSSHPath() string {
	if isWSL2() {
		// In WSL2, use Windows SSH
		return wslSSHPath
	}
	// Default to Linux SSH
	return defaultSSHPath
}

// isWSL2 checks if we're running in WSL2 by looking for Windows System32
func isWSL2() bool {
	_, err := os.Stat(wslDetectPath)
	return err == nil
}
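The same detection heuristic, sketched as a standalone shell check. Paths mirror the `wslDetectPath` / `wslSSHPath` / `defaultSSHPath` constants; resolving `ssh.exe` through WSL's Windows-binary interop is an assumption about the target setup.

```shell
#!/bin/sh
# Mirrors wslDetectPath / wslSSHPath / defaultSSHPath from the Go code.
WSL_DETECT_PATH="/mnt/c/Windows/System32/cmd.exe"

detect_ssh_path() {
  # Under WSL2 the Windows system drive is mounted at /mnt/c, so the
  # presence of cmd.exe is a cheap check for the environment.
  if [ -e "$WSL_DETECT_PATH" ]; then
    echo "ssh.exe"       # Windows OpenSSH via WSL binfmt interop
  else
    echo "/usr/bin/ssh"  # native Linux OpenSSH
  fi
}

detect_ssh_path
```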
func main() {
	// Check if this is a tunnel command first
	args := os.Args[1:]
@@ -563,7 +592,7 @@ func openTunnel(name string) error {
	log.Debug().Strs("command", cmdArgs).Msg("Starting SSH tunnel")

	// Start SSH process
	cmd := exec.Command(sshPath, cmdArgs[1:]...)

	// Capture stderr to see any SSH errors
	var stderr bytes.Buffer
@@ -708,7 +737,9 @@ func createAdhocTunnel() (TunnelDefinition, error) {
}

func buildSSHCommand(tunnel TunnelDefinition, sshHost string) []string {
	// Use the detected SSH path basename for the command
	sshBinary := filepath.Base(sshPath)
	args := []string{sshBinary, "-f", "-N"}

	switch tunnel.Type {
	case "local":
@@ -1056,18 +1087,37 @@ func findSSHProcessByPort(port int) int {
// executeRealSSH executes the real SSH binary with given arguments
func executeRealSSH(args []string) {
	log.Debug().Str("ssh_path", sshPath).Strs("args", args).Msg("Executing real SSH")

	// In WSL2, we need to use exec.Command instead of syscall.Exec for Windows binaries
	if isWSL2() {
		cmd := exec.Command(sshPath, args...)
		cmd.Stdin = os.Stdin
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		err := cmd.Run()
		if err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok {
				os.Exit(exitErr.ExitCode())
			}
			log.Error().Err(err).Msg("Failed to execute SSH")
			fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)
			os.Exit(1)
		}
		os.Exit(0)
	}

	// For native Linux, check if SSH exists
	if _, err := os.Stat(sshPath); os.IsNotExist(err) {
		log.Error().Str("path", sshPath).Msg("Real SSH binary not found")
		fmt.Fprintf(os.Stderr, "Error: Real SSH binary not found at %s\n", sshPath)
		os.Exit(1)
	}

	// Execute the real SSH binary using syscall.Exec (Linux only)
	// This replaces the current process (like exec in shell)
	err := syscall.Exec(sshPath, append([]string{"ssh"}, args...), os.Environ())
	if err != nil {
		log.Error().Err(err).Msg("Failed to execute SSH")
		fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)

View File

@@ -18,7 +18,7 @@
        #!/bin/bash
        # Run dynamic DNS update (binary compiled by utils.yml)
        {{ ansible_user_dir }}/.local/bin/dynamic-dns-cf -record "vleeuwen.me,mvl.sh,mennovanleeuwen.nl,sathub.de,sathub.nl" 2>&1 | logger -t dynamic-dns
    become: true

- name: Create dynamic DNS systemd timer
@@ -83,6 +83,6 @@
      - Manual run: sudo /usr/local/bin/dynamic-dns-update.sh
      - Domains: vleeuwen.me, mvl.sh, mennovanleeuwen.nl
  when: inventory_hostname == 'mennos-server' or inventory_hostname == 'mennos-vps'
  tags:
    - dynamic-dns

View File

@@ -70,7 +70,7 @@
- name: Include JuiceFS Redis tasks
  ansible.builtin.include_tasks: services/redis/redis.yml
  when: inventory_hostname == 'mennos-server'

- name: Enable and start JuiceFS service
  ansible.builtin.systemd:

View File

@@ -1,157 +1,165 @@
--- ---
- name: Server setup - name: Server setup
block: block:
- name: Ensure openssh-server is installed on Arch-based systems - name: Ensure openssh-server is installed on Arch-based systems
ansible.builtin.package: ansible.builtin.package:
name: openssh name: openssh
state: present state: present
when: ansible_pkg_mgr == 'pacman' when: ansible_pkg_mgr == 'pacman'
- name: Ensure openssh-server is installed on non-Arch systems - name: Ensure openssh-server is installed on non-Arch systems
ansible.builtin.package: ansible.builtin.package:
name: openssh-server name: openssh-server
state: present state: present
when: ansible_pkg_mgr != 'pacman' when: ansible_pkg_mgr != 'pacman'
- name: Ensure Borg is installed on Arch-based systems - name: Ensure Borg is installed on Arch-based systems
ansible.builtin.package: ansible.builtin.package:
name: borg name: borg
state: present state: present
become: true become: true
when: ansible_pkg_mgr == 'pacman' when: ansible_pkg_mgr == 'pacman'
- name: Ensure Borg is installed on Debian/Ubuntu systems - name: Ensure Borg is installed on Debian/Ubuntu systems
ansible.builtin.package: ansible.builtin.package:
name: borgbackup name: borgbackup
state: present state: present
become: true become: true
when: ansible_pkg_mgr != 'pacman' when: ansible_pkg_mgr != 'pacman'
- name: Include JuiceFS tasks - name: Include JuiceFS tasks
ansible.builtin.include_tasks: juicefs.yml ansible.builtin.include_tasks: juicefs.yml
tags: tags:
- juicefs - juicefs
- name: Include Dynamic DNS tasks - name: Include Dynamic DNS tasks
ansible.builtin.include_tasks: dynamic-dns.yml ansible.builtin.include_tasks: dynamic-dns.yml
tags: tags:
- dynamic-dns - dynamic-dns
- name: Include Borg Backup tasks - name: Include Borg Backup tasks
ansible.builtin.include_tasks: borg-backup.yml ansible.builtin.include_tasks: borg-backup.yml
tags: tags:
- borg-backup - borg-backup
- name: Include Borg Local Sync tasks - name: Include Borg Local Sync tasks
ansible.builtin.include_tasks: borg-local-sync.yml ansible.builtin.include_tasks: borg-local-sync.yml
tags: tags:
- borg-local-sync - borg-local-sync
- name: System performance optimizations - name: System performance optimizations
ansible.posix.sysctl: ansible.posix.sysctl:
name: "{{ item.name }}" name: "{{ item.name }}"
value: "{{ item.value }}" value: "{{ item.value }}"
state: present state: present
reload: true reload: true
become: true become: true
loop: loop:
- { name: "fs.file-max", value: "2097152" } # Max open files for the entire system - { name: "fs.file-max", value: "2097152" } # Max open files for the entire system
- { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have - { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have
- { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory - { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory
- { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches - { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches
- { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket - { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket
- { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input - { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input
- { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state - { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state
- { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections - { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections
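A quick read-only way to confirm a few of the tuned kernel parameters above actually took effect on a host (key list is a subset of the loop; purely illustrative):

```shell
#!/bin/sh
# Compare tuned kernel parameters against their target values.
check() {
  key="$1"; want="$2"
  have="$(sysctl -n "$key" 2>/dev/null || echo unavailable)"
  if [ "$have" = "$want" ]; then
    printf '%s OK (%s)\n' "$key" "$have"
  else
    printf '%s MISMATCH: want %s, have %s\n' "$key" "$want" "$have"
  fi
}

check fs.file-max 2097152
check vm.swappiness 10
check net.core.somaxconn 65535
```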
    - name: Include service tasks
      ansible.builtin.include_tasks: "services/{{ item.name }}/{{ item.name }}.yml"
      loop: "{{ services | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list if specific_service is not defined else services | selectattr('name', 'equalto', specific_service) | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list }}"
      loop_control:
        label: "{{ item.name }}"
      tags:
        - services
        - always
      vars:
        services:
          - name: dashy
            enabled: true
            hosts:
              - mennos-server
          - name: gitea
            enabled: true
            hosts:
              - mennos-server
          - name: factorio
            enabled: true
            hosts:
              - mennos-server
          - name: dozzle
            enabled: true
            hosts:
              - mennos-server
          - name: beszel
            enabled: true
            hosts:
              - mennos-server
          - name: caddy
            enabled: true
            hosts:
              - mennos-server
          - name: golink
            enabled: true
            hosts:
              - mennos-server
          - name: immich
            enabled: true
            hosts:
              - mennos-server
          - name: plex
            enabled: true
            hosts:
              - mennos-server
          - name: tautulli
            enabled: true
            hosts:
              - mennos-server
          - name: downloaders
            enabled: true
            hosts:
              - mennos-server
          - name: wireguard
            enabled: true
            hosts:
              - mennos-server
          - name: nextcloud
            enabled: true
            hosts:
              - mennos-server
          - name: cloudreve
            enabled: true
            hosts:
              - mennos-server
          - name: echoip
            enabled: true
            hosts:
              - mennos-server
          - name: arr-stack
            enabled: true
            hosts:
              - mennos-server
          - name: home-assistant
            enabled: true
            hosts:
              - mennos-server
          - name: privatebin
            enabled: true
            hosts:
              - mennos-server
          - name: unifi-network-application
            enabled: true
            hosts:
              - mennos-server
          - name: avorion
            enabled: false
            hosts:
              - mennos-server
          - name: sathub
            enabled: true
            hosts:
              - mennos-server
          - name: necesse
            enabled: true
            hosts:
              - mennos-server

View File

@@ -44,18 +44,19 @@ services:
        limits:
          memory: 2G

  bazarr:
    image: ghcr.io/hotio/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Amsterdam
    ports:
      - 6767:6767
    extra_hosts:
      - host.docker.internal:host-gateway
    volumes:
      - {{ arr_stack_data_dir }}/bazarr-config:/config
      - /mnt/data:/mnt/data
    restart: unless-stopped
    networks:
@@ -63,7 +64,7 @@ services:
    deploy:
      resources:
        limits:
          memory: 512M

  prowlarr:
    container_name: prowlarr

View File

@@ -5,9 +5,9 @@
    }
}

# Country allow list snippet using MaxMind GeoLocation - reusable across all sites
{% if enable_country_blocking | default(false) and allowed_countries_codes | default([]) | length > 0 %}
(country_allow) {
    @allowed_local {
        remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
    }
@@ -23,68 +23,170 @@
    respond @not_allowed_countries "Access denied" 403
}
{% else %}
(country_allow) {
    # Country allow list disabled
}
{% endif %}

# European country allow list - allows all European countries only
{% if eu_countries_codes | default([]) | length > 0 %}
(eu_country_allow) {
    @eu_allowed_local {
        remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
    }
    @eu_not_allowed_countries {
        not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
        not {
            maxmind_geolocation {
                db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
                allow_countries {{ eu_countries_codes | join(' ') }}
            }
        }
    }
    respond @eu_not_allowed_countries "Access denied" 403
}
{% else %}
(eu_country_allow) {
    # EU country allow list disabled
}
{% endif %}

# Trusted country allow list - allows US, Australia, New Zealand, and Japan
{% if trusted_countries_codes | default([]) | length > 0 %}
(trusted_country_allow) {
    @trusted_allowed_local {
        remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
    }
    @trusted_not_allowed_countries {
        not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
        not {
            maxmind_geolocation {
                db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
                allow_countries {{ trusted_countries_codes | join(' ') }}
            }
        }
    }
    respond @trusted_not_allowed_countries "Access denied" 403
}
{% else %}
(trusted_country_allow) {
    # Trusted country allow list disabled
}
{% endif %}

# Sathub country allow list - combines EU and trusted countries
{% if eu_countries_codes | default([]) | length > 0 and trusted_countries_codes | default([]) | length > 0 %}
(sathub_country_allow) {
    @sathub_allowed_local {
        remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
    }
    @sathub_not_allowed_countries {
        not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
        not {
            maxmind_geolocation {
                db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
                allow_countries {{ (eu_countries_codes + trusted_countries_codes) | join(' ') }}
            }
        }
    }
    respond @sathub_not_allowed_countries "Access denied" 403
}
{% else %}
(sathub_country_allow) {
    # Sathub country allow list disabled
}
{% endif %}
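A snippet that is imported but never defined only fails when Caddy loads the config, so it can help to validate the rendered output before reloading. A minimal sketch; the path assumes the container's default config location, override via `CADDYFILE`:

```shell
#!/bin/sh
# Validate the rendered Caddyfile without (re)starting the server.
CADDYFILE="${CADDYFILE:-/etc/caddy/Caddyfile}"

if command -v caddy >/dev/null 2>&1; then
  # `caddy validate` loads and provisions the config, then exits.
  caddy validate --config "$CADDYFILE" --adapter caddyfile
else
  echo "caddy not installed; skipping validation of $CADDYFILE"
fi
```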
{% if inventory_hostname == 'mennos-server' %}
git.mvl.sh {
    import country_allow
    reverse_proxy gitea:3000
    tls {{ caddy_email }}
}

git.vleeuwen.me {
    import country_allow
    redir https://git.mvl.sh{uri}
    tls {{ caddy_email }}
}

df.mvl.sh {
    import country_allow
    redir / https://git.mvl.sh/vleeuwenmenno/dotfiles/raw/branch/master/setup.sh
    tls {{ caddy_email }}
}

fsm.mvl.sh {
    import country_allow
    reverse_proxy factorio-server-manager:80
    tls {{ caddy_email }}
}

fsm.vleeuwen.me {
    import country_allow
    redir https://fsm.mvl.sh{uri}
    tls {{ caddy_email }}
}

beszel.mvl.sh {
    import country_allow
    reverse_proxy beszel:8090
    tls {{ caddy_email }}
}

beszel.vleeuwen.me {
    import country_allow
    redir https://beszel.mvl.sh{uri}
    tls {{ caddy_email }}
}

sathub.de {
    import sathub_country_allow
    handle {
        reverse_proxy sathub-frontend:4173
    }

    # Enable compression
    encode gzip

    # Security headers
    header {
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        X-XSS-Protection "1; mode=block"
        Referrer-Policy "strict-origin-when-cross-origin"
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
    }

    tls {{ caddy_email }}
}

api.sathub.de {
    import sathub_country_allow
    reverse_proxy sathub-backend:4001
    tls {{ caddy_email }}
}

sathub.nl {
    import sathub_country_allow
    redir https://sathub.de{uri}
    tls {{ caddy_email }}
}

photos.mvl.sh {
    import country_allow
    reverse_proxy immich:2283
    tls {{ caddy_email }}
}

photos.vleeuwen.me {
    import country_allow
    redir https://photos.mvl.sh{uri}
    tls {{ caddy_email }}
}

home.mvl.sh {
    import country_allow
    reverse_proxy host.docker.internal:8123 {
        header_up Host {upstream_hostport}
        header_up X-Real-IP {http.request.remote.host}
@@ -93,7 +195,7 @@ home.mvl.sh {
}

home.vleeuwen.me {
    import country_allow
    reverse_proxy host.docker.internal:8123 {
        header_up Host {upstream_hostport}
        header_up X-Real-IP {http.request.remote.host}
@@ -127,13 +229,13 @@ hotspot.mvl.sh:80 {
}

bin.mvl.sh {
    import country_allow
    reverse_proxy privatebin:8080
    tls {{ caddy_email }}
}

ip.mvl.sh ip.vleeuwen.me {
    import country_allow
    reverse_proxy echoip:8080 {
        header_up X-Real-IP {http.request.remote.host}
    }
@@ -141,26 +243,26 @@ ip.mvl.sh ip.vleeuwen.me {
}

http://ip.mvl.sh http://ip.vleeuwen.me {
    import country_allow
    reverse_proxy echoip:8080 {
        header_up X-Real-IP {http.request.remote.host}
    }
}

overseerr.mvl.sh {
    import country_allow
    reverse_proxy overseerr:5055
    tls {{ caddy_email }}
}

overseerr.vleeuwen.me {
    import country_allow
    redir https://overseerr.mvl.sh{uri}
    tls {{ caddy_email }}
}

plex.mvl.sh {
    import country_allow
    reverse_proxy host.docker.internal:32400 {
        header_up Host {upstream_hostport}
        header_up X-Real-IP {http.request.remote.host}
@@ -169,13 +271,13 @@ plex.mvl.sh {
}

plex.vleeuwen.me {
    import country_allow
    redir https://plex.mvl.sh{uri}
    tls {{ caddy_email }}
}

tautulli.mvl.sh {
    import country_allow
    reverse_proxy host.docker.internal:8181 {
        header_up Host {upstream_hostport}
        header_up X-Real-IP {http.request.remote.host}
@@ -184,13 +286,37 @@ tautulli.mvl.sh {
}

tautulli.vleeuwen.me {
    import country_allow
    redir https://tautulli.mvl.sh{uri}
    tls {{ caddy_email }}
}

cloud.mvl.sh {
    import country_allow
    reverse_proxy cloudreve:5212 {
        header_up Host {host}
        header_up X-Real-IP {http.request.remote.host}
    }
    tls {{ caddy_email }}
}

cloud.vleeuwen.me {
    import country_allow
    redir https://cloud.mvl.sh{uri}
    tls {{ caddy_email }}
}

collabora.mvl.sh {
    import country_allow
    reverse_proxy collabora:9980 {
        header_up Host {host}
        header_up X-Real-IP {http.request.remote.host}
    }
    tls {{ caddy_email }}
}

drive.mvl.sh drive.vleeuwen.me {
    import country_allow

    # CalDAV and CardDAV redirects
    redir /.well-known/carddav /remote.php/dav/ 301

View File

@@ -0,0 +1,32 @@
- name: Deploy Cloudreve service
  tags:
    - services
    - cloudreve
  block:
    - name: Set Cloudreve directories
      ansible.builtin.set_fact:
        cloudreve_service_dir: "{{ ansible_env.HOME }}/.services/cloudreve"
        cloudreve_data_dir: "/mnt/services/cloudreve"

    - name: Create Cloudreve directory
      ansible.builtin.file:
        path: "{{ cloudreve_service_dir }}"
        state: directory
        mode: "0755"

    - name: Deploy Cloudreve docker-compose.yml
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: "{{ cloudreve_service_dir }}/docker-compose.yml"
        mode: "0644"
      register: cloudreve_compose

    - name: Stop Cloudreve service
      ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" down --remove-orphans
      changed_when: false
      when: cloudreve_compose.changed

    - name: Start Cloudreve service
      ansible.builtin.command: docker compose -f "{{ cloudreve_service_dir }}/docker-compose.yml" up -d
      changed_when: false
      when: cloudreve_compose.changed
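The stop/start pair above is effectively a redeploy, gated on the template having changed. As a standalone helper it might look like this (directory name taken from the `set_fact` above; everything else is a sketch):

```shell
#!/bin/sh
# Cycle a compose stack the same way the two tasks above do.
SERVICE_DIR="${HOME}/.services/cloudreve"
COMPOSE_FILE="${SERVICE_DIR}/docker-compose.yml"

redeploy() {
  # Only cycle the stack when the compose file actually exists;
  # the playbook's `when: cloudreve_compose.changed` guard plays
  # the equivalent role there.
  if [ -f "$COMPOSE_FILE" ]; then
    docker compose -f "$COMPOSE_FILE" down --remove-orphans
    docker compose -f "$COMPOSE_FILE" up -d
  else
    echo "no compose file at $COMPOSE_FILE"
  fi
}

redeploy
```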

View File

@@ -0,0 +1,67 @@
services:
  cloudreve:
    image: cloudreve/cloudreve:latest
    depends_on:
      - postgresql
      - redis
    restart: always
    ports:
      - 5212:5212
    networks:
      - caddy_network
      - cloudreve
    environment:
      - CR_CONF_Database.Type=postgres
      - CR_CONF_Database.Host=postgresql
      - CR_CONF_Database.User=cloudreve
      - CR_CONF_Database.Name=cloudreve
      - CR_CONF_Database.Port=5432
      - CR_CONF_Redis.Server=redis:6379
    volumes:
      - {{ cloudreve_data_dir }}/data:/cloudreve/data

  postgresql:
    image: postgres:17
    environment:
      - POSTGRES_USER=cloudreve
      - POSTGRES_DB=cloudreve
      - POSTGRES_HOST_AUTH_METHOD=trust
    networks:
      - cloudreve
    volumes:
      - {{ cloudreve_data_dir }}/postgres:/var/lib/postgresql/data

  collabora:
    image: collabora/code
    restart: unless-stopped
    ports:
      - 9980:9980
    environment:
      - domain=collabora\\.mvl\\.sh
      - username=admin
      - password=Dt3hgIJOPr3rgh
      - dictionaries=en_US
      - TZ=Europe/Amsterdam
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
    networks:
      - cloudreve
      - caddy_network
    deploy:
      resources:
        limits:
          memory: 1G

  redis:
    image: redis:latest
    networks:
      - cloudreve
    volumes:
      - {{ cloudreve_data_dir }}/redis:/data

networks:
  cloudreve:
    name: cloudreve
    driver: bridge
  caddy_network:
    name: caddy_default
    external: true

View File

@@ -5,34 +5,34 @@ sections:
  - name: Selfhosted
    items:
      - title: Plex
        icon: http://mennos-server:4000/assets/plex.svg
        url: https://plex.mvl.sh
        statusCheckUrl: https://plex.mvl.sh/identity
        statusCheck: true
        id: 0_1035_plex
      - title: Tautulli
        icon: http://mennos-server:4000/assets/tautulli.svg
        url: https://tautulli.mvl.sh
        id: 1_1035_tautulli
        statusCheck: true
      - title: Overseerr
        icon: http://mennos-server:4000/assets/overseerr.svg
        url: https://overseerr.mvl.sh
        id: 2_1035_overseerr
        statusCheck: true
      - title: Immich
        icon: http://mennos-server:4000/assets/immich.svg
        url: https://photos.mvl.sh
        id: 3_1035_immich
        statusCheck: true
      - title: Nextcloud
        icon: http://mennos-server:4000/assets/nextcloud.svg
        url: https://drive.mvl.sh
        id: 3_1035_nxtcld
        statusCheck: true
      - title: ComfyUI
        icon: http://mennos-server:8188/assets/favicon.ico
        url: http://mennos-server:8188
        statusCheckUrl: http://host.docker.internal:8188/api/system_stats
        id: 3_1035_comfyui
        statusCheck: true
@@ -45,19 +45,19 @@ sections:
  - name: Media Management
    items:
      - title: Sonarr
        icon: http://mennos-server:4000/assets/sonarr.svg
        url: http://go/sonarr
        id: 0_1533_sonarr
      - title: Radarr
        icon: http://mennos-server:4000/assets/radarr.svg
        url: http://go/radarr
        id: 1_1533_radarr
      - title: Prowlarr
        icon: http://mennos-server:4000/assets/prowlarr.svg
        url: http://go/prowlarr
        id: 2_1533_prowlarr
      - title: Tdarr
        icon: http://mennos-server:4000/assets/tdarr.png
        url: http://go/tdarr
        id: 3_1533_tdarr
  - name: Kagi
@@ -77,7 +77,7 @@ sections:
  - name: News
    items:
      - title: Nu.nl
        icon: http://mennos-server:4000/assets/nunl.svg
        url: https://www.nu.nl/
        id: 0_380_nu
      - title: Tweakers.net
@@ -91,7 +91,7 @@ sections:
  - name: Downloaders
    items:
      - title: qBittorrent
        icon: http://mennos-server:4000/assets/qbittorrent.svg
        url: http://go/qbit
        id: 0_1154_qbittorrent
        tags:
@@ -99,7 +99,7 @@ sections:
          - torrent
          - yarr
      - title: Sabnzbd
        icon: http://mennos-server:4000/assets/sabnzbd.svg
        url: http://go/sabnzbd
        id: 1_1154_sabnzbd
        tags:
@@ -109,7 +109,7 @@ sections:
  - name: Git
    items:
      - title: GitHub
        icon: http://mennos-server:4000/assets/github.svg
        url: https://github.com/vleeuwenmenno
        id: 0_292_github
        tags:
@@ -117,7 +117,7 @@ sections:
          - git
          - hub
      - title: Gitea
        icon: http://mennos-server:4000/assets/gitea.svg
        url: http://git.mvl.sh/vleeuwenmenno
        id: 1_292_gitea
        tags:
@@ -127,14 +127,14 @@ sections:
  - name: Server Monitoring
    items:
      - title: Beszel
        icon: http://mennos-server:4000/assets/beszel.svg
        url: http://go/beszel
        tags:
          - monitoring
          - logs
        id: 0_1725_beszel
      - title: Dozzle
        icon: http://mennos-server:4000/assets/dozzle.svg
        url: http://go/dozzle
        id: 1_1725_dozzle
        tags:
@@ -150,19 +150,19 @@ sections:
  - name: Tools
    items:
      - title: Home Assistant
        icon: http://mennos-server:4000/assets/home-assistant.svg
        url: http://go/homeassistant
        id: 0_529_homeassistant
      - title: Tailscale
        icon: http://mennos-server:4000/assets/tailscale.svg
        url: http://go/tailscale
        id: 1_529_tailscale
      - title: GliNet KVM
        icon: http://mennos-server:4000/assets/glinet.svg
        url: http://go/glkvm
        id: 2_529_glinetkvm
      - title: Unifi Network Controller
        icon: http://mennos-server:4000/assets/unifi.svg
        url: http://go/unifi
        id: 3_529_unifinetworkcontroller
      - title: Dashboard Icons
@@ -236,7 +236,7 @@ sections:
          - discount
          - work
      - title: Proxmox
        icon: http://mennos-server:4000/assets/proxmox.svg
        url: https://www.transip.nl/cp/vps/prm/350680/
        id: 5_1429_proxmox
        tags:
@@ -252,29 +252,13 @@ sections:
          - discount
          - work
      - title: Kibana
        icon: http://mennos-server:4000/assets/kibana.svg
        url: http://go/kibana
        id: 7_1429_kibana
        tags:
          - do
          - discount
          - work
appConfig:
  layout: auto
  iconSize: large


@@ -0,0 +1,15 @@
services:
necesse:
image: brammys/necesse-server
container_name: necesse
restart: unless-stopped
ports:
- "14159:14159/udp"
environment:
- MOTD=StarDebris' Server!
- PASSWORD=2142
- SLOTS=4
- PAUSE=1
volumes:
- {{ necesse_data_dir }}/saves:/necesse/saves
- {{ necesse_data_dir }}/logs:/necesse/logs


@@ -0,0 +1,41 @@
---
- name: Deploy Necesse service
block:
- name: Set Necesse directories
ansible.builtin.set_fact:
necesse_service_dir: "{{ ansible_env.HOME }}/.services/necesse"
necesse_data_dir: "/mnt/services/necesse"
- name: Create Necesse service directory
ansible.builtin.file:
path: "{{ necesse_service_dir }}"
state: directory
mode: "0755"
- name: Create Necesse data directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ necesse_data_dir }}"
- "{{ necesse_data_dir }}/saves"
- "{{ necesse_data_dir }}/logs"
- name: Deploy Necesse docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ necesse_service_dir }}/docker-compose.yml"
mode: "0644"
register: necesse_compose
- name: Stop Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" down --remove-orphans
when: necesse_compose.changed
- name: Start Necesse service
ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d
when: necesse_compose.changed
tags:
- services
- necesse
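The stop/start-on-change pattern above shells out to `docker compose` twice. A hedged alternative sketch, assuming the `community.docker` collection is installed: `docker_compose_v2` reconciles the project state itself, so the explicit down/up pair and the `changed`-based conditions become unnecessary.

```yaml
# Hypothetical alternative (assumes the community.docker collection is
# available on the controller). The module brings the project to the
# desired state idempotently, replacing the two command tasks above.
- name: Start or update Necesse service
  community.docker.docker_compose_v2:
    project_src: "{{ necesse_service_dir }}"
    state: present
    remove_orphans: true
```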


@@ -0,0 +1,17 @@
services:
qdrant:
image: qdrant/qdrant:latest
restart: always
ports:
- 6333:6333
- 6334:6334
expose:
- 6333
- 6334
- 6335
volumes:
- /mnt/services/qdrant:/qdrant/storage
deploy:
resources:
limits:
memory: 2G
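Since 6333 and 6334 are already published via `ports`, the `expose:` block is purely informational (other containers on the same network can reach the service ports either way); only 6335 adds information. A trimmed sketch of the same service without the redundant entries:

```yaml
# Equivalent sketch: expose: entries that duplicate published ports are
# dropped; 6335 is kept as documentation of the internal cluster port.
services:
  qdrant:
    image: qdrant/qdrant:latest
    restart: always
    ports:
      - "6333:6333"
      - "6334:6334"
    expose:
      - 6335
    volumes:
      - /mnt/services/qdrant:/qdrant/storage
    deploy:
      resources:
        limits:
          memory: 2G
```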


@@ -0,0 +1,32 @@
- name: Deploy Qdrant service
tags:
- services
- qdrant
block:
- name: Set Qdrant directories
ansible.builtin.set_fact:
qdrant_service_dir: "{{ ansible_env.HOME }}/.services/qdrant"
qdrant_data_dir: "/mnt/services/qdrant"
- name: Create Qdrant directory
ansible.builtin.file:
path: "{{ qdrant_service_dir }}"
state: directory
mode: "0755"
- name: Deploy Qdrant docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ qdrant_service_dir }}/docker-compose.yml"
mode: "0644"
notify: restart_qdrant
- name: Stop Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
changed_when: false
listen: restart_qdrant
- name: Start Qdrant service
ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
changed_when: false
listen: restart_qdrant
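Note that `notify`/`listen` conventionally pair with tasks declared as handlers, not regular tasks in the block; Ansible treats `listen` as a handler keyword. A hedged sketch of the conventional split, assuming the play loads a separate handlers file:

```yaml
# handlers/main.yml (sketch; assumes this file is wired up as the play's
# handlers). Both handlers respond to the restart_qdrant notification
# emitted by the template task above.
- name: Stop Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" down --remove-orphans
  listen: restart_qdrant
- name: Start Qdrant service
  ansible.builtin.command: docker compose -f "{{ qdrant_service_dir }}/docker-compose.yml" up -d
  listen: restart_qdrant
```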


@@ -34,6 +34,7 @@
     register: juicefs_stop
     changed_when: juicefs_stop.changed
     when: redis_compose.changed and juicefs_service_stat.stat.exists
+    become: true
 - name: List containers that are running
   ansible.builtin.command: docker ps -q
@@ -68,6 +69,7 @@
     register: juicefs_start
     changed_when: juicefs_start.changed
     when: juicefs_service_stat.stat.exists
+    become: true
 - name: Restart containers that were stopped
   ansible.builtin.command: docker start {{ item }}


@@ -0,0 +1,53 @@
# Production Environment Variables
# Copy this to .env and fill in your values
# Database configuration (PostgreSQL)
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USER=sathub
DB_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DB_PASSWORD') }}
DB_NAME=sathub
# Required: JWT secret for token signing
JWT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='JWT_SECRET') }}
# Required: Two-factor authentication encryption key
TWO_FA_ENCRYPTION_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='TWO_FA_ENCRYPTION_KEY') }}
# Email configuration (required for password resets)
SMTP_HOST={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_HOST') }}
SMTP_PORT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PORT') }}
SMTP_USERNAME={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_USERNAME') }}
SMTP_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_PASSWORD') }}
SMTP_FROM_EMAIL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='SMTP_FROM_EMAIL') }}
# MinIO Object Storage configuration
MINIO_ROOT_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_ROOT_PASSWORD={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# Basically the same as the above
MINIO_ACCESS_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_USER') }}
MINIO_SECRET_KEY={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='MINIO_ROOT_PASSWORD') }}
# GitHub credentials for Watchtower (auto-updates)
GITHUB_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
GITHUB_PAT={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
REPO_USER={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_USER') }}
REPO_PASS={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='GITHUB_PAT') }}
# Optional: Override defaults if needed
# GIN_MODE=release (set automatically)
FRONTEND_URL=https://sathub.de
# CORS configuration (optional - additional allowed origins)
CORS_ALLOWED_ORIGINS=https://sathub.de,https://sathub.nl,https://api.sathub.de
# Frontend configuration (optional - defaults are provided)
VITE_API_BASE_URL=https://api.sathub.de
VITE_ALLOWED_HOSTS=sathub.de,sathub.nl
# Discord related messaging
DISCORD_CLIENT_ID={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_ID') }}
DISCORD_CLIENT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_SECRET') }}
DISCORD_REDIRECT_URI={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_REDIRECT_URL') }}
DISCORD_WEBHOOK_URL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_WEBHOOK_URL') }}


@@ -0,0 +1,182 @@
services:
# Migration service - runs once on stack startup
migrate:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-migrate
restart: "no"
command: ["./main", "auto-migrate"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
networks:
- sathub
depends_on:
- postgres
backend:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-backend
restart: unless-stopped
command: ["./main", "api"]
environment:
- GIN_MODE=release
- FRONTEND_URL=${FRONTEND_URL:-https://sathub.de}
- CORS_ALLOWED_ORIGINS=${CORS_ALLOWED_ORIGINS:-https://sathub.de}
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# Security settings
- JWT_SECRET=${JWT_SECRET}
- TWO_FA_ENCRYPTION_KEY=${TWO_FA_ENCRYPTION_KEY}
# SMTP settings
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
- caddy_network
depends_on:
migrate:
condition: service_completed_successfully
worker:
image: ghcr.io/vleeuwenmenno/sathub-backend/backend:latest
container_name: sathub-worker
restart: unless-stopped
command: ["./main", "worker"]
environment:
- GIN_MODE=release
# Database settings
- DB_TYPE=postgres
- DB_HOST=postgres
- DB_PORT=5432
- DB_USER=${DB_USER:-sathub}
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=${DB_NAME:-sathub}
# SMTP settings (needed for notifications)
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_FROM_EMAIL=${SMTP_FROM_EMAIL}
# MinIO settings
- MINIO_ENDPOINT=http://minio:9000
- MINIO_BUCKET=sathub-images
- MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
- MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
- MINIO_EXTERNAL_URL=https://obj.sathub.de
# Discord settings
- DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
- DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
- DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
- DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
networks:
- sathub
depends_on:
migrate:
condition: service_completed_successfully
postgres:
image: postgres:15-alpine
container_name: sathub-postgres
restart: unless-stopped
environment:
- POSTGRES_USER=${DB_USER:-sathub}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME:-sathub}
volumes:
- {{ sathub_data_dir }}/postgres_data:/var/lib/postgresql/data
networks:
- sathub
frontend:
image: ghcr.io/vleeuwenmenno/sathub-frontend/frontend:latest
container_name: sathub-frontend
restart: unless-stopped
environment:
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-https://api.sathub.de}
- VITE_ALLOWED_HOSTS=${VITE_ALLOWED_HOSTS:-sathub.de,sathub.nl}
networks:
- sathub
- caddy_network
minio:
image: minio/minio
container_name: sathub-minio
restart: unless-stopped
environment:
- MINIO_ROOT_USER=${MINIO_ROOT_USER}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
volumes:
- {{ sathub_data_dir }}/minio_data:/data
command: server /data --console-address :9001
networks:
- sathub
depends_on:
- postgres
watchtower:
image: containrrr/watchtower:latest
container_name: sathub-watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_STOPPED=false
- REPO_USER=${REPO_USER}
- REPO_PASS=${REPO_PASS}
command: --interval 30 --cleanup --include-stopped=false sathub-backend sathub-worker sathub-frontend
networks:
- sathub
networks:
sathub:
driver: bridge
# We assume you're running a Caddy instance in a separate compose file with this network
# If not, you can remove this network and the related depends_on in the services above
# But the stack is designed to run behind a Caddy reverse proxy for SSL termination and routing
caddy_network:
external: true
name: caddy_default
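The external `caddy_default` network above implies a companion Caddy stack started separately. A minimal sketch of such a stack (image tag, ports, and volume layout are assumptions, not taken from this repository):

```yaml
# Companion reverse-proxy stack (sketch). Running it as project "caddy"
# (e.g. `docker compose -p caddy up -d`) yields the "caddy_default"
# network that the SatHub services join as an external network.
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
```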


@@ -0,0 +1,50 @@
---
- name: Deploy SatHub service
block:
- name: Set SatHub directories
ansible.builtin.set_fact:
sathub_service_dir: "{{ ansible_env.HOME }}/.services/sathub"
sathub_data_dir: "/mnt/services/sathub"
- name: Set SatHub frontend configuration
ansible.builtin.set_fact:
frontend_api_base_url: "https://api.sathub.de"
frontend_allowed_hosts: "sathub.de,sathub.nl"
cors_allowed_origins: "https://sathub.nl,https://api.sathub.de,https://obj.sathub.de"
- name: Create SatHub directory
ansible.builtin.file:
path: "{{ sathub_service_dir }}"
state: directory
mode: "0755"
- name: Create SatHub data directory
ansible.builtin.file:
path: "{{ sathub_data_dir }}"
state: directory
mode: "0755"
- name: Deploy SatHub .env
ansible.builtin.template:
src: .env.j2
dest: "{{ sathub_service_dir }}/.env"
mode: "0644"
register: sathub_env
- name: Deploy SatHub docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ sathub_service_dir }}/docker-compose.yml"
mode: "0644"
register: sathub_compose
- name: Stop SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" down --remove-orphans
when: sathub_compose.changed or sathub_env.changed
- name: Start SatHub service
ansible.builtin.command: docker compose -f "{{ sathub_service_dir }}/docker-compose.yml" up -d
when: sathub_compose.changed or sathub_env.changed
tags:
- services
- sathub


@@ -1,37 +0,0 @@
---
- name: Deploy Stash service
block:
- name: Set Stash directories
ansible.builtin.set_fact:
stash_data_dir: "/mnt/data/stash"
stash_config_dir: "/mnt/services/stash"
stash_service_dir: "{{ ansible_env.HOME }}/.services/stash"
- name: Create Stash directories
ansible.builtin.file:
path: "{{ stash_dir }}"
state: directory
mode: "0755"
loop:
- "{{ stash_data_dir }}"
- "{{ stash_service_dir }}"
loop_control:
loop_var: stash_dir
- name: Deploy Stash docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ stash_service_dir }}/docker-compose.yml"
mode: "0644"
register: stash_compose
- name: Stop Stash service
ansible.builtin.command: docker compose -f "{{ stash_service_dir }}/docker-compose.yml" down --remove-orphans
when: stash_compose.changed
- name: Start Stash service
ansible.builtin.command: docker compose -f "{{ stash_service_dir }}/docker-compose.yml" up -d
when: stash_compose.changed
tags:
- services
- stash


@@ -31,11 +31,6 @@
 - name: Define system desired Flatpaks
   ansible.builtin.set_fact:
     desired_system_flatpaks:
-      # GNOME Software
-      - "{{ 'org.gnome.Extensions' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-      - "{{ 'org.gnome.Weather' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
-      - "{{ 'org.gnome.Sudoku' if (ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP) else omit }}"
       # Games
       - io.github.openhv.OpenHV
       - info.beyondallreason.bar
@@ -46,22 +41,20 @@
       # Multimedia
       - com.plexamp.Plexamp
       - tv.plex.PlexDesktop
+      - com.spotify.Client
       # Messaging
       - com.rtosta.zapzap
       - org.telegram.desktop
       - org.signal.Signal
-      - com.spotify.Client
+      - com.discordapp.Discord
-      # Nextcloud Compatible Utilities
-      - io.github.mrvladus.List
-      - org.gnome.World.Iotas
       # 3D Printing
       - com.bambulab.BambuStudio
       - io.mango3d.LycheeSlicer
       # Utilities
-      - com.fastmail.Fastmail
       - com.ranfdev.DistroShelf
       - io.missioncenter.MissionCenter
       - io.gitlab.elescoute.spacelaunch
@@ -81,6 +74,8 @@
       - io.github.flattool.Ignition
       - io.github.bytezz.IPLookup
       - org.gaphor.Gaphor
+      - io.dbeaver.DBeaverCommunity
+      - com.jetpackduba.Gitnuro
 - name: Define system desired Flatpak remotes
   ansible.builtin.set_fact:


@@ -1,18 +0,0 @@
---
- name: Install Pano - Clipboard Manager dependencies
ansible.builtin.apt:
name:
- gir1.2-gda-5.0
- gir1.2-gsound-1.0
state: present
update_cache: true
become: true
- name: Install Pano - Clipboard Manager
ansible.builtin.import_tasks: tasks/workstations/gnome-extensions/pano.yml
- name: Install Tiling Shell - Window Manager
ansible.builtin.import_tasks: tasks/workstations/gnome-extensions/tilingshell.yml
- name: Install Quick Settings Tweaks
ansible.builtin.import_tasks: tasks/workstations/gnome-extensions/quick-settings.yml


@@ -1,73 +0,0 @@
---
- name: Manage GNOME extension
vars:
requested_git_tag: "{{ git_tag }}"
extension_name: "{{ ext_name }}"
extension_url: "{{ ext_url }}"
extension_path: "{{ ansible_user_dir }}/.local/share/gnome-shell/extensions/{{ ext_id }}"
version_file: "{{ extension_path }}/version.txt"
block:
- name: Check if extension is installed
ansible.builtin.stat:
path: "{{ extension_path }}"
register: ext_check
- name: Read last installed version
ansible.builtin.slurp:
src: "{{ version_file }}"
register: installed_version
ignore_errors: true
when: ext_check.stat.exists
- name: Determine if update is needed
ansible.builtin.set_fact:
update_needed: >-
{{ installed_version.content is not defined or
(installed_version.content | b64decode | trim != requested_git_tag) }}
- name: Delete old extension if updating
ansible.builtin.file:
path: "{{ extension_path }}"
state: absent
when: update_needed
- name: Create directory for extension
ansible.builtin.file:
path: "{{ extension_path }}"
state: directory
mode: "0755"
when: not ext_check.stat.exists or update_needed
- name: Download extension
ansible.builtin.get_url:
url: "{{ extension_url | replace('%TAG%', requested_git_tag) }}"
dest: "{{ extension_path }}/release.zip"
mode: "0644"
when: update_needed or not ext_check.stat.exists
- name: Extract extension
ansible.builtin.unarchive:
src: "{{ extension_path }}/release.zip"
dest: "{{ extension_path }}"
when: update_needed or not ext_check.stat.exists
- name: Store installed version of the extension
ansible.builtin.copy:
content: "{{ requested_git_tag }}"
dest: "{{ version_file }}"
mode: "0644"
when: update_needed or not ext_check.stat.exists
- name: Cleanup post installation
ansible.builtin.file:
path: "{{ extension_path }}/release.zip"
state: absent
when: not ext_check.stat.exists or update_needed
- name: Notify user of required GNOME Shell reload
ansible.builtin.debug:
msg: >
Please reload GNOME Shell by pressing Alt + F2, typing 'r' and pressing Enter.
Then enable the {{ extension_name }} in GNOME Tweaks.
Or on Wayland, log out and back in.
when: not ext_check.stat.exists or update_needed


@@ -1,8 +0,0 @@
---
- name: Manage Pano Clipboard Manager
ansible.builtin.include_tasks: tasks/workstations/gnome-extensions/manage_gnome_extension.yml
vars:
git_tag: "v23-alpha5"
ext_name: "Pano - Clipboard Manager"
ext_url: "https://github.com/oae/gnome-shell-pano/releases/download/%TAG%/pano@elhan.io.zip"
ext_id: "pano@elhan.io"


@@ -1,8 +0,0 @@
---
- name: Manage Quick Settings Tweaks
ansible.builtin.include_tasks: tasks/workstations/gnome-extensions/manage_gnome_extension.yml
vars:
git_tag: "2.1-stable"
ext_name: "Quick Settings Tweaks"
ext_url: "https://github.com/qwreey/quick-settings-tweaks/releases/download/2.1-stable/2.1-release.zip"
ext_id: "quick-settings-tweaks@qwreey"


@@ -1,8 +0,0 @@
---
- name: Manage Tiling Shell - Window Manager
ansible.builtin.include_tasks: tasks/workstations/gnome-extensions/manage_gnome_extension.yml
vars:
git_tag: "16.3"
ext_name: "Tiling Shell - Window Manager"
ext_url: "https://github.com/domferr/tilingshell/releases/download/%TAG%/tilingshell@ferrarodomenico.com.zip"
ext_id: "tilingshell@ferrarodomenico.com"


@@ -6,14 +6,6 @@
 - name: Define workstation symlinks
   ansible.builtin.set_fact:
     workstation_symlinks:
-      - {
-          src: "$DOTFILES_PATH/vscode/settings.json",
-          dest: "~/.config/Code/User/settings.json",
-        }
-      - {
-          src: "$DOTFILES_PATH/zed/settings.json",
-          dest: "~/.config/zed/settings.json",
-        }
       - { src: "$DOTFILES_PATH/config/autostart", dest: "~/.config/autostart" }
 - name: Ensure parent directories for workstation symlinks exist


@@ -0,0 +1,175 @@
---
- name: Install Vicinae
block:
- name: Set Vicinae version
ansible.builtin.set_fact:
vicinae_version: "v0.15.6"
vicinae_appimage_commit: "13865b4c5"
- name: Set architecture-specific variables
ansible.builtin.set_fact:
vicinae_arch: "{{ 'x86_64' if ansible_architecture == 'x86_64' else ansible_architecture }}"
- name: Ensure /opt/vicinae directory exists
ansible.builtin.file:
path: "/opt/vicinae"
state: directory
mode: "0755"
become: true
- name: Download Vicinae AppImage
ansible.builtin.get_url:
url: "https://github.com/vicinaehq/vicinae/releases/download/{{ vicinae_version }}/Vicinae-{{ vicinae_appimage_commit }}-{{ vicinae_arch }}.AppImage"
dest: "/opt/vicinae/vicinae.AppImage"
mode: "0755"
become: true
- name: Remove old Vicinae binary if exists
ansible.builtin.file:
path: "/usr/local/bin/vicinae"
state: absent
become: true
- name: Create symlink to Vicinae AppImage
ansible.builtin.file:
src: "/opt/vicinae/vicinae.AppImage"
dest: "/usr/local/bin/vicinae"
state: link
become: true
- name: Create temporary directory for Vicinae assets download
ansible.builtin.tempfile:
state: directory
suffix: vicinae
register: vicinae_temp_dir
- name: Download Vicinae tarball for assets
ansible.builtin.get_url:
url: "https://github.com/vicinaehq/vicinae/releases/download/{{ vicinae_version }}/vicinae-linux-{{ vicinae_arch }}-{{ vicinae_version }}.tar.gz"
dest: "{{ vicinae_temp_dir.path }}/vicinae.tar.gz"
mode: "0644"
- name: Extract Vicinae tarball
ansible.builtin.unarchive:
src: "{{ vicinae_temp_dir.path }}/vicinae.tar.gz"
dest: "{{ vicinae_temp_dir.path }}"
remote_src: true
- name: Ensure systemd user directory exists
ansible.builtin.file:
path: "/usr/lib/systemd/user"
state: directory
mode: "0755"
become: true
- name: Copy systemd user service
ansible.builtin.copy:
src: "{{ vicinae_temp_dir.path }}/lib/systemd/user/vicinae.service"
dest: "/usr/lib/systemd/user/vicinae.service"
mode: "0644"
remote_src: true
become: true
- name: Update systemd service to use AppImage
ansible.builtin.replace:
path: "/usr/lib/systemd/user/vicinae.service"
regexp: "ExecStart=.*"
replace: "ExecStart=/usr/local/bin/vicinae"
become: true
- name: Ensure applications directory exists
ansible.builtin.file:
path: "/usr/share/applications"
state: directory
mode: "0755"
become: true
- name: Copy desktop files
ansible.builtin.copy:
src: "{{ vicinae_temp_dir.path }}/share/applications/{{ item }}"
dest: "/usr/share/applications/{{ item }}"
mode: "0644"
remote_src: true
become: true
loop:
- vicinae.desktop
- vicinae-url-handler.desktop
- name: Update desktop files to use AppImage
ansible.builtin.replace:
path: "/usr/share/applications/{{ item }}"
regexp: "Exec=.*vicinae"
replace: "Exec=/usr/local/bin/vicinae"
become: true
loop:
- vicinae.desktop
- vicinae-url-handler.desktop
- name: Ensure Vicinae share directory exists
ansible.builtin.file:
path: "/usr/share/vicinae"
state: directory
mode: "0755"
become: true
- name: Copy Vicinae themes directory
ansible.builtin.copy:
src: "{{ vicinae_temp_dir.path }}/share/vicinae/themes/"
dest: "/usr/share/vicinae/themes/"
mode: "0644"
remote_src: true
become: true
- name: Ensure hicolor icons directory exists
ansible.builtin.file:
path: "/usr/share/icons/hicolor/512x512/apps"
state: directory
mode: "0755"
become: true
- name: Copy Vicinae icon
ansible.builtin.copy:
src: "{{ vicinae_temp_dir.path }}/share/icons/hicolor/512x512/apps/vicinae.png"
dest: "/usr/share/icons/hicolor/512x512/apps/vicinae.png"
mode: "0644"
remote_src: true
become: true
- name: Update desktop database
ansible.builtin.command:
cmd: update-desktop-database /usr/share/applications
become: true
changed_when: false
- name: Update icon cache
ansible.builtin.command:
cmd: gtk-update-icon-cache /usr/share/icons/hicolor
become: true
changed_when: false
failed_when: false
- name: Clean up temporary directory
ansible.builtin.file:
path: "{{ vicinae_temp_dir.path }}"
state: absent
- name: Verify Vicinae installation
ansible.builtin.command:
cmd: /usr/local/bin/vicinae --version
register: vicinae_version_check
changed_when: false
failed_when: false
- name: Display installation result
ansible.builtin.debug:
msg: |
{% if vicinae_version_check.rc == 0 %}
✓ Vicinae AppImage installed successfully with all themes and assets!
Version: {{ vicinae_version_check.stdout }}
{% else %}
✗ Vicinae installation completed but version check failed.
This may be normal if --version flag is not supported.
Try running: vicinae
{% endif %}
tags:
- vicinae


@@ -4,14 +4,13 @@
 - name: Include workstation symlinks tasks
   ansible.builtin.import_tasks: tasks/workstations/symlinks.yml
+- name: Include Zed configuration tasks
+  ansible.builtin.import_tasks: tasks/workstations/zed.yml
 - name: Include workstation cliphist tasks
   ansible.builtin.import_tasks: tasks/workstations/cliphist.yml
   when: "'microsoft-standard-WSL2' not in ansible_kernel"
-- name: Include GNOME Extensions tasks
-  ansible.builtin.import_tasks: tasks/workstations/gnome-extensions.yml
-  when: ansible_facts.env.XDG_CURRENT_DESKTOP is defined and 'GNOME' in ansible_facts.env.XDG_CURRENT_DESKTOP and 'microsoft-standard-WSL2' not in ansible_kernel
 - name: Include Firefox APT installation tasks
   ansible.builtin.import_tasks: tasks/workstations/firefox-apt.yml
   when: ansible_pkg_mgr == 'apt' and ansible_facts.packages.snapd is defined and 'microsoft-standard-WSL2' not in ansible_kernel
@@ -43,6 +42,10 @@
   ansible.builtin.import_tasks: tasks/workstations/autostart.yml
   when: "'microsoft-standard-WSL2' not in ansible_kernel"
+- name: Include Vicinae tasks
+  ansible.builtin.import_tasks: tasks/workstations/vicinae.yml
+  when: "'microsoft-standard-WSL2' not in ansible_kernel"
 - name: Ensure workstation common packages are installed
   ansible.builtin.package:
     name:


@@ -0,0 +1,20 @@
---
- name: Zed Configuration
block:
- name: Set user home directory
ansible.builtin.set_fact:
user_home: "{{ ansible_env.HOME if ansible_user_id == 'root' else lookup('env', 'HOME') }}"
- name: Ensure Zed config directory exists
ansible.builtin.file:
path: "{{ user_home }}/.config/zed"
state: directory
mode: "0755"
- name: Template Zed settings with 1Password secrets
ansible.builtin.template:
src: zed.jsonc
dest: "{{ user_home }}/.config/zed/settings.json"
mode: "0644"
tags:
- zed


@@ -5,7 +5,7 @@ Before=docker.service
 [Service]
 Type=simple
-ExecStart=/usr/local/bin/juicefs mount redis://:{{ redis_password }}@mennos-desktop:6379/0 /mnt/object_storage \
+ExecStart=/usr/local/bin/juicefs mount redis://:{{ redis_password }}@mennos-server:6379/0 /mnt/object_storage \
     --cache-dir=/var/jfsCache \
     --buffer-size=4096 \
     --prefetch=16 \

ansible/templates/zed.jsonc (new file, 202 lines)

@@ -0,0 +1,202 @@
// Zed settings
//
// For information on how to configure Zed, see the Zed
// documentation: https://zed.dev/docs/configuring-zed
//
// To see all of Zed's default settings without changing your
// custom settings, run `zed: open default settings` from the
// command palette (cmd-shift-p / ctrl-shift-p)
{
// #############################################
// ## Theming ##
// #############################################
"formatter": "prettier",
"context_servers": {
"mcp-server-context7": {
"source": "extension",
"enabled": true,
"settings": {
"context7_api_key": "{{ lookup('community.general.onepassword', 'Zed Settings', vault='Dotfiles', field='mcp-server-context7') }}",
},
},
"memory": {
"source": "custom",
"enabled": true,
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"],
"env": {
"MEMORY_FILE_PATH": "${input:memory_file_path}",
},
},
},
"features": {
"edit_prediction_provider": "copilot",
},
"telemetry": {
"diagnostics": false,
"metrics": false,
},
"ssh_connections": [
{
"host": "desktop",
"projects": [
{
"paths": ["/home/menno/.dotfiles"],
},
],
},
{
"host": "salt.dev-via-laptop",
"username": "salt",
"projects": [
{
"paths": ["/home/salt/releases"],
},
],
},
],
"icon_theme": {
"mode": "system",
"light": "VSCode Icons (Dark)",
"dark": "VSCode Icons (Dark)",
},
"ui_font_size": 16,
"buffer_font_size": 14,
"terminal": {
"font_size": 14,
},
"minimap": {
"show": "always",
"thumb": "hover",
"current_line_highlight": "all",
"display_in": "active_editor",
},
"theme": {
"mode": "system",
"light": "One Light",
"dark": "VSCode Dark Modern",
},
"tabs": {
"close_position": "right",
"file_icons": true,
"git_status": true,
"activate_on_close": "history",
"show_close_button": "hover",
"show_diagnostics": "errors",
},
"toolbar": {
"code_actions": true,
},
// #############################################
// ## Preferences ##
// #############################################
"restore_on_startup": "last_session",
"auto_update": true,
"base_keymap": "VSCode",
"cursor_shape": "bar",
"hide_mouse": "on_typing",
"on_last_window_closed": "quit_app",
"ensure_final_newline_on_save": true,
"format_on_save": "on",
"tab_size": 2,
"inlay_hints": {
"enabled": true,
"show_parameter_hints": true,
},
// #############################################
// ## AI Stuff ##
// #############################################
"agent": {
"profiles": {
"ask": {
"name": "Ask",
"tools": {
"contents": true,
"diagnostics": true,
"fetch": true,
"list_directory": true,
"project_notifications": false,
"now": true,
"find_path": true,
"read_file": true,
"open": true,
"grep": true,
"thinking": true,
"web_search": true,
},
"enable_all_context_servers": false,
"context_servers": {
"memory": {
"tools": {
"search_nodes": true,
"read_graph": true,
"open_nodes": true,
"delete_relations": true,
"delete_observations": true,
"delete_entities": true,
"create_relations": true,
"create_entities": true,
"add_observations": true,
},
},
"mcp-server-context7": {
"tools": {
"resolve-library-id": true,
"get-library-docs": true,
},
},
},
},
},
"always_allow_tool_actions": true,
"default_profile": "write",
"model_parameters": [],
"default_model": {
"provider": "copilot_chat",
"model": "grok-code-fast-1",
},
},
"edit_predictions": {
"mode": "subtle",
"enabled_in_text_threads": true,
"disabled_globs": [
"**/.env*",
"**/*.pem",
"**/*.key",
"**/*.cert",
"**/*.crt",
"**/.dev.vars",
"**/secrets/**",
],
},
// #############################################
// ## Extensions ##
// #############################################
"auto_install_extensions": {
"dockerfile": true,
"html": true,
"yaml": true,
"docker-compose": true,
"golang": true,
},
// #############################################
// ## Languages ##
// #############################################
"languages": {
"PHP": {
"language_servers": ["phptools"],
},
"Dart": {
"code_actions_on_format": {
"source.organizeImports": true,
},
},
},
"lsp": {
"phptools": {
"initialization_options": {
"0": "<YOUR LICENSE KEY>",
},
},
},
}


@@ -1,87 +0,0 @@
#!/usr/bin/env python3
import os
import sys
import time
import subprocess
# Import helper functions
sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin"))
from helpers.functions import printfe, run_command
def check_command_exists(command):
"""Check if a command is available in the system"""
try:
subprocess.run(
["which", command],
check=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
return True
except subprocess.CalledProcessError:
return False
def list_screen_sessions():
"""List all screen sessions"""
success, output = run_command(["screen", "-ls"])
return output
def wipe_dead_sessions():
"""Check and clean up dead screen sessions"""
screen_list = list_screen_sessions()
if "Dead" in screen_list:
print("Found dead sessions, cleaning up...")
run_command(["screen", "-wipe"])
def is_app_running(app_name):
"""Check if an app is already running in a screen session"""
screen_list = list_screen_sessions()
return app_name in screen_list
def start_app(app_name, command):
"""Start an application in a screen session"""
printfe("green", f"Starting {app_name} with command: {command}...")
run_command(["screen", "-dmS", app_name] + command.split())
time.sleep(1) # Give it a moment to start
def main():
# Define dictionary with app_name => command mapping
apps = {
"vesktop": "vesktop",
"ktailctl": "flatpak run org.fkoehler.KTailctl",
"nemo-desktop": "nemo-desktop",
}
# Clean up dead sessions if any
wipe_dead_sessions()
print("Starting auto-start applications...")
for app_name, command in apps.items():
# Get the binary name (first part of the command)
command_binary = command.split()[0]
# Check if the command exists
if check_command_exists(command_binary):
# Check if the app is already running
if is_app_running(app_name):
printfe("yellow", f"{app_name} is already running. Skipping...")
continue
# Start the application
start_app(app_name, command)
# Display screen sessions
print(list_screen_sessions())
return 0
if __name__ == "__main__":
sys.exit(main())


@@ -1,13 +1,14 @@
 #!/usr/bin/env python3
+"""Display welcome message and system information."""
 import os
 import sys
 import subprocess
-from datetime import datetime

 # Import helper functions
-sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin"))
-from helpers.functions import printfe, logo, _rainbow_color, COLORS
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
+from helpers.functions import logo, _rainbow_color, COLORS


 def get_last_ssh_login():
@@ -17,12 +18,16 @@ def get_last_ssh_login():
             ["lastlog", "-u", os.environ.get("USER", "")],
             capture_output=True,
             text=True,
+            check=False,
         )

         # If lastlog didn't work try lastlog2
         if result.returncode != 0:
             result = subprocess.run(
-                ["lastlog2", os.environ.get("USER", "")], capture_output=True, text=True
+                ["lastlog2", os.environ.get("USER", "")],
+                capture_output=True,
+                text=True,
+                check=False,
             )

         if result.returncode == 0:
@@ -38,9 +43,7 @@ def get_last_ssh_login():
                 time_str = " ".join(parts[3:])
                 return f"{COLORS['cyan']}Last SSH login{COLORS['reset']}{COLORS['yellow']} {time_str}{COLORS['cyan']} from{COLORS['yellow']} {ip}"
         return None
-    except Exception as e:
-        # For debugging, you might want to print the exception
-        # print(f"Error getting SSH login: {str(e)}")
+    except (subprocess.CalledProcessError, FileNotFoundError):
         return None
@@ -67,6 +70,7 @@ def check_dotfiles_status():
             cwd=dotfiles_path,
             capture_output=True,
             text=True,
+            check=False,
         )

         if result.stdout.strip():
@@ -85,6 +89,7 @@ def check_dotfiles_status():
             cwd=dotfiles_path,
             capture_output=True,
             text=True,
+            check=False,
         )
         if result.returncode == 0:
             status["commit_hash"] = result.stdout.strip()
@@ -97,13 +102,14 @@ def check_dotfiles_status():
             stdout=subprocess.PIPE,
             stderr=subprocess.DEVNULL,
             text=True,
+            check=False,
         )
         if result.returncode == 0:
             status["unpushed"] = len(result.stdout.splitlines())

         return status
-    except Exception as e:
-        print(f"Error checking dotfiles status: {str(e)}")
+    except (OSError, subprocess.SubprocessError) as e:
+        print(f"Error checking dotfiles status: {e}")
         return None
@@ -119,7 +125,7 @@ def get_condensed_status():
             count = len(items)
             if count > 0:
                 status_parts.append(f"[!] {count} file(s) in trash")
-        except Exception:
+        except OSError:
             pass

     # Check dotfiles status
@@ -182,6 +188,7 @@ def welcome():
 def main():
+    """Main entry point for the hello action."""
     logo(continue_after=True)
     welcome()
     return 0


@@ -1,27 +1,32 @@
 #!/usr/bin/env python3
+"""Display help information for the dotfiles system."""
 import os
 import sys

 # Import helper functions
-sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin"))
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
 from helpers.functions import printfe, println, logo


 def main():
+    """Display help information."""
     # Print logo
     logo(continue_after=True)

     # Print help
     dotfiles_path = os.environ.get("DOTFILES_PATH", os.path.expanduser("~/.dotfiles"))
     try:
-        with open(f"{dotfiles_path}/bin/resources/help.txt", "r") as f:
+        with open(
+            f"{dotfiles_path}/bin/resources/help.txt", "r", encoding="utf-8"
+        ) as f:
             help_text = f.read()
-            print(help_text)
-    except Exception as e:
+    except OSError as e:
         printfe("red", f"Error reading help file: {e}")
         return 1

+    print(help_text)
     println(" ", "cyan")
     return 0


@@ -1,5 +1,7 @@
 #!/usr/bin/env python3
+"""Run linters on dotfiles."""
 import os
 import sys
 import subprocess
@@ -7,7 +9,7 @@ import argparse
 from pathlib import Path

 # Import helper functions
-sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__))))
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
 from helpers.functions import printfe, command_exists

 DOTFILES_ROOT = os.path.expanduser("~/.dotfiles")
@@ -85,16 +87,6 @@ def lint_python(fix=False):
     exit_code = 0

-    # Check for pylint
-    if command_exists("pylint"):
-        printfe("blue", "Running pylint...")
-        files_to_lint = [str(f) for f in python_files]
-        result = subprocess.run(["pylint"] + files_to_lint, check=False)
-        if result.returncode != 0:
-            exit_code = 1
-    else:
-        printfe("yellow", "pylint is not installed. Skipping Python linting.")
-
     # Check for black
     if command_exists("black"):
         printfe(
@@ -111,6 +103,16 @@ def lint_python(fix=False):
     else:
         printfe("yellow", "black is not installed. Skipping Python formatting.")

+    # Check for pylint
+    if command_exists("pylint"):
+        printfe("blue", "Running pylint...")
+        files_to_lint = [str(f) for f in python_files]
+        result = subprocess.run(["pylint"] + files_to_lint, check=False)
+        if result.returncode != 0:
+            exit_code = 1
+    else:
+        printfe("yellow", "pylint is not installed. Skipping Python linting.")
+
     if not command_exists("pylint") and not command_exists("black"):
         printfe(
             "red",


@@ -1,185 +0,0 @@
#!/usr/bin/env python3
import os
import sys
import subprocess
import hashlib
import glob
# Import helper functions
sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin"))
from helpers.functions import printfe, run_command
def get_password():
"""Get password from 1Password"""
op_cmd = "op"
# Try to get the password
success, output = run_command(
[op_cmd, "read", "op://Dotfiles/Dotfiles Secrets/password"]
)
if not success:
printfe("red", "Failed to fetch password from 1Password.")
return None
# Check if we need to use a token
if "use 'op item get" in output:
# Extract the token
token = output.split("use 'op item get ")[1].split(" --")[0]
printfe("cyan", f"Got fetch token: {token}")
# Use the token to get the actual password
success, password = run_command(
[op_cmd, "item", "get", token, "--reveal", "--fields", "password"]
)
if not success:
return None
return password
else:
# We already got the password
return output
def prompt_for_password():
"""Ask for password manually"""
import getpass
printfe("cyan", "Enter the password manually: ")
password = getpass.getpass("")
if not password:
printfe("red", "Password cannot be empty.")
sys.exit(1)
printfe("green", "Password entered successfully.")
return password
def calculate_checksum(file_path):
"""Calculate SHA256 checksum of a file"""
sha256_hash = hashlib.sha256()
with open(file_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
def encrypt_folder(folder_path, password):
"""Recursively encrypt files in a folder"""
for item in glob.glob(os.path.join(folder_path, "*")):
# Skip .gpg and .sha256 files
if item.endswith(".gpg") or item.endswith(".sha256"):
continue
# Handle directories recursively
if os.path.isdir(item):
encrypt_folder(item, password)
continue
# Calculate current checksum
current_checksum = calculate_checksum(item)
checksum_file = f"{item}.sha256"
# Check if file changed since last encryption
if os.path.exists(checksum_file):
with open(checksum_file, "r") as f:
previous_checksum = f.read().strip()
if current_checksum == previous_checksum:
continue
# Remove existing .gpg file if it exists
gpg_file = f"{item}.gpg"
if os.path.exists(gpg_file):
os.remove(gpg_file)
# Encrypt the file
printfe("cyan", f"Encrypting {item}...")
cmd = [
"gpg",
"--quiet",
"--batch",
"--yes",
"--symmetric",
"--cipher-algo",
"AES256",
"--armor",
"--passphrase",
password,
"--output",
gpg_file,
item,
]
success, _ = run_command(cmd)
if success:
printfe("cyan", f"Staging {item} for commit...")
run_command(["git", "add", "-f", gpg_file])
# Update checksum file
with open(checksum_file, "w") as f:
f.write(current_checksum)
else:
printfe("red", f"Failed to encrypt {item}")
def decrypt_folder(folder_path, password):
"""Recursively decrypt files in a folder"""
for item in glob.glob(os.path.join(folder_path, "*")):
# Handle .gpg files
if item.endswith(".gpg"):
output_file = item[:-4] # Remove .gpg extension
printfe("cyan", f"Decrypting {item}...")
cmd = [
"gpg",
"--quiet",
"--batch",
"--yes",
"--decrypt",
"--passphrase",
password,
"--output",
output_file,
item,
]
success, _ = run_command(cmd)
if not success:
printfe("red", f"Failed to decrypt {item}")
# Process directories recursively
elif os.path.isdir(item):
printfe("cyan", f"Decrypting folder {item}...")
decrypt_folder(item, password)
def main():
if len(sys.argv) != 2 or sys.argv[1] not in ["encrypt", "decrypt"]:
printfe("red", "Usage: secrets.py [encrypt|decrypt]")
return 1
# Get the dotfiles path
dotfiles_path = os.environ.get("DOTFILES_PATH", os.path.expanduser("~/.dotfiles"))
secrets_path = os.path.join(dotfiles_path, "secrets")
# Get the password
password = get_password()
if not password:
password = prompt_for_password()
# Perform the requested action
if sys.argv[1] == "encrypt":
printfe("cyan", "Encrypting secrets...")
encrypt_folder(secrets_path, password)
else: # decrypt
printfe("cyan", "Decrypting secrets...")
decrypt_folder(secrets_path, password)
return 0
if __name__ == "__main__":
sys.exit(main())
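The deleted secrets script skips re-encryption by comparing a stored SHA-256 checksum against the file's current digest. A standalone sketch of that digest computation over an in-memory byte string (not code from the repo) shows the same block-wise hashing:

```python
import hashlib


def calculate_checksum(data: bytes, block_size: int = 4096) -> str:
    # Feed the hash in fixed-size blocks, mirroring the deleted script's
    # per-file read loop, and return the hex digest.
    h = hashlib.sha256()
    for i in range(0, len(data), block_size):
        h.update(data[i : i + block_size])
    return h.hexdigest()


if __name__ == "__main__":
    print(calculate_checksum(b"hello"))
    # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Because SHA-256 is incremental, hashing in 4096-byte blocks yields the same digest as hashing the whole input at once, which is why the script can stream large files without loading them into memory.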


@@ -1,13 +1,15 @@
 #!/usr/bin/env python3
+"""Manage Docker services."""
 import os
 import sys
 import subprocess
 import argparse

 # Import helper functions
-sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin"))
-from helpers.functions import printfe, println, logo
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
+from helpers.functions import printfe, println

 # Base directory for Docker services $HOME/services
 SERVICES_DIR = os.path.join(os.path.expanduser("~"), ".services")
@@ -42,7 +44,7 @@ def run_docker_compose(args, service_name=None, compose_file=None):
         cmd.extend(args)

     printfe("blue", f"Running: {' '.join(cmd)}")
-    result = subprocess.run(cmd)
+    result = subprocess.run(cmd, check=False)
     return result.returncode
@@ -107,7 +109,8 @@ def cmd_stop(args):
     if protected_running:
         printfe(
             "yellow",
-            f"Note: {', '.join(protected_running)} will not be stopped as they are protected services",
+            f"Note: {', '.join(protected_running)} will not be stopped "
+            "as they are protected services",
         )

     if not safe_services:
@@ -134,19 +137,18 @@ def cmd_stop(args):
         else:
             printfe("green", "\nAll running services stopped successfully")
         return 0
-    else:
-        # Check if trying to stop a protected service
-        if args.service in PROTECTED_SERVICES:
-            printfe(
-                "red",
-                f"Error: {args.service} is a protected service and cannot be stopped",
-            )
-            printfe(
-                "yellow",
-                f"The {args.service} service is required for other services to work properly",
-            )
-            return 1
-        return run_docker_compose(["down"], service_name=args.service)
+
+    # Check if trying to stop a protected service
+    if args.service in PROTECTED_SERVICES:
+        printfe(
+            "red",
+            f"Error: {args.service} is a protected service and cannot be stopped",
+        )
+        printfe(
+            "yellow",
+            f"The {args.service} service is required for other services to work properly",
+        )
+        return 1
+    return run_docker_compose(["down"], service_name=args.service)


 def cmd_restart(args):
@@ -206,15 +208,15 @@ def cmd_update(args):
         else:
             printfe("green", "\nAll running services updated successfully")
         return 0
-    else:
-        # The original single-service update logic
-        # First pull the latest images
-        pull_result = run_docker_compose(["pull"], service_name=args.service)
-        if pull_result != 0:
-            return pull_result

-        # Then bring the service up with the latest images
-        return run_docker_compose(["up", "-d"], service_name=args.service)
+    # The original single-service update logic
+    # First pull the latest images
+    pull_result = run_docker_compose(["pull"], service_name=args.service)
+    if pull_result != 0:
+        return pull_result
+
+    # Then bring the service up with the latest images
+    return run_docker_compose(["up", "-d"], service_name=args.service)


 def cmd_ps(args):
@@ -248,6 +250,7 @@ def check_service_running(service_name):
         ["docker", "compose", "-f", compose_file, "ps", "--quiet"],
         capture_output=True,
         text=True,
+        check=False,
     )

     # Count non-empty lines to get container count
@@ -261,29 +264,33 @@ def get_systemd_timer_status(timer_name):
     active_result = subprocess.run(
         ["sudo", "systemctl", "is-active", timer_name],
         capture_output=True,
-        text=True
+        text=True,
+        check=False,
     )

     # Check if timer is enabled (will start on boot)
     enabled_result = subprocess.run(
         ["sudo", "systemctl", "is-enabled", timer_name],
         capture_output=True,
-        text=True
+        text=True,
+        check=False,
     )

     # Check corresponding service status
-    service_name = timer_name.replace('.timer', '.service')
+    service_name = timer_name.replace(".timer", ".service")
     service_result = subprocess.run(
         ["sudo", "systemctl", "is-active", service_name],
         capture_output=True,
-        text=True
+        text=True,
+        check=False,
     )

     # Get next run time
     list_result = subprocess.run(
         ["sudo", "systemctl", "list-timers", timer_name, "--no-legend"],
         capture_output=True,
-        text=True
+        text=True,
+        check=False,
     )

     is_active = active_result.returncode == 0
@@ -299,7 +306,7 @@ def get_systemd_timer_status(timer_name):
     return is_active, is_enabled, next_run, service_status


-def cmd_list(args):
+def cmd_list(args):  # pylint: disable=unused-argument
     """List available Docker services and systemd services"""
     # Docker services section
     if not os.path.exists(SERVICES_DIR):
@@ -322,7 +329,10 @@ def cmd_list(args):
         is_running = container_count > 0

         if is_running:
-            status = f"[RUNNING - {container_count} container{'s' if container_count > 1 else ''}]"
+            status = (
+                f"[RUNNING - {container_count} container"
+                f"{'s' if container_count > 1 else ''}]"
+            )
             color = "green"
         else:
             status = "[STOPPED]"
@@ -337,8 +347,10 @@ def cmd_list(args):
     systemd_timers = ["borg-backup.timer", "borg-local-sync.timer", "dynamic-dns.timer"]

     for timer in systemd_timers:
-        is_active, is_enabled, next_run, service_status = get_systemd_timer_status(timer)
-        service_name = timer.replace('.timer', '')
+        is_active, is_enabled, next_run, service_status = get_systemd_timer_status(
+            timer
+        )
+        service_name = timer.replace(".timer", "")

         if service_status in ["activating", "active"]:
             # Service is currently running
@@ -360,6 +372,7 @@ def cmd_list(args):
 def main():
+    """Main entry point for managing Docker services."""
     parser = argparse.ArgumentParser(description="Manage Docker services")
     subparsers = parser.add_subparsers(dest="command", help="Command to run")


@@ -1,27 +1,39 @@
 #!/usr/bin/env python3
+"""Generate export commands for Borg environment variables."""
 import os
 import sys
 import subprocess

-# Add the helpers directory to the path
-sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'helpers'))
-from functions import printfe
+# Add the bin directory to the path
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
+from helpers.functions import printfe


 def get_borg_passphrase():
     """Get Borg passphrase from 1Password"""
     try:
         result = subprocess.run(
-            ["op", "item", "get", "Borg Backup", "--vault=Dotfiles", "--fields=password", "--reveal"],
+            [
+                "op",
+                "item",
+                "get",
+                "Borg Backup",
+                "--vault=Dotfiles",
+                "--fields=password",
+                "--reveal",
+            ],
             capture_output=True,
             text=True,
-            check=True
+            check=True,
         )
         return result.stdout.strip()
     except subprocess.CalledProcessError:
         printfe("red", "Error: Failed to retrieve Borg passphrase from 1Password")
         return None


 def main():
     """Generate export commands for Borg environment variables"""
     args = sys.argv[1:] if len(sys.argv) > 1 else []
@@ -33,12 +45,12 @@ def main():
     # Generate the export commands
     exports = [
-        f'export BORG_REPO="/mnt/object_storage/borg-repo"',
+        'export BORG_REPO="/mnt/object_storage/borg-repo"',
         f'export BORG_PASSPHRASE="{passphrase}"',
-        f'export BORG_CACHE_DIR="/home/menno/.config/borg/cache"',
-        f'export BORG_CONFIG_DIR="/home/menno/.config/borg/config"',
-        f'export BORG_SECURITY_DIR="/home/menno/.config/borg/security"',
-        f'export BORG_KEYS_DIR="/home/menno/.config/borg/keys"'
+        'export BORG_CACHE_DIR="/home/menno/.config/borg/cache"',
+        'export BORG_CONFIG_DIR="/home/menno/.config/borg/config"',
+        'export BORG_SECURITY_DIR="/home/menno/.config/borg/security"',
+        'export BORG_KEYS_DIR="/home/menno/.config/borg/keys"',
     ]

     # Check if we're being eval'd (no arguments and stdout is a pipe)
@@ -63,19 +75,17 @@ def main():
     print()
     printfe("yellow", "Or copy and paste these exports:")
     print()

     # Output the export commands
     for export_cmd in exports:
         print(export_cmd)

     print()
     printfe("cyan", "📋 Borg commands (use with sudo -E):")
     printfe("white", "   sudo -E borg list                  # List all backups")
     printfe("white", "   sudo -E borg info                  # Repository info")
     printfe("white", "   sudo -E borg list ::archive-name   # List files in backup")
     printfe("white", "   sudo -E borg mount . ~/borg-mount  # Mount as filesystem")
     return 0


 if __name__ == "__main__":
     sys.exit(main())


@@ -1,22 +1,28 @@
 #!/usr/bin/env python3
+"""Display status of systemd timers."""
 import os
 import subprocess
 import sys

-# Add the helpers directory to the path
-sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'helpers'))
-from functions import printfe
+# Add the bin directory to the path
+sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
+from helpers.functions import printfe


 def run_command(cmd, capture_output=True):
     """Run a command and return the result"""
     try:
-        result = subprocess.run(cmd, shell=True, capture_output=capture_output, text=True)
+        result = subprocess.run(
+            cmd, shell=True, capture_output=capture_output, text=True
+        )
         return result
     except Exception as e:
         printfe("red", f"Error running command: {e}")
         return None


 def show_timer_status(timer_name, system_level=True):
     """Show concise status for a specific timer"""
     cmd_prefix = "sudo systemctl" if system_level else "systemctl --user"
@@ -24,10 +30,12 @@ def show_timer_status(timer_name, system_level=True):
     # Get timer status
     status_cmd = f"{cmd_prefix} is-active {timer_name}"
     status_result = run_command(status_cmd)
-    timer_status = "active" if status_result and status_result.returncode == 0 else "inactive"
+    timer_status = (
+        "active" if status_result and status_result.returncode == 0 else "inactive"
+    )

     # Get corresponding service status
-    service_name = timer_name.replace('.timer', '.service')
+    service_name = timer_name.replace(".timer", ".service")
     service_cmd = f"{cmd_prefix} is-active {service_name}"
     service_result = run_command(service_cmd)
     service_status = service_result.stdout.strip() if service_result else "unknown"
@@ -43,7 +51,7 @@ def show_timer_status(timer_name, system_level=True):
                 next_run = f"{parts[0]} {parts[1]} {parts[2]} ({parts[3]})"

     # Format output based on service status
-    service_short = service_name.replace('.service', '')
+    service_short = service_name.replace(".service", "")

     if service_status in ["activating", "active"]:
         # Service is currently running
@@ -63,6 +71,7 @@ def show_timer_status(timer_name, system_level=True):
     printfe(status_color, f"{symbol} {service_short:<12} {status_text}")

+
 def show_examples():
     """Show example commands for checking services and logs"""
     printfe("cyan", "=== Useful Commands ===")
@@ -92,6 +101,7 @@ def show_examples():
     print("  sudo systemctl list-timers")
     print()

+
 def main():
     """Main timers action"""
     args = sys.argv[1:] if len(sys.argv) > 1 else []
@@ -103,7 +113,7 @@ def main():
     timers = [
         ("borg-backup.timer", True),
         ("borg-local-sync.timer", True),
-        ("dynamic-dns.timer", True)
+        ("dynamic-dns.timer", True),
     ]

     for timer_name, system_level in timers:
@@ -118,5 +128,6 @@ def main():
     return 0

+
 if __name__ == "__main__":
     sys.exit(main())


@@ -1,12 +1,14 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
"""Update the dotfiles system."""
import os import os
import sys import sys
import subprocess import subprocess
import argparse import argparse
# Import helper functions # Import helper functions
sys.path.append(os.path.join(os.path.expanduser("~/.dotfiles"), "bin")) sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from helpers.functions import printfe, run_command from helpers.functions import printfe, run_command
@@ -29,6 +31,9 @@ def help_message():
" --full-speed, -F Upgrade packages and use all available cores for compilation. (Default: 8 cores)", " --full-speed, -F Upgrade packages and use all available cores for compilation. (Default: 8 cores)",
) )
printfe("green", " --skip-check, -s Skip checking for dotfiles updates.") printfe("green", " --skip-check, -s Skip checking for dotfiles updates.")
printfe(
"green", " --system, -S Update system packages (flatpak, brew, apt, etc.)"
)
printfe("green", " --help, -h Display this help message.") printfe("green", " --help, -h Display this help message.")
return 0 return 0
@@ -230,13 +235,80 @@ def get_sudo_password_from_1password(username, hostname):
printfe("red", f"Failed to fetch password from 1Password: {e.stderr.strip()}") printfe("red", f"Failed to fetch password from 1Password: {e.stderr.strip()}")
return None return None
except FileNotFoundError: except FileNotFoundError:
printfe("red", "Error: 'op' command not found. Please ensure 1Password CLI is installed and in your PATH.") printfe(
"red",
"Error: 'op' command not found. Please ensure 1Password CLI is installed and in your PATH.",
)
return None return None
except Exception as e: except Exception as e:
printfe("red", f"An unexpected error occurred while fetching password: {e}") printfe("red", f"An unexpected error occurred while fetching password: {e}")
return None return None
def get_distro():
"""Detect the Linux distribution."""
try:
with open("/etc/os-release", "r") as f:
for line in f:
if line.startswith("ID="):
return line.split("=", 1)[1].strip().strip('"').lower()
except:
return None
def update_system_packages(sudo_password):
"""Update system packages using available package managers."""
# System package updates
printfe("cyan", "Checking for system package updates...")
# Check for flatpak
status, _ = run_command(["which", "flatpak"], shell=False)
if status:
printfe("cyan", "Updating Flatpak packages...")
result = subprocess.run(["flatpak", "update", "-y"], check=False)
if result.returncode != 0:
printfe("yellow", "Flatpak update failed.")
# Check for brew
status, _ = run_command(["which", "brew"], shell=False)
if status:
printfe("cyan", "Updating Homebrew packages...")
result = subprocess.run(["brew", "update"], check=False)
if result.returncode == 0:
result = subprocess.run(["brew", "upgrade"], check=False)
if result.returncode != 0:
printfe("yellow", "Brew upgrade failed.")
else:
printfe("yellow", "Brew update failed.")
# Distro specific updates
distro = get_distro()
if distro:
printfe("cyan", f"Detected distro: {distro}")
sudo_cmd = ["sudo", "-S"]
if distro in ["ubuntu", "debian"]:
cmds = [["apt", "update"], ["apt", "upgrade", "-y"]]
elif distro == "arch":
cmds = [["pacman", "-Syu", "--noconfirm"]]
elif distro in ["fedora", "rhel", "centos"]:
cmds = [["yum", "update", "-y"]]
else:
cmds = []
for cmd in cmds:
full_cmd = sudo_cmd + cmd
printfe("cyan", f"Running: {' '.join(full_cmd)}")
if sudo_password:
result = subprocess.run(
full_cmd, input=sudo_password + "\n", text=True, check=False
)
else:
result = subprocess.run(full_cmd, check=False)
if result.returncode != 0:
printfe("yellow", "Command failed.")
else:
printfe("yellow", "Could not detect distro, skipping package manager updates.")
def main():
    # Parse arguments
    parser = argparse.ArgumentParser(add_help=False)

@@ -251,9 +323,7 @@ def main():
        action="store_true",
        help="Upgrade Ansible packages with verbose output",
    )
    parser.add_argument("--tags", type=str, help="Run only specific Ansible tags")
    parser.add_argument(
        "--full-speed", "-F", action="store_true", help="Use all available cores"
    )

@@ -262,7 +332,17 @@ def main():
    )
    parser.add_argument(
        "--skip-check",
        "-s",
        action="store_true",
        help="Skip checking for dotfiles updates",
    )
    parser.add_argument(
        "--system",
        "-S",
        action="store_true",
        help="Update system packages (flatpak, brew, apt, etc.)",
    )

    args = parser.parse_args()

@@ -270,10 +350,17 @@ def main():
    if args.help:
        return help_message()

    username = os.environ.get("USER", os.environ.get("USERNAME", "user"))
    hostname = os.uname().nodename
    sudo_password = None
    if os.isatty(sys.stdin.fileno()):
        sudo_password = get_sudo_password_from_1password(username, hostname)

    # If no specific option provided, run all
    if not args.ha and not args.ansible and not args.ansible_verbose:
        args.ha = True
        args.ansible = True
        args.system = True

    # If ansible_verbose is set, also set ansible
    if args.ansible_verbose:

@@ -287,9 +374,13 @@ def main():
    else:
        printfe("yellow", "Skipping dotfiles repository update check (--skip-check).")

    if args.system:
        update_system_packages(sudo_password)

    # Set cores and jobs based on full-speed flag
    if args.full_speed:
        import multiprocessing

        cores = jobs = multiprocessing.cpu_count()
    else:
        cores = 8

@@ -344,7 +435,7 @@ def main():
        str(jobs),
    ]

    result = subprocess.run(cmd, env=env, check=False)
    if result.returncode != 0:
        printfe("red", "Failed to upgrade Home Manager packages.")
        return 1

@@ -357,8 +448,6 @@ def main():
    dotfiles_path = os.environ.get(
        "DOTFILES_PATH", os.path.expanduser("~/.dotfiles")
    )

    # Ensure required collections are installed
    if not ensure_ansible_collections():

@@ -383,16 +472,20 @@ def main():
        hostname,
    ]

    if not os.isatty(sys.stdin.fileno()):
        printfe(
            "yellow",
            "Warning: Not running in an interactive terminal. Cannot fetch password from 1Password.",
        )
        ansible_cmd.append("--ask-become-pass")
    else:
        if sudo_password:
            ansible_cmd.extend(["--become-pass-file", "-"])
        else:
            printfe(
                "yellow",
                "Could not fetch password from 1Password. Falling back to --ask-become-pass.",
            )
            ansible_cmd.append("--ask-become-pass")

    if args.tags:

@@ -402,13 +495,15 @@ def main():
        ansible_cmd.append("-vvv")

    # Debug: Show the command being executed
    printfe("cyan", f"Executing command: {' '.join(ansible_cmd)}")

    # Execute the Ansible command, passing password via stdin if available
    if sudo_password:
        result = subprocess.run(
            ansible_cmd, input=sudo_password + "\n", text=True, check=False
        )
    else:
        result = subprocess.run(ansible_cmd, check=False)

    if result.returncode != 0:
        printfe("red", "Failed to upgrade Ansible packages.")


@@ -5,10 +5,12 @@ import signal
import subprocess
import sys

def signal_handler(sig, frame):
    print("Exiting.")
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

# Script constants
# Script constants # Script constants
@@ -22,51 +24,54 @@ from helpers.functions import printfe, ensure_dependencies
ensure_dependencies()


def run_script(script_path, args):
    """Run an action script with the given arguments"""
    if not os.path.isfile(script_path) or not os.access(script_path, os.X_OK):
        printfe("red", f"Error: Script not found or not executable: {script_path}")
        return 1

    result = subprocess.run(
        [script_path] + args, env={**os.environ, "DOTFILES_PATH": DOTFILES_PATH}
    )
    return result.returncode


def update(args):
    """Run the update action"""
    return run_script(f"{DOTFILES_BIN}/actions/update.py", args)


def hello(args):
    """Run the hello action"""
    return run_script(f"{DOTFILES_BIN}/actions/hello.py", args)


def help(args):
    """Run the help action"""
    return run_script(f"{DOTFILES_BIN}/actions/help.py", args)


def secrets(args):
    """Run the secrets action"""
    return run_script(f"{DOTFILES_BIN}/actions/secrets.py", args)


def auto_start(args):
    """Run the auto-start action"""
    return run_script(f"{DOTFILES_BIN}/actions/auto-start.py", args)


def service(args):
    """Run the service/docker action"""
    return run_script(f"{DOTFILES_BIN}/actions/service.py", args)


def lint(args):
    """Run the lint action"""
    return run_script(f"{DOTFILES_BIN}/actions/lint.py", args)


def timers(args):
    """Run the timers action"""
    return run_script(f"{DOTFILES_BIN}/actions/timers.py", args)


def source(args):
    """Run the source action"""
    return run_script(f"{DOTFILES_BIN}/actions/source.py", args)


def ensure_git_hooks():
    """Ensure git hooks are correctly set up"""
    hooks_dir = os.path.join(DOTFILES_ROOT, ".git/hooks")

@@ -74,14 +79,19 @@ def ensure_git_hooks():
    # Validate target directory exists
    if not os.path.isdir(target_link):
        printfe(
            "red", f"Error: Git hooks source directory does not exist: {target_link}"
        )
        return 1

    # Handle existing symlink
    if os.path.islink(hooks_dir):
        current_link = os.readlink(hooks_dir)
        if current_link != target_link:
            printfe(
                "yellow",
                "Incorrect git hooks symlink found. Removing and recreating...",
            )
            os.remove(hooks_dir)
        else:
            return 0

@@ -90,6 +100,7 @@ def ensure_git_hooks():
    if os.path.isdir(hooks_dir) and not os.path.islink(hooks_dir):
        printfe("yellow", "Removing existing hooks directory...")
        import shutil

        shutil.rmtree(hooks_dir)

    # Create new symlink

@@ -101,6 +112,7 @@ def ensure_git_hooks():
        printfe("red", f"Failed to create git hooks symlink: {e}")
        return 1


def main():
    # Ensure we're in the correct directory
    if not os.path.isdir(DOTFILES_ROOT):

@@ -119,18 +131,45 @@ def main():
        "update": update,
        "help": help,
        "hello": hello,
        "secrets": secrets,
        "auto-start": auto_start,
        "service": service,
        "lint": lint,
        "timers": timers,
        "source": source,
    }

    if command in commands:
        return commands[command](args)
    else:
        # For invalid commands, show error after logo
        if command != "help":
            from helpers.functions import logo

            logo(continue_after=True)
            print()
            printfe("red", f"✗ Error: Unknown command '{command}'")

            # Provide helpful hints for common mistakes
            if command == "ls":
                printfe("yellow", "  Hint: Did you mean 'dotf service ls'?")
            elif command == "list":
                printfe("yellow", "  Hint: Did you mean 'dotf service list'?")
            print()

            # Now print help text without logo
            dotfiles_path = os.environ.get(
                "DOTFILES_PATH", os.path.expanduser("~/.dotfiles")
            )
            try:
                with open(
                    f"{dotfiles_path}/bin/resources/help.txt", "r", encoding="utf-8"
                ) as f:
                    print(f.read())
            except OSError as e:
                printfe("red", f"Error reading help file: {e}")
            return 1

        return help([])


if __name__ == "__main__":
    sys.exit(main())
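The `commands` dict plus lookup in `main()` above is a standard dispatch-table pattern: adding a subcommand is one function and one dict entry, and unknown commands fall through to a friendly error instead of a `KeyError`. A self-contained sketch of the same idea, with the action functions stubbed to return strings instead of calling `run_script`:

```python
# Stub actions standing in for the real run_script-backed handlers.
def update(args):
    return f"update {args}"

def secrets(args):
    return f"secrets {args}"

commands = {
    "update": update,
    "secrets": secrets,
}

def dispatch(command, args):
    handler = commands.get(command)
    if handler is None:
        # Unknown command: report it instead of raising KeyError.
        return f"unknown command: {command}"
    return handler(args)

print(dispatch("secrets", ["encrypt"]))  # secrets ['encrypt']
print(dispatch("frobnicate", []))        # unknown command: frobnicate
```

Using `dict.get` keeps the error path explicit, which is where the script hangs its "did you mean…" hints.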

bin/helpers/__init__.py (new file)


@@ -1,5 +1,7 @@
#!/usr/bin/env python3
"""Helper functions for the dotfiles system."""

import sys
import subprocess
import math

@@ -7,6 +9,7 @@ import random
import shutil
import datetime

try:
    import pyfiglet
except ImportError:

@@ -157,7 +160,7 @@ def ensure_dependencies():
    if missing_packages:
        printfe("yellow", f"Missing dependencies: {', '.join(missing_packages)}")
        install = input("Would you like to install them now? (y/n): ").lower()
        if install in ("y", "yes"):
            printfe("cyan", "Installing missing dependencies...")
            for package in missing_packages:
                printfe("blue", f"Installing {package}...")

@@ -171,7 +174,6 @@ def ensure_dependencies():
            printfe("green", "All dependencies have been processed")
            return True

        printfe("yellow", "Skipping dependency installation")
        return False

    return True


@@ -7,11 +7,6 @@ Usage: dotf [OPTIONS] [ARGS]
    --ansible-verbose   Upgrade Ansible packages with verbose output (-vvv)
    --full-speed, -F    Use all available cores for compilation (Default: 8 cores)

service: Manage Docker services for development.
  Commands:
    start SERVICE       Start a Docker service

@@ -30,6 +25,5 @@ Usage: dotf [OPTIONS] [ARGS]
    --python            Run only Python linters (pylint, black)
    --fix               Auto-fix issues where possible

hello: Shows the welcome message for the terminal.
help: Shows this help message

config/autostart/.gitkeep (new executable file)


@@ -0,0 +1,11 @@
[Desktop Entry]
Name=Nextcloud
GenericName=File Synchronizer
Exec="/usr/bin/nextcloud" --background
Terminal=false
Icon=Nextcloud
Categories=Network
Type=Application
StartupNotify=false
X-GNOME-Autostart-enabled=true
X-GNOME-Autostart-Delay=10


@@ -0,0 +1,8 @@
[Desktop Entry]
Type=Application
Name=Equibop
Comment=Equibop autostart script
Exec="/opt/Equibop/equibop"
StartupNotify=false
Terminal=false
Icon=vesktop


@@ -0,0 +1,15 @@
[Desktop Entry]
Icon=/home/menno/.jetbrains-toolbox/toolbox.svg
Exec=/home/menno/.jetbrains-toolbox/jetbrains-toolbox --minimize
Version=1.0
Type=Application
Categories=Development
Name=JetBrains Toolbox
StartupWMClass=jetbrains-toolbox
Terminal=false
MimeType=x-scheme-handler/jetbrains;
X-GNOME-Autostart-enabled=true
StartupNotify=false
X-GNOME-Autostart-Delay=10
X-MATE-Autostart-Delay=10
X-KDE-autostart-after=panel


@@ -0,0 +1,2 @@
[MIME Cache]
x-scheme-handler/jetbrains=jetbrains-toolbox.desktop;


@@ -1,4 +1,7 @@
{
  config,
  ...
}:
{
  programs.bash = {

@@ -8,7 +11,10 @@
    # History configuration
    historySize = 1000;
    historyFileSize = 2000;
    historyControl = [
      "ignoredups"
      "ignorespace"
    ];

    # Bash options and extra configuration
    bashrcExtra = ''

@@ -25,6 +31,45 @@
      shopt -s no_empty_cmd_completion
      shopt -s nocaseglob

      # Set various environment variables
      export NIXPKGS_ALLOW_INSECURE=1
      export NIXPKGS_ALLOW_UNFREE=1
      export DOTFILES_PATH="${config.home.homeDirectory}/.dotfiles"
      export EDITOR="code --wait"
      export STARSHIP_ENABLE_RIGHT_PROMPT="true"
      export STARSHIP_ENABLE_BASH_COMPLETION="true"
      export XDG_DATA_DIRS="/usr/share:/var/lib/flatpak/exports/share:${config.home.homeDirectory}/.local/share/flatpak/exports/share"
      export BUN_INSTALL="$HOME/.bun"

      # Source .profile (If exists)
      if [ -f "${config.home.homeDirectory}/.profile" ]; then
        source "${config.home.homeDirectory}/.profile"
      fi

      # Source .bashrc.local (If exists)
      if [ -f "${config.home.homeDirectory}/.bashrc.local" ]; then
        source "${config.home.homeDirectory}/.bashrc.local"
      fi

      # Homebrew (if installed)
      if [ -d /home/linuxbrew/.linuxbrew ]; then
        eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
      fi

      # PyEnv (if installed)
      if [ -d "${config.home.homeDirectory}/.pyenv" ]; then
        export PYENV_ROOT="${config.home.homeDirectory}/.pyenv"
        [[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
        eval "$(pyenv init - bash)"
      fi

      # NVM (if installed)
      if [ -d "$HOME/.nvm" ]; then
        export NVM_DIR="$HOME/.nvm"
        [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
        [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
      fi

      # Detect distribution and set CGO_CFLAGS for Pop!_OS
      if [ -f /etc/os-release ]; then
        distro=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')

@@ -37,10 +82,13 @@
      if [[ "$(uname -a)" == *"microsoft-standard-WSL2"* ]]; then
        [ -f "${config.home.homeDirectory}/.agent-bridge.sh" ] && source "${config.home.homeDirectory}/.agent-bridge.sh"
        alias winget='winget.exe'
        alias ssh-add="ssh-add.exe"
        alias git="git.exe"
      fi

      # Set SSH_AUTH_SOCK to 1Password agent if not already set
      # Also block /run/user/1000/gnupg/S.gpg-agent.ssh and override with 1Password
      if [ -z "$SSH_AUTH_SOCK" ] || [[ "$SSH_AUTH_SOCK" == *"gnupg/S.gpg-agent.ssh"* ]]; then
        export SSH_AUTH_SOCK=~/.1password/agent.sock
      fi

@@ -68,11 +116,6 @@
        fi
      fi

      # Source ble.sh if available and configure fzf history search
      if [[ -f "${config.home.homeDirectory}/.nix-profile/share/blesh/ble.sh" ]]; then
        source "${config.home.homeDirectory}/.nix-profile/share/blesh/ble.sh"

@@ -96,11 +139,6 @@
        bind -x '"\C-r": fzf_history_search'
      fi

      # Display welcome message for interactive shells
      if [ -t 1 ]; then
        command -v helloworld &> /dev/null && helloworld

@@ -109,18 +147,12 @@
    '';

    # Shell aliases
    shellAliases = {
      # Docker Compose alias (for old scripts)
      "docker-compose" = "docker compose";

      # Modern tools aliases
      "l" =
        "eza --header --long --git --group-directories-first --group --icons --color=always --sort=name --hyperlink -o --no-permissions";
      "ll" = "l";
      "la" = "l -a";
      "cat" = "bat";

@@ -139,8 +171,10 @@
      "dcps" = "docker compose ps";
      "dcpr" = "dcp && dcd && dcu -d && dcl -f";
      "dcr" = "dcd && dcu -d && dcl -f";
      "ddpul" =
        "docker compose down && docker compose pull && docker compose up -d && docker compose logs -f";
      "docker-nuke" =
        "docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker system prune --all --volumes --force && docker volume prune --force";

      # Git aliases
      "g" = "git";

@@ -158,32 +192,16 @@
      # Kubernetes aliases
      "kubectl" = "minikube kubectl --";

      # SSH alias
      "ssh" = "${config.home.homeDirectory}/.local/bin/smart-ssh";

      # Utility aliases
      "random" = "openssl rand -base64";

      # Folder navigation
      ".." = "cd ..";
      "..." = "cd ../..";
      "...." = "cd ../../..";
    };

    # Profile extra (runs for login shells)

@@ -191,26 +209,15 @@
      # PATH manipulation
      export PATH="$PATH:${config.home.homeDirectory}/.local/bin"
      export PATH="$PATH:${config.home.homeDirectory}/.cargo/bin"
      export PATH="$PATH:${config.home.homeDirectory}/.dotfiles/bin"
      export PATH="/usr/bin:$PATH"
      export PATH="$BUN_INSTALL/bin:$PATH"

      # PKG_CONFIG_PATH
      if [ -d /usr/lib/pkgconfig ]; then
        export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/share/pkgconfig:$PKG_CONFIG_PATH
      fi

      # pnpm
      if [ -d "${config.home.homeDirectory}/.local/share/pnpm" ]; then
        export PATH="$PATH:${config.home.homeDirectory}/.local/share/pnpm"

@@ -234,6 +241,11 @@
      if [ -d "${config.home.homeDirectory}/Projects/Work" ]; then
        export TRADAWARE_DEVOPS=true
      fi

      # Japanese input
      export GTK_IM_MODULE=fcitx5
      export QT_IM_MODULE=fcitx5
      export XMODIFIERS="@im=fcitx5"
    '';

    # Interactive shell specific configuration

@@ -267,5 +279,4 @@
    ];
  };
}


@@ -4,5 +4,6 @@
    ./bash.nix
    ./git.nix
    ./starship.nix
    ./ssh.nix
  ];
}


@@ -1,4 +1,9 @@
{
  config,
  pkgs,
  lib,
  ...
}:
{
  programs.git = {

@@ -7,6 +12,9 @@
    # Basic configuration
    userName = "Menno van Leeuwen";
    userEmail = "menno@vleeuwen.me";
    signing = lib.mkIf (!config.isServer) {
      key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM+sKpcREOUjwMMSzEWAso6830wbOi8kUxqpuXWw5gHr";
    };

    # Git settings
    extraConfig = {

@@ -15,7 +23,7 @@
      };
      core = {
        editor = "micro";
        autocrlf = false;
        filemode = true;
        ignorecase = false;

@@ -27,12 +35,7 @@
      };
      pull = {
        rebase = false;
      };
      merge = {

@@ -40,6 +43,10 @@
        conflictstyle = "diff3";
      };
      rebase = {
        autoStash = true;
      };
      diff = {
        tool = "delta";
      };

@@ -78,8 +85,15 @@
    };

    # Security
    gpg = lib.mkIf (!config.isServer) {
      format = "ssh";
      ssh = {
        program = "/opt/1Password/op-ssh-sign";
      };
    };
    commit = lib.mkIf (!config.isServer) {
      gpgsign = true;
    };

    # Performance
config/nextcloud.cfg (new file, 80 lines)

@@ -0,0 +1,80 @@
[General]
clientVersion=3.16.0-1 (Debian built)
desktopEnterpriseChannel=daily
isVfsEnabled=false
launchOnSystemStartup=true
optionalServerNotifications=true
overrideLocalDir=
overrideServerUrl=
promptDeleteAllFiles=false
showCallNotifications=true
showChatNotifications=true
[Accounts]
0\Folders\1\ignoreHiddenFiles=false
0\Folders\1\journalPath=.sync_42a4129584d0.db
0\Folders\1\localPath=/home/menno/Nextcloud/
0\Folders\1\paused=false
0\Folders\1\targetPath=/
0\Folders\1\version=2
0\Folders\1\virtualFilesMode=off
0\Folders\2\ignoreHiddenFiles=false
0\Folders\2\journalPath=.sync_65a742b0aa83.db
0\Folders\2\localPath=/home/menno/Desktop/
0\Folders\2\paused=false
0\Folders\2\targetPath=/Desktop
0\Folders\2\version=2
0\Folders\2\virtualFilesMode=off
0\Folders\3\ignoreHiddenFiles=false
0\Folders\3\journalPath=.sync_65289e64a490.db
0\Folders\3\localPath=/home/menno/Documents/
0\Folders\3\paused=false
0\Folders\3\targetPath=/Documents
0\Folders\3\version=2
0\Folders\3\virtualFilesMode=off
0\Folders\4\ignoreHiddenFiles=false
0\Folders\4\journalPath=.sync_283a65eecb9c.db
0\Folders\4\localPath=/home/menno/Music/
0\Folders\4\paused=false
0\Folders\4\targetPath=/Music
0\Folders\4\version=2
0\Folders\4\virtualFilesMode=off
0\Folders\5\ignoreHiddenFiles=false
0\Folders\5\journalPath=.sync_884042991bd6.db
0\Folders\5\localPath=/home/menno/3D Objects/
0\Folders\5\paused=false
0\Folders\5\targetPath=/3D Objects
0\Folders\5\version=2
0\Folders\5\virtualFilesMode=off
0\Folders\6\ignoreHiddenFiles=false
0\Folders\6\journalPath=.sync_90ea5e3c7a33.db
0\Folders\6\localPath=/home/menno/Videos/
0\Folders\6\paused=false
0\Folders\6\targetPath=/Videos
0\Folders\6\version=2
0\Folders\6\virtualFilesMode=off
0\authType=webflow
0\dav_user=menno
0\displayName=Menno van Leeuwen
0\encryptionCertificateSha256Fingerprint=@ByteArray()
0\networkDownloadLimit=0
0\networkDownloadLimitSetting=-2
0\networkProxyHostName=
0\networkProxyNeedsAuth=false
0\networkProxyPort=0
0\networkProxySetting=0
0\networkProxyType=2
0\networkProxyUser=
0\networkUploadLimit=0
0\networkUploadLimitSetting=-2
0\serverColor=@Variant(\0\0\0\x43\x1\xff\xff\x1c\x1c$$<<\0\0)
0\serverHasValidSubscription=false
0\serverTextColor=@Variant(\0\0\0\x43\x1\xff\xff\xff\xff\xff\xff\xff\xff\0\0)
0\serverVersion=32.0.0.13
0\url=https://drive.mvl.sh
0\version=13
0\webflow_user=menno
version=13
[Settings]
geometry=@ByteArray(\x1\xd9\xd0\xcb\0\x3\0\0\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W\0\0\0\x1\0\0\0\0\x14\0\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W)

config/ssh.nix (new file, 20 lines)

@@ -0,0 +1,20 @@
{ ... }:
{
programs.ssh = {
enable = true;
compression = true;
serverAliveInterval = 60;
serverAliveCountMax = 3;
# SSH Multiplexing - reuses existing SSH connections for multiple sessions, reducing authentication overhead and improving speed for subsequent logins.
controlPath = "~/.ssh/master-%r@%n:%p";
controlMaster = "auto";
controlPersist = "600";
# Include custom configs from 1Password (See packages/common/secrets.nix)
includes = [
"~/.ssh/config.d/*.conf"
];
};
}
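For reference, the multiplexing options above correspond to OpenSSH directives roughly like the following generated `~/.ssh/config` (a sketch assuming Home Manager's usual rendering — the exact layout may differ):

```
Host *
  Compression yes
  ServerAliveInterval 60
  ServerAliveCountMax 3
  # First connection becomes the master; later sessions reuse its socket.
  ControlMaster auto
  ControlPath ~/.ssh/master-%r@%n:%p
  # Keep the master socket open 600 seconds after the last session exits.
  ControlPersist 600

Include ~/.ssh/config.d/*.conf
```

The `%r@%n:%p` tokens (remote user, host name as given, port) keep one socket per distinct destination, so multiplexed sessions never cross hosts.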


@@ -1,4 +1,9 @@
{
  config,
  pkgs,
  lib,
  ...
}:
{
  programs.starship = {


@@ -1,3 +0,0 @@
{
"enable-crash-reporter": true,
}


@@ -1,86 +0,0 @@
{
"security.workspace.trust.untrustedFiles": "open",
"editor.fontFamily": "Hack Nerd Font",
"terminal.integrated.fontFamily": "Hack Nerd Font",
"github.copilot.enable": {
"*": true
},
"git.autofetch": true,
"[jsonc]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[json]": {
"editor.defaultFormatter": "vscode.json-language-features"
},
"vsicons.dontShowNewVersionMessage": true,
"debug.internalConsoleOptions": "openOnSessionStart",
"[go]": {
"editor.tabSize": 4,
"editor.renderWhitespace": "all"
},
"[nix]": {
"editor.formatOnSave": true,
"editor.formatOnType": true
},
"[dart]": {
"editor.formatOnSave": false,
"editor.formatOnType": false,
"editor.rulers": [120],
"editor.selectionHighlight": false,
"editor.tabCompletion": "onlySnippets",
"editor.wordBasedSuggestions": "off"
},
"remote.SSH.remotePlatform": {
"mennos-laptop-w": "linux",
"mennos-desktop": "linux",
"cloud": "linux"
},
"editor.tabSize": 2,
"editor.insertSpaces": true,
"editor.detectIndentation": true,
"editor.autoIndent": "full",
"debug.inlineValues": "on",
"git.confirmSync": false,
"[dockercompose]": {
"editor.defaultFormatter": "ms-azuretools.vscode-docker"
},
"go.toolsManagement.autoUpdate": true,
"redhat.telemetry.enabled": false,
"makefile.configureOnOpen": false,
"dart.debugExternalPackageLibraries": true,
"dart.debugSdkLibraries": true,
"dart.warnWhenEditingFilesOutsideWorkspace": false,
"window.confirmSaveUntitledWorkspace": false,
"git.openRepositoryInParentFolders": "never",
"debug.toolBarLocation": "commandCenter",
"workbench.colorTheme": "Catppuccin Latte",
"ansible.lightspeed.enabled": false,
"ansible.lightspeed.suggestions.enabled": false,
"docker.extension.enableComposeLanguageServer": false,
"roo-cline.allowedCommands": [
"npm test",
"npm install",
"tsc",
"git log",
"git diff",
"git show"
],
"roo-cline.deniedCommands": [],
"kilo-code.allowedCommands": [
"npm test",
"npm install",
"tsc",
"git log",
"git diff",
"git show",
"flutter analyze",
"flutter",
"make"
],
"kilo-code.deniedCommands": [],
"github.copilot.nextEditSuggestions.enabled": true,
"workbench.iconTheme": "vscode-icons"
}


@@ -1,151 +0,0 @@
// Zed settings
//
// For information on how to configure Zed, see the Zed
// documentation: https://zed.dev/docs/configuring-zed
//
// To see all of Zed's default settings without changing your
// custom settings, run `zed: open default settings` from the
// command palette (cmd-shift-p / ctrl-shift-p)
{
// #############################################
// ## Theming ##
// #############################################
"telemetry": {
"diagnostics": false,
"metrics": false
},
"ssh_connections": [
{
"host": "desktop",
"projects": [
{
"paths": [
"/home/menno/.dotfiles"
]
},
{
"paths": [
"/mnt/services/dashy"
]
}
],
"nickname": "Menno's Desktop PC"
},
{
"host": "salt.dev",
"projects": []
},
{
"host": "salt.dev",
"username": "salt",
"projects": [
{
"paths": [
"/home/salt/releases/current"
]
}
]
}
],
"icon_theme": "Catppuccin Macchiato",
"ui_font_size": 16,
"buffer_font_size": 16,
"minimap": {
"show": "always",
"thumb": "hover",
"current_line_highlight": "all",
"display_in": "active_editor"
},
"theme": {
"mode": "system",
"light": "Catppuccin Latte",
"dark": "Catppuccin Macchiato"
},
"tabs": {
"close_position": "right",
"file_icons": true,
"git_status": true,
"activate_on_close": "history",
"show_close_button": "hover",
"show_diagnostics": "errors"
},
"toolbar": {
"code_actions": true
},
// #############################################
// ## Preferences ##
// #############################################
"restore_on_startup": "last_session",
"auto_update": true,
"base_keymap": "VSCode",
"cursor_shape": "bar",
"hide_mouse": "on_typing",
"on_last_window_closed": "quit_app",
"ensure_final_newline_on_save": true,
"format_on_save": "prettier",
"tab_size": 2,
"inlay_hints": {
"enabled": true,
"show_parameter_hints": true
},
// #############################################
// ## AI Stuff ##
// #############################################
"agent": {
"play_sound_when_agent_done": false,
"default_profile": "write",
"model_parameters": [],
"default_model": {
"provider": "copilot_chat",
"model": "claude-sonnet-4"
}
},
"edit_predictions": {
"mode": "subtle",
"enabled_in_text_threads": true,
"disabled_globs": [
"**/.env*",
"**/*.pem",
"**/*.key",
"**/*.cert",
"**/*.crt",
"**/.dev.vars",
"**/secrets/**"
]
},
// #############################################
// ## Extensions ##
// #############################################
"auto_install_extensions": {
"dockerfile": true,
"html": true,
"yaml": true,
"docker-compose": true,
"golang": true
},
// #############################################
// ## Languages ##
// #############################################
"languages": {
"PHP": {
"language_servers": ["phptools"]
},
"Dart": {
"code_actions_on_format": {
"source.organizeImports": true
}
}
},
"lsp": {
"phptools": {
"initialization_options": {
"0": "<YOUR LICENSE KEY>"
}
}
}
}
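The `edit_predictions.disabled_globs` list above keeps AI predictions away from secrets by excluding any path matching those patterns. A hypothetical Python sketch of editor-style glob matching (this is not Zed's actual matcher) shows which paths such globs would exclude:

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    """Translate an editor-style glob to a regex: '**/' spans directories,
    '*' stays within one path segment. Illustrative only, not Zed's matcher."""
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**/", i):
            out.append(r"(?:[^/]+/)*")  # zero or more directory segments
            i += 3
        elif pattern.startswith("**", i):
            out.append(r".*")           # anything, including "/"
            i += 2
        elif pattern[i] == "*":
            out.append(r"[^/]*")        # anything within one segment
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(out))

disabled_globs = ["**/.env*", "**/*.pem", "**/*.key", "**/secrets/**"]

def is_excluded(path: str) -> bool:
    return any(glob_to_regex(g).fullmatch(path) for g in disabled_globs)

print(is_excluded("app/.env.local"))    # True
print(is_excluded("certs/server.pem"))  # True
print(is_excluded("src/main.go"))       # False
```

Note that `**/.env*` also matches a bare `.env` at the repository root, because the leading `**/` may match zero directory segments.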

flake.lock

@@ -41,11 +41,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1758346548,
-        "narHash": "sha256-afXE7AJ7MY6wY1pg/Y6UPHNYPy5GtUKeBkrZZ/gC71E=",
+        "lastModified": 1761597516,
+        "narHash": "sha256-wxX7u6D2rpkJLWkZ2E932SIvDJW8+ON/0Yy8+a5vsDU=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "b2a3852bd078e68dd2b3dfa8c00c67af1f0a7d20",
+        "rev": "daf6dc47aa4b44791372d6139ab7b25269184d55",
         "type": "github"
       },
       "original": {
@@ -77,11 +77,11 @@
       "nixpkgs": "nixpkgs_2"
     },
     "locked": {
-      "lastModified": 1751283143,
-      "narHash": "sha256-I3DMLT0qg5xxjS7BrmOBIK6pG+vZqOhKivEGnkDIli8=",
+      "lastModified": 1761503988,
+      "narHash": "sha256-MlMZXCTtPeXq/cDtJcL2XM8wCN33XOT9V2dB3PLV6f0=",
       "owner": "brizzbuzz",
       "repo": "opnix",
-      "rev": "1a807befe8f418da0df24c54b9633c395d840d0e",
+      "rev": "48fdb078b5a1cd0b20b501fccf6be2d1279d6fe6",
       "type": "github"
     },
     "original": {
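The `lastModified` values in flake.lock are plain Unix timestamps (seconds since the epoch); converting the new nixpkgs pin from the hunk above shows when that revision landed:

```python
from datetime import datetime, timezone

# 1761597516 is the updated nixpkgs lastModified value from the diff above.
pinned = datetime.fromtimestamp(1761597516, tz=timezone.utc)
print(pinned.isoformat())  # 2025-10-27T20:38:36+00:00
```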


@@ -18,28 +18,34 @@
     opnix,
   }:
   let
-    supportedSystems = [ "x86_64-linux" "aarch64-linux" ];
-    forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
-    pkgsFor = system: import nixpkgs {
-      inherit system;
-      config.allowUnfree = true;
-    };
+    pkgsFor =
+      system:
+      import nixpkgs {
+        inherit system;
+        config.allowUnfree = true;
+      };
   in
   {
-    homeConfigurations = let
-      mkHomeConfig = system: hostname: isServer:
-        home-manager.lib.homeManagerConfiguration {
-          pkgs = pkgsFor system;
-          modules = [ ./home.nix ];
-          extraSpecialArgs = {
-            pkgs = pkgsFor system;
-            inherit opnix isServer hostname;
-          };
-        };
-    in {
-      "mennos-vps" = mkHomeConfig "aarch64-linux" "mennos-vps" true;
-      "mennos-desktop" = mkHomeConfig "x86_64-linux" "mennos-desktop" false;
-      "mennos-laptop" = mkHomeConfig "x86_64-linux" "mennos-laptop" false;
-    };
+    homeConfigurations =
+      let
+        mkHomeConfig =
+          system: hostname: isServer:
+          home-manager.lib.homeManagerConfiguration {
+            pkgs = pkgsFor system;
+            modules = [ ./home.nix ];
+            extraSpecialArgs = {
+              pkgs = pkgsFor system;
+              inherit opnix isServer hostname;
+            };
+          };
+      in
+      {
+        "mennos-vps" = mkHomeConfig "aarch64-linux" "mennos-vps" true;
+        "mennos-desktop" = mkHomeConfig "x86_64-linux" "mennos-desktop" false;
+        "mennos-server" = mkHomeConfig "x86_64-linux" "mennos-server" true;
+        "mennos-rtlsdr-pc" = mkHomeConfig "x86_64-linux" "mennos-rtlsdr-pc" true;
+        "mennos-laptop" = mkHomeConfig "x86_64-linux" "mennos-laptop" false;
+        "mennos-desktopw" = mkHomeConfig "x86_64-linux" "mennos-desktopw" true;
+      };
   };
 }


@@ -1,43 +1,55 @@
 {
   config,
+  lib,
   isServer ? false,
   opnix,
   ...
 }:
 {
-  programs.home-manager.enable = true;
-
-  nixpkgs.config = {
-    allowUnfree = true;
-    allowUnfreePredicate = pkg: true;
-  };
-
-  imports =
-    [
-      opnix.homeManagerModules.default
-      ./config/default.nix
-      ./packages/common/default.nix
-    ]
-    ++ (
-      if isServer then
-        [
-          ./packages/server/default.nix
-          ./server/default.nix
-        ]
-      else
-        [
-          ./packages/workstation/default.nix
-          ./workstation/default.nix
-        ]
-    );
-
-  home = {
-    username = "menno";
-    homeDirectory = "/home/menno";
-    stateVersion = "25.05";
-    sessionVariables = {
-      PATH = "${config.home.homeDirectory}/go/bin:$PATH";
-    };
-  };
+  options = {
+    isServer = lib.mkOption {
+      type = lib.types.bool;
+      default = false;
+    };
+  };
+
+  imports = [
+    opnix.homeManagerModules.default
+    ./config/default.nix
+    ./packages/common/default.nix
+  ]
+  ++ (
+    if isServer then
+      [
+        ./packages/server/default.nix
+        ./server/default.nix
+      ]
+    else
+      [
+        ./packages/workstation/default.nix
+        ./workstation/default.nix
+      ]
+  );
+
+  config = {
+    isServer = isServer;
+
+    programs.home-manager.enable = true;
+
+    nixpkgs.config = {
+      allowUnfree = true;
+      allowUnfreePredicate = pkg: true;
+    };
+
+    home = {
+      username = "menno";
+      homeDirectory = "/home/menno";
+      stateVersion = "25.05";
+      sessionVariables = {
+        PATH = "${config.home.homeDirectory}/go/bin:$PATH";
+        DOTFILES_PATH = "${config.home.homeDirectory}/.dotfiles";
+      };
+    };
+  };
 }


@@ -166,6 +166,13 @@ validate_hostname() {
   return 0
 }
 
+is_wsl() {
+  if grep -qEi "(Microsoft|WSL)" /proc/version &> /dev/null; then
+    return 0
+  fi
+  return 1
+}
+
 update_home_manager_flake() {
   local hostname="$1"
   local isServer="$2"
@@ -290,7 +297,15 @@ prepare_hostname() {
   fi
 
   log_info "Setting hostname to $hostname..."
-  sudo hostnamectl set-hostname "$hostname" || die "Failed to set hostname"
+
+  # WSL doesn't support hostnamectl reliably, use /etc/hostname instead
+  if is_wsl; then
+    log_info "Detected WSL environment, using alternative hostname method..."
+    echo "$hostname" | sudo tee /etc/hostname > /dev/null || die "Failed to set hostname"
+    sudo hostname "$hostname" || log_warning "Failed to set hostname for current session (will take effect on restart)"
+  else
+    sudo hostnamectl set-hostname "$hostname" || die "Failed to set hostname"
+  fi
 
   echo "$hostname" > "$hostname_file" || die "Failed to save hostname"
   log_success "Hostname set successfully."
@@ -301,7 +316,14 @@ warning_prompt() {
   log_error "Please ensure you have a backup of your data before proceeding."
   log_error "This script will modify system files and may require sudo permissions."
   echo ""
-  log_info "This script has been tested on Ubuntu 22.04, 24.04, 24.10, Pop!_OS 24.04 Alpha 7, Debian 12, Fedora 41 and CachyOS."
+
+  if is_wsl; then
+    log_info "WSL environment detected."
+    log_info "This script has been tested on Ubuntu under WSL2."
+  else
+    log_info "This script has been tested on Ubuntu 22.04, 24.04, 24.10, Pop!_OS 24.04 Alpha 7, Debian 12, Fedora 41 and CachyOS."
+  fi
+
   log_info "Setup starts in 10 seconds, to abort use Ctrl+C to exit NOW."
   echo ""
   sleep 10
@@ -397,6 +419,11 @@ check_compatibility() {
   local distro
   distro=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')
 
+  # Special handling for WSL
+  if is_wsl; then
+    log_info "Running in WSL environment."
+  fi
+
   case "$distro" in
     Fedora*)
       log_success "Detected Fedora. Proceeding with setup..."
@@ -413,9 +440,11 @@ check_compatibility() {
       ;;
     Debian*)
       log_success "Detected Debian. Proceeding with setup..."
-      log_warning "Debian has known issues with ZFS kernel modules, you might need to manually install it to make ZFS work."
-      log_warning "Continuing in 5 seconds..."
-      sleep 5
+      if ! is_wsl; then
+        log_warning "Debian has known issues with ZFS kernel modules, you might need to manually install it to make ZFS work."
+        log_warning "Continuing in 5 seconds..."
+        sleep 5
+      fi
       check_command_availibility "apt"
       ;;
     Pop!_OS*)
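The `is_wsl` check the setup script introduces above works because WSL kernels advertise themselves in `/proc/version` (e.g. `...-microsoft-standard-WSL2`). A small Python sketch of the same test, with the version string passed as a parameter so it can be exercised on any machine:

```python
import re

def is_wsl(proc_version_text: str) -> bool:
    # Same case-insensitive Microsoft/WSL search as the shell grep above;
    # the real script reads /proc/version directly.
    return re.search(r"Microsoft|WSL", proc_version_text, re.IGNORECASE) is not None

print(is_wsl("Linux version 5.15.90.1-microsoft-standard-WSL2 (gcc ...)"))  # True
print(is_wsl("Linux version 6.8.0-45-generic (buildd@lcy02-amd64-032)"))    # False
```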