Compare commits

...

28 Commits

Author SHA1 Message Date
fd6e7d7a86 Update flake.lock
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 41s
Python Lint Check / check-python (push) Successful in 7s
2025-10-30 16:22:07 +01:00
b23536ecc7 chore: adds discord and gitnuro flatpaks
Some checks failed
Ansible Lint Check / check-ansible (push) Has been cancelled
Nix Format Check / check-format (push) Has been cancelled
Python Lint Check / check-python (push) Has been cancelled
2025-10-30 16:22:03 +01:00
14e9c8d51c chore: remove old stuff
Some checks failed
Ansible Lint Check / check-ansible (push) Successful in 7s
Python Lint Check / check-python (push) Has been cancelled
Nix Format Check / check-format (push) Has been cancelled
2025-10-30 16:21:17 +01:00
c1c98fa007 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-28 08:36:44 +01:00
9c6e6fdf47 Add Vicinae installation and assets Ansible task
Include Vicinae setup in workstation playbook for non-WSL2 systems

Update flake.lock to newer nixpkgs revision
2025-10-28 08:36:26 +01:00
a11376fe96 Add monitoring countries to allowed_countries_codes list
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 4s
Nix Format Check / check-format (push) Successful in 41s
Python Lint Check / check-python (push) Successful in 7s
2025-10-26 00:24:17 +00:00
e14dd1d224 Add EU and trusted country lists for Caddy access control
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 7s
Nix Format Check / check-format (push) Successful in 54s
Python Lint Check / check-python (push) Successful in 21s
Define separate lists for EU and trusted countries in group vars. Update
Caddyfile template to support EU, trusted, and combined allow lists.
Switch Sathub domains to use combined country allow list.
2025-10-26 00:21:27 +00:00
5353981555 Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 42s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 00:09:31 +00:00
f9ce652dfc flake lock
Signed-off-by: Menno van Leeuwen <menno@vleeuwen.me>
2025-10-26 00:09:15 +00:00
fe9dbca2db Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 5s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 02:08:31 +02:00
987166420a Merge branch 'master' of git.mvl.sh:vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 7s
Nix Format Check / check-format (push) Successful in 43s
Python Lint Check / check-python (push) Successful in 8s
2025-10-26 00:06:13 +00:00
8ba47c2ebf Fix indentation in server.yml and add necesse service
Add become: true to JuiceFS stop/start tasks in redis.yml
2025-10-26 00:04:51 +00:00
8bfd8395f5 Add Discord environment variables and update data volumes paths 2025-10-26 00:04:41 +00:00
f0b15f77a1 Update nixpkgs input to latest commit 2025-10-26 00:04:19 +00:00
461d251356 Add Ansible role to deploy Necesse server with Docker 2025-10-26 00:04:14 +00:00
e57e9ee67c chore: update country allow list and add European allow option 2025-10-26 02:02:46 +02:00
f67b16f593 update flake locvk 2025-10-26 02:02:28 +02:00
5edd7c413e Update bash.nix to improve WSL Windows alias handling 2025-10-26 02:02:21 +02:00
cfc1188b5f Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 9s
2025-10-23 13:43:38 +02:00
e2701dcdf4 Set executable permission for equibop.desktop and update bash.nix
Add BUN_INSTALL env var and include Bun bin in PATH
2025-10-23 13:43:26 +02:00
11af7f16e5 Set formatter to prettier and update format_on_save option 2025-10-23 13:38:16 +02:00
310fb92ec9 Add WSL aliases for Windows SSH and Zed
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 51s
Python Lint Check / check-python (push) Successful in 15s
2025-10-23 04:20:15 +02:00
fb1661386b chore: add Bun install path and prepend to PATH
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 6s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 8s
2025-10-22 17:57:12 +02:00
e1b07a6edf Add WSL support and fix config formatting
All checks were successful
Ansible Lint Check / check-ansible (push) Successful in 1m17s
Nix Format Check / check-format (push) Successful in 44s
Python Lint Check / check-python (push) Successful in 9s
2025-10-22 16:18:08 +02:00
f6a3f6d379 Merge branch 'master' of ssh://git.mvl.sh/vleeuwenmenno/dotfiles 2025-10-21 10:06:20 +02:00
77424506d6 Update Nextcloud config and flake.lock dependencies
Some checks failed
Ansible Lint Check / check-ansible (push) Failing after 0s
Nix Format Check / check-format (push) Failing after 0s
Python Lint Check / check-python (push) Failing after 0s
2025-10-20 11:27:21 +02:00
1856b2fb9e adds fastmail app as flatpak 2025-10-20 11:27:00 +02:00
436deb267e Add smart alias configuration for rtlsdr 2025-10-08 13:01:37 +02:00
26 changed files with 869 additions and 290 deletions

View File

@@ -41,15 +41,6 @@ Run the `dotf update` command, although the setup script did most of the work so
 dotf update
 ```
-### 5. Decrypt secrets
-Either using 1Password or by manualling providing the decryption key you should decrypt the secrets.
-Various configurations depend on the secrets to be decrypted such as the SSH keys, yubikey pam configuration and more.
-```bash
-dotf secrets decrypt
-```
 ### 6. Profit
 You should now have a fully setup system with all the configurations applied.

View File

@@ -2,30 +2,81 @@
 flatpaks: false
 install_ui_apps: false
+# European countries for EU-specific access control
+eu_countries_codes:
+  - AL # Albania
+  - AD # Andorra
+  - AM # Armenia
+  - AT # Austria
+  - AZ # Azerbaijan
+  # - BY # Belarus (Belarus is disabled due to geopolitical reasons)
+  - BE # Belgium
+  - BA # Bosnia and Herzegovina
+  - BG # Bulgaria
+  - HR # Croatia
+  - CY # Cyprus
+  - CZ # Czech Republic
+  - DK # Denmark
+  - EE # Estonia
+  - FI # Finland
+  - FR # France
+  - GE # Georgia
+  - DE # Germany
+  - GR # Greece
+  - HU # Hungary
+  - IS # Iceland
+  - IE # Ireland
+  - IT # Italy
+  - XK # Kosovo
+  - LV # Latvia
+  - LI # Liechtenstein
+  - LT # Lithuania
+  - LU # Luxembourg
+  - MK # North Macedonia
+  - MT # Malta
+  - MD # Moldova
+  - MC # Monaco
+  - ME # Montenegro
+  - NL # Netherlands
+  - NO # Norway
+  - PL # Poland
+  - PT # Portugal
+  - RO # Romania
+  # - RU # Russia (Russia is disabled due to geopolitical reasons)
+  - SM # San Marino
+  - RS # Serbia
+  - SK # Slovakia
+  - SI # Slovenia
+  - ES # Spain
+  - SE # Sweden
+  - CH # Switzerland
+  - TR # Turkey
+  - UA # Ukraine
+  - GB # United Kingdom
+  - VA # Vatican City
+# Trusted non-EU countries for extended access control
+trusted_countries_codes:
+  - US # United States
+  - AU # Australia
+  - NZ # New Zealand
+  - JP # Japan
 # Countries that are allowed to access the server Caddy reverse proxy
 allowed_countries_codes:
   - US # United States
-  - CA # Canada
   - GB # United Kingdom
   - DE # Germany
   - FR # France
-  - ES # Spain
   - IT # Italy
   - NL # Netherlands
-  - AU # Australia
-  - NZ # New Zealand
   - JP # Japan
   - KR # South Korea
-  - SK # Slovakia
-  - FI # Finland
-  - DK # Denmark
-  - SG # Singapore
-  - AT # Austria
   - CH # Switzerland
+  - AU # Australia (Added for UpDown.io to monitor server uptime)
+  - CA # Canada (Added for UpDown.io to monitor server uptime)
+  - FI # Finland (Added for UpDown.io to monitor server uptime)
+  - SG # Singapore (Added for UpDown.io to monitor server uptime)
-# IP ranges for blocked countries (generated automatically)
-# This will be populated by the country blocking script
-blocked_countries: []
 # Enable/disable country blocking globally
 enable_country_blocking: true

View File

@@ -5,4 +5,7 @@ mennos-desktop ansible_connection=local
 [servers]
 mennos-vps ansible_connection=local
 mennos-server ansible_connection=local
 mennos-rtlsdr-pc ansible_connection=local
+
+[wsl]
+mennos-desktopw ansible_connection=local

View File

@@ -2,18 +2,18 @@
 - name: Configure all hosts
   hosts: all
   handlers:
     - name: Import handler tasks
       ansible.builtin.import_tasks: handlers/main.yml
   gather_facts: true
   tasks:
     - name: Include global tasks
       ansible.builtin.import_tasks: tasks/global/global.yml
     - name: Include workstation tasks
       ansible.builtin.import_tasks: tasks/workstations/workstation.yml
       when: inventory_hostname in ['mennos-laptop', 'mennos-desktop']
     - name: Include server tasks
       ansible.builtin.import_tasks: tasks/servers/server.yml
-      when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc']
+      when: inventory_hostname in ['mennos-vps', 'mennos-server', 'mennos-rtlsdr-pc', 'mennos-desktopw']

View File

@@ -28,6 +28,12 @@ smart_aliases:
     check_host: "192.168.1.253"
     timeout: "2s"
+  rtlsdr:
+    primary: "rtlsdr-local"
+    fallback: "rtlsdr"
+    check_host: "192.168.1.252"
+    timeout: "2s"
 # Background SSH Tunnel Definitions
 tunnels:
   # Example: Desktop database tunnel
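A smart alias entry like `rtlsdr` above drives a simple decision: probe `check_host` within `timeout`, connect via `primary` when reachable, via `fallback` otherwise. A minimal sketch of that selection logic (the probe itself is abstracted into a boolean so the logic stays testable; field names follow the YAML, everything else is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// SmartAlias mirrors one entry of the smart_aliases map above.
type SmartAlias struct {
	Primary   string        // SSH config host to use when check_host is reachable
	Fallback  string        // SSH config host to use otherwise
	CheckHost string        // IP probed for reachability
	Timeout   time.Duration // probe deadline
}

// resolve picks the SSH host from the result of a reachability probe
// (in the real wrapper, a ping to CheckHost bounded by Timeout).
func resolve(a SmartAlias, reachable bool) string {
	if reachable {
		return a.Primary
	}
	return a.Fallback
}

func main() {
	a := SmartAlias{
		Primary:   "rtlsdr-local",
		Fallback:  "rtlsdr",
		CheckHost: "192.168.1.252",
		Timeout:   2 * time.Second,
	}
	fmt.Println(resolve(a, true))  // on the LAN: rtlsdr-local
	fmt.Println(resolve(a, false)) // away from home: rtlsdr
}
```

Separating the probe from the selection keeps the fallback behavior deterministic regardless of how connectivity is actually tested.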

View File

@@ -30,10 +30,10 @@ type LoggingConfig struct {
 // SmartAlias represents a smart SSH alias configuration
 type SmartAlias struct {
 	Primary   string `yaml:"primary"`    // SSH config host to use when local
 	Fallback  string `yaml:"fallback"`   // SSH config host to use when remote
 	CheckHost string `yaml:"check_host"` // IP to ping for connectivity test
 	Timeout   string `yaml:"timeout"`    // Ping timeout (default: "2s")
 }

 // TunnelDefinition represents a tunnel configuration
@@ -47,36 +47,39 @@ type TunnelDefinition struct {
 // TunnelState represents runtime state of an active tunnel
 type TunnelState struct {
 	Name            string    `json:"name"`
 	Source          string    `json:"source"` // "config" or "adhoc"
 	Type            string    `json:"type"`   // local, remote, dynamic
 	LocalPort       int       `json:"local_port"`
 	RemoteHost      string    `json:"remote_host"`
 	RemotePort      int       `json:"remote_port"`
 	SSHHost         string    `json:"ssh_host"`
 	SSHHostResolved string    `json:"ssh_host_resolved"` // After smart alias resolution
 	PID             int       `json:"pid"`
 	Status          string    `json:"status"`
 	StartedAt       time.Time `json:"started_at"`
 	LastSeen        time.Time `json:"last_seen"`
 	CommandLine     string    `json:"command_line"`
 }

 // Config represents the YAML configuration structure
 type Config struct {
 	Logging      LoggingConfig               `yaml:"logging"`
 	SmartAliases map[string]SmartAlias       `yaml:"smart_aliases"`
 	Tunnels      map[string]TunnelDefinition `yaml:"tunnels"`
 }

 const (
-	realSSHPath = "/usr/bin/ssh"
+	defaultSSHPath = "/usr/bin/ssh"
+	wslSSHPath     = "ssh.exe"
+	wslDetectPath  = "/mnt/c/Windows/System32/cmd.exe"
 )

 var (
 	configDir  string
 	tunnelsDir string
 	config     *Config
+	sshPath    string // Will be set based on WSL2 detection

 	// Global flags
 	tunnelMode bool
@@ -92,10 +95,10 @@
 )

 var rootCmd = &cobra.Command{
 	Use:                "ssh",
 	Short:              "Smart SSH utility with tunnel management",
 	Long:               "A transparent SSH wrapper that provides smart alias resolution and background tunnel management",
 	Run:                handleSSH,
 	DisableFlagParsing: true,
 }
@@ -103,13 +106,16 @@ var tunnelCmd = &cobra.Command{
 	Use:   "tunnel [tunnel-name]",
 	Short: "Manage background SSH tunnels",
 	Long:  "Create, list, and manage persistent SSH tunnels in the background",
 	Run: func(cmd *cobra.Command, args []string) {
 		handleTunnelManual(append([]string{"--tunnel"}, args...))
 	},
 	Args: cobra.MaximumNArgs(1),
 }

 func init() {
+	// Detect and set SSH path based on environment (WSL2 vs native Linux)
+	sshPath = detectSSHPath()
+
 	// Initialize config directory
 	homeDir, err := os.UserHomeDir()
 	if err != nil {
@@ -141,6 +147,13 @@
 	// Initialize logging
 	initLogging(config.Logging)

+	// Log SSH path detection (after logging is initialized)
+	if isWSL2() {
+		log.Debug().Str("ssh_path", sshPath).Msg("WSL2 detected, using Windows SSH")
+	} else {
+		log.Debug().Str("ssh_path", sshPath).Msg("Native Linux environment, using Linux SSH")
+	}
+
 	// Global flags
 	rootCmd.PersistentFlags().BoolVarP(&tunnelMode, "tunnel", "T", false, "Enable tunnel mode")
 	rootCmd.Flags().BoolVarP(&tunnelOpen, "open", "O", false, "Open a tunnel")
@@ -169,6 +182,22 @@
 	}
 }

+// detectSSHPath determines the correct SSH binary path based on the environment
+func detectSSHPath() string {
+	if isWSL2() {
+		// In WSL2, use Windows SSH
+		return wslSSHPath
+	}
+	// Default to Linux SSH
+	return defaultSSHPath
+}
+
+// isWSL2 checks if we're running in WSL2 by looking for Windows System32
+func isWSL2() bool {
+	_, err := os.Stat(wslDetectPath)
+	return err == nil
+}
+
 func main() {
 	// Check if this is a tunnel command first
 	args := os.Args[1:]
@@ -563,7 +592,7 @@ func openTunnel(name string) error {
 	log.Debug().Strs("command", cmdArgs).Msg("Starting SSH tunnel")

 	// Start SSH process
-	cmd := exec.Command(realSSHPath, cmdArgs[1:]...)
+	cmd := exec.Command(sshPath, cmdArgs[1:]...)

 	// Capture stderr to see any SSH errors
 	var stderr bytes.Buffer
@@ -708,7 +737,9 @@ func createAdhocTunnel() (TunnelDefinition, error) {
 }

 func buildSSHCommand(tunnel TunnelDefinition, sshHost string) []string {
-	args := []string{"ssh", "-f", "-N"}
+	// Use the detected SSH path basename for the command
+	sshBinary := filepath.Base(sshPath)
+	args := []string{sshBinary, "-f", "-N"}

 	switch tunnel.Type {
 	case "local":
@@ -1056,18 +1087,37 @@ func findSSHProcessByPort(port int) int {
 // executeRealSSH executes the real SSH binary with given arguments
 func executeRealSSH(args []string) {
-	// Check if real SSH exists
-	if _, err := os.Stat(realSSHPath); os.IsNotExist(err) {
-		log.Error().Str("path", realSSHPath).Msg("Real SSH binary not found")
-		fmt.Fprintf(os.Stderr, "Error: Real SSH binary not found at %s\n", realSSHPath)
+	log.Debug().Str("ssh_path", sshPath).Strs("args", args).Msg("Executing real SSH")
+
+	// In WSL2, we need to use exec.Command instead of syscall.Exec for Windows binaries
+	if isWSL2() {
+		cmd := exec.Command(sshPath, args...)
+		cmd.Stdin = os.Stdin
+		cmd.Stdout = os.Stdout
+		cmd.Stderr = os.Stderr
+		err := cmd.Run()
+		if err != nil {
+			if exitErr, ok := err.(*exec.ExitError); ok {
+				os.Exit(exitErr.ExitCode())
+			}
+			log.Error().Err(err).Msg("Failed to execute SSH")
+			fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)
+			os.Exit(1)
+		}
+		os.Exit(0)
+	}
+
+	// For native Linux, check if SSH exists
+	if _, err := os.Stat(sshPath); os.IsNotExist(err) {
+		log.Error().Str("path", sshPath).Msg("Real SSH binary not found")
+		fmt.Fprintf(os.Stderr, "Error: Real SSH binary not found at %s\n", sshPath)
 		os.Exit(1)
 	}

-	log.Debug().Str("ssh_path", realSSHPath).Strs("args", args).Msg("Executing real SSH")
-
-	// Execute the real SSH binary
-	// Using syscall.Exec to replace current process (like exec in shell)
-	err := syscall.Exec(realSSHPath, append([]string{"ssh"}, args...), os.Environ())
+	// Execute the real SSH binary using syscall.Exec (Linux only)
+	// This replaces the current process (like exec in shell)
+	err := syscall.Exec(sshPath, append([]string{"ssh"}, args...), os.Environ())
 	if err != nil {
 		log.Error().Err(err).Msg("Failed to execute SSH")
 		fmt.Fprintf(os.Stderr, "Error executing SSH: %v\n", err)

View File

@@ -1,161 +1,165 @@
 ---
 - name: Server setup
   block:
     - name: Ensure openssh-server is installed on Arch-based systems
       ansible.builtin.package:
         name: openssh
         state: present
       when: ansible_pkg_mgr == 'pacman'
     - name: Ensure openssh-server is installed on non-Arch systems
       ansible.builtin.package:
         name: openssh-server
         state: present
       when: ansible_pkg_mgr != 'pacman'
     - name: Ensure Borg is installed on Arch-based systems
       ansible.builtin.package:
         name: borg
         state: present
       become: true
       when: ansible_pkg_mgr == 'pacman'
     - name: Ensure Borg is installed on Debian/Ubuntu systems
       ansible.builtin.package:
         name: borgbackup
         state: present
       become: true
       when: ansible_pkg_mgr != 'pacman'
     - name: Include JuiceFS tasks
       ansible.builtin.include_tasks: juicefs.yml
       tags:
         - juicefs
     - name: Include Dynamic DNS tasks
       ansible.builtin.include_tasks: dynamic-dns.yml
       tags:
         - dynamic-dns
     - name: Include Borg Backup tasks
       ansible.builtin.include_tasks: borg-backup.yml
       tags:
         - borg-backup
     - name: Include Borg Local Sync tasks
       ansible.builtin.include_tasks: borg-local-sync.yml
       tags:
         - borg-local-sync
     - name: System performance optimizations
       ansible.posix.sysctl:
         name: "{{ item.name }}"
         value: "{{ item.value }}"
         state: present
         reload: true
       become: true
       loop:
         - { name: "fs.file-max", value: "2097152" } # Max open files for the entire system
         - { name: "vm.max_map_count", value: "16777216" } # Max memory map areas a process can have
         - { name: "vm.swappiness", value: "10" } # Controls how aggressively the kernel swaps out memory
         - { name: "vm.vfs_cache_pressure", value: "50" } # Controls kernel's tendency to reclaim memory for directory/inode caches
         - { name: "net.core.somaxconn", value: "65535" } # Max pending connections for a listening socket
         - { name: "net.core.netdev_max_backlog", value: "65535" } # Max packets queued on network interface input
         - { name: "net.ipv4.tcp_fin_timeout", value: "30" } # How long sockets stay in FIN-WAIT-2 state
         - { name: "net.ipv4.tcp_tw_reuse", value: "1" } # Allows reusing TIME_WAIT sockets for new outgoing connections
     - name: Include service tasks
       ansible.builtin.include_tasks: "services/{{ item.name }}/{{ item.name }}.yml"
       loop: "{{ services | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list if specific_service is not defined else services | selectattr('name', 'equalto', specific_service) | selectattr('enabled', 'equalto', true) | selectattr('hosts', 'contains', inventory_hostname) | list }}"
       loop_control:
         label: "{{ item.name }}"
       tags:
         - services
         - always
   vars:
     services:
       - name: dashy
         enabled: true
         hosts:
           - mennos-server
       - name: gitea
         enabled: true
         hosts:
           - mennos-server
       - name: factorio
         enabled: true
         hosts:
           - mennos-server
       - name: dozzle
         enabled: true
         hosts:
           - mennos-server
       - name: beszel
         enabled: true
         hosts:
           - mennos-server
       - name: caddy
         enabled: true
         hosts:
           - mennos-server
       - name: golink
         enabled: true
         hosts:
           - mennos-server
       - name: immich
         enabled: true
         hosts:
           - mennos-server
       - name: plex
         enabled: true
         hosts:
           - mennos-server
       - name: tautulli
         enabled: true
         hosts:
           - mennos-server
       - name: downloaders
         enabled: true
         hosts:
           - mennos-server
       - name: wireguard
         enabled: true
         hosts:
           - mennos-server
       - name: nextcloud
         enabled: true
         hosts:
           - mennos-server
       - name: cloudreve
         enabled: true
         hosts:
           - mennos-server
       - name: echoip
         enabled: true
         hosts:
           - mennos-server
       - name: arr-stack
         enabled: true
         hosts:
           - mennos-server
       - name: home-assistant
         enabled: true
         hosts:
           - mennos-server
       - name: privatebin
         enabled: true
         hosts:
           - mennos-server
       - name: unifi-network-application
         enabled: true
         hosts:
           - mennos-server
       - name: avorion
         enabled: false
         hosts:
           - mennos-server
       - name: sathub
         enabled: true
         hosts:
           - mennos-server
+      - name: necesse
+        enabled: true
+        hosts:
+          - mennos-server
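The `selectattr` chain in the service-tasks loop above reduces to: keep services that are enabled and that list the current host, optionally narrowed to one named service when `specific_service` is set. The same selection, sketched in Go (struct and function names are illustrative, not part of the repo):

```go
package main

import "fmt"

// Service mirrors one entry of the vars.services list above.
type Service struct {
	Name    string
	Enabled bool
	Hosts   []string
}

// selectServices keeps enabled services that target host; if only is
// non-empty, it additionally narrows the result to that single service,
// mirroring the specific_service branch of the Jinja expression.
func selectServices(all []Service, host, only string) []Service {
	var out []Service
	for _, s := range all {
		if only != "" && s.Name != only {
			continue
		}
		if !s.Enabled {
			continue
		}
		for _, h := range s.Hosts {
			if h == host {
				out = append(out, s)
				break
			}
		}
	}
	return out
}

func main() {
	services := []Service{
		{Name: "caddy", Enabled: true, Hosts: []string{"mennos-server"}},
		{Name: "avorion", Enabled: false, Hosts: []string{"mennos-server"}},
		{Name: "necesse", Enabled: true, Hosts: []string{"mennos-server"}},
	}
	for _, s := range selectServices(services, "mennos-server", "") {
		fmt.Println(s.Name) // caddy, then necesse; avorion is filtered out
	}
}
```

This is why flipping `enabled: false` (as on `avorion`) is enough to skip a service without deleting its definition.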

View File

@@ -5,9 +5,9 @@
 	}
 }

-# Country blocking snippet using MaxMind GeoLocation - reusable across all sites
+# Country allow list snippet using MaxMind GeoLocation - reusable across all sites
 {% if enable_country_blocking | default(false) and allowed_countries_codes | default([]) | length > 0 %}
-(country_block) {
+(country_allow) {
 	@allowed_local {
 		remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
 	}
@@ -23,56 +23,125 @@
 	respond @not_allowed_countries "Access denied" 403
 }
 {% else %}
-(country_block) {
-	# Country blocking disabled
+(country_allow) {
+	# Country allow list disabled
+}
+{% endif %}
+
+# European country allow list - allows all European countries only
+{% if eu_countries_codes | default([]) | length > 0 %}
+(eu_country_allow) {
+	@eu_allowed_local {
+		remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+	}
+	@eu_not_allowed_countries {
+		not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+		not {
+			maxmind_geolocation {
+				db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
+				allow_countries {{ eu_countries_codes | join(' ') }}
+			}
+		}
+	}
+	respond @eu_not_allowed_countries "Access denied" 403
+}
+{% else %}
+(eu_country_allow) {
+	# EU country allow list disabled
+}
+{% endif %}
+
+# Trusted country allow list - allows US, Australia, New Zealand, and Japan
+{% if trusted_countries_codes | default([]) | length > 0 %}
+(trusted_country_allow) {
+	@trusted_allowed_local {
+		remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+	}
+	@trusted_not_allowed_countries {
+		not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+		not {
+			maxmind_geolocation {
+				db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
+				allow_countries {{ trusted_countries_codes | join(' ') }}
+			}
+		}
+	}
+	respond @trusted_not_allowed_countries "Access denied" 403
+}
+{% else %}
+(trusted_country_allow) {
+	# Trusted country allow list disabled
+}
+{% endif %}
+
+# Sathub country allow list - combines EU and trusted countries
+{% if eu_countries_codes | default([]) | length > 0 and trusted_countries_codes | default([]) | length > 0 %}
+(sathub_country_allow) {
+	@sathub_allowed_local {
+		remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+	}
+	@sathub_not_allowed_countries {
+		not remote_ip 127.0.0.1 ::1 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 157.180.41.167 2a01:4f9:c013:1a13::1
+		not {
+			maxmind_geolocation {
+				db_path "/etc/caddy/geoip/GeoLite2-Country.mmdb"
+				allow_countries {{ (eu_countries_codes + trusted_countries_codes) | join(' ') }}
+			}
+		}
+	}
+	respond @sathub_not_allowed_countries "Access denied" 403
+}
+{% else %}
+(sathub_country_allow) {
+	# Sathub country allow list disabled
 }
 {% endif %}

 {% if inventory_hostname == 'mennos-server' %}
 git.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy gitea:3000
 	tls {{ caddy_email }}
 }

 git.vleeuwen.me {
-	import country_block
+	import country_allow
 	redir https://git.mvl.sh{uri}
 	tls {{ caddy_email }}
 }

 df.mvl.sh {
-	import country_block
+	import country_allow
 	redir / https://git.mvl.sh/vleeuwenmenno/dotfiles/raw/branch/master/setup.sh
 	tls {{ caddy_email }}
 }

 fsm.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy factorio-server-manager:80
 	tls {{ caddy_email }}
 }

 fsm.vleeuwen.me {
-	import country_block
+	import country_allow
 	redir https://fsm.mvl.sh{uri}
 	tls {{ caddy_email }}
 }

 beszel.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy beszel:8090
 	tls {{ caddy_email }}
 }

 beszel.vleeuwen.me {
-	import country_block
+	import country_allow
 	redir https://beszel.mvl.sh{uri}
 	tls {{ caddy_email }}
 }

 sathub.de {
-	import country_block
+	import sathub_country_allow
 	handle {
 		reverse_proxy sathub-frontend:4173
@@ -93,31 +162,31 @@ sathub.de {
 }

 api.sathub.de {
-	import country_block
+	import sathub_country_allow
 	reverse_proxy sathub-backend:4001
 	tls {{ caddy_email }}
 }

 sathub.nl {
-	import country_block
+	import sathub_country_allow
 	redir https://sathub.de{uri}
 	tls {{ caddy_email }}
 }

 photos.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy immich:2283
 	tls {{ caddy_email }}
 }

 photos.vleeuwen.me {
-	import country_block
+	import country_allow
 	redir https://photos.mvl.sh{uri}
 	tls {{ caddy_email }}
 }

 home.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy host.docker.internal:8123 {
 		header_up Host {upstream_hostport}
 		header_up X-Real-IP {http.request.remote.host}
@@ -126,7 +195,7 @@ home.mvl.sh {
 }

 home.vleeuwen.me {
-	import country_block
+	import country_allow
 	reverse_proxy host.docker.internal:8123 {
 		header_up Host {upstream_hostport}
 		header_up X-Real-IP {http.request.remote.host}
@@ -160,13 +229,13 @@ hotspot.mvl.sh:80 {
 }

 bin.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy privatebin:8080
 	tls {{ caddy_email }}
 }

 ip.mvl.sh ip.vleeuwen.me {
-	import country_block
+	import country_allow
 	reverse_proxy echoip:8080 {
 		header_up X-Real-IP {http.request.remote.host}
 	}
@@ -174,26 +243,26 @@ ip.mvl.sh ip.vleeuwen.me {
 }

 http://ip.mvl.sh http://ip.vleeuwen.me {
-	import country_block
+	import country_allow
 	reverse_proxy echoip:8080 {
 		header_up X-Real-IP {http.request.remote.host}
 	}
 }

 overseerr.mvl.sh {
-	import country_block
+	import country_allow
 	reverse_proxy overseerr:5055
tls {{ caddy_email }} tls {{ caddy_email }}
} }
overseerr.vleeuwen.me { overseerr.vleeuwen.me {
import country_block import country_allow
redir https://overseerr.mvl.sh{uri} redir https://overseerr.mvl.sh{uri}
tls {{ caddy_email }} tls {{ caddy_email }}
} }
plex.mvl.sh { plex.mvl.sh {
import country_block import country_allow
reverse_proxy host.docker.internal:32400 { reverse_proxy host.docker.internal:32400 {
header_up Host {upstream_hostport} header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host} header_up X-Real-IP {http.request.remote.host}
@@ -202,13 +271,13 @@ plex.mvl.sh {
} }
plex.vleeuwen.me { plex.vleeuwen.me {
import country_block import country_allow
redir https://plex.mvl.sh{uri} redir https://plex.mvl.sh{uri}
tls {{ caddy_email }} tls {{ caddy_email }}
} }
tautulli.mvl.sh { tautulli.mvl.sh {
import country_block import country_allow
reverse_proxy host.docker.internal:8181 { reverse_proxy host.docker.internal:8181 {
header_up Host {upstream_hostport} header_up Host {upstream_hostport}
header_up X-Real-IP {http.request.remote.host} header_up X-Real-IP {http.request.remote.host}
@@ -217,13 +286,13 @@ tautulli.mvl.sh {
} }
tautulli.vleeuwen.me { tautulli.vleeuwen.me {
import country_block import country_allow
redir https://tautulli.mvl.sh{uri} redir https://tautulli.mvl.sh{uri}
tls {{ caddy_email }} tls {{ caddy_email }}
} }
cloud.mvl.sh { cloud.mvl.sh {
import country_block import country_allow
reverse_proxy cloudreve:5212 { reverse_proxy cloudreve:5212 {
header_up Host {host} header_up Host {host}
header_up X-Real-IP {http.request.remote.host} header_up X-Real-IP {http.request.remote.host}
@@ -232,13 +301,13 @@ cloud.mvl.sh {
} }
cloud.vleeuwen.me { cloud.vleeuwen.me {
import country_block import country_allow
redir https://cloud.mvl.sh{uri} redir https://cloud.mvl.sh{uri}
tls {{ caddy_email }} tls {{ caddy_email }}
} }
collabora.mvl.sh { collabora.mvl.sh {
import country_block import country_allow
reverse_proxy collabora:9980 { reverse_proxy collabora:9980 {
header_up Host {host} header_up Host {host}
header_up X-Real-IP {http.request.remote.host} header_up X-Real-IP {http.request.remote.host}
@@ -247,7 +316,7 @@ collabora.mvl.sh {
} }
drive.mvl.sh drive.vleeuwen.me { drive.mvl.sh drive.vleeuwen.me {
import country_block import country_allow
# CalDAV and CardDAV redirects # CalDAV and CardDAV redirects
redir /.well-known/carddav /remote.php/dav/ 301 redir /.well-known/carddav /remote.php/dav/ 301
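The `country_allow` and `sathub_country_allow` imports above reference snippets defined elsewhere in the template; their definitions are not part of this diff. As a hedged sketch of what such a snippet can look like using the third-party caddy-maxmind-geolocation matcher (the database path is an illustrative assumption; `allowed_countries_codes` is the group var named in the commit messages):

```caddyfile
(country_allow) {
	# Illustrative only: requires Caddy built with the
	# caddy-maxmind-geolocation plugin and a GeoLite2 database.
	@blocked not maxmind_geolocation {
		db_path "/etc/caddy/GeoLite2-Country.mmdb"
		allow_countries {{ allowed_countries_codes | join(' ') }}
	}
	abort @blocked
}
```

A site block then opts in with a single `import country_allow` line, which is what the diff above switches every vhost to.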

View File

@@ -0,0 +1,15 @@
services:
  necesse:
    image: brammys/necesse-server
    container_name: necesse
    restart: unless-stopped
    ports:
      - "14159:14159/udp"
    environment:
      - MOTD=StarDebris' Server!
      - PASSWORD=2142
      - SLOTS=4
      - PAUSE=1
    volumes:
      - {{ necesse_data_dir }}/saves:/necesse/saves
      - {{ necesse_data_dir }}/logs:/necesse/logs

View File

@@ -0,0 +1,41 @@
---
- name: Deploy Necesse service
  block:
    - name: Set Necesse directories
      ansible.builtin.set_fact:
        necesse_service_dir: "{{ ansible_env.HOME }}/.services/necesse"
        necesse_data_dir: "/mnt/services/necesse"

    - name: Create Necesse service directory
      ansible.builtin.file:
        path: "{{ necesse_service_dir }}"
        state: directory
        mode: "0755"

    - name: Create Necesse data directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - "{{ necesse_data_dir }}"
        - "{{ necesse_data_dir }}/saves"
        - "{{ necesse_data_dir }}/logs"

    - name: Deploy Necesse docker-compose.yml
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: "{{ necesse_service_dir }}/docker-compose.yml"
        mode: "0644"
      register: necesse_compose

    - name: Stop Necesse service
      ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" down --remove-orphans
      when: necesse_compose.changed

    - name: Start Necesse service
      ansible.builtin.command: docker compose -f "{{ necesse_service_dir }}/docker-compose.yml" up -d
      when: necesse_compose.changed
  tags:
    - services
    - necesse
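The stop/start pair above only runs when the templated compose file actually changed; Ansible's `template` module reports this through the registered result's `changed` flag by comparing rendered content against what is on disk. The underlying idea can be sketched in Python (a hypothetical helper, not part of the repo; Ansible does the equivalent internally):

```python
import hashlib


def compose_changed(on_disk: str, rendered: str) -> bool:
    """Return True when the newly rendered compose file differs from disk."""
    digest = lambda s: hashlib.sha256(s.encode("utf-8")).hexdigest()
    return digest(on_disk) != digest(rendered)


# Only bounce the service when the file content actually changed.
if compose_changed("image: brammys/necesse-server", "image: brammys/necesse-server:latest"):
    print("restart required")
```

This is why the `down`/`up -d` tasks are safe to gate on `necesse_compose.changed`: an unchanged template skips the restart entirely.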

View File

@@ -34,6 +34,7 @@
     register: juicefs_stop
     changed_when: juicefs_stop.changed
     when: redis_compose.changed and juicefs_service_stat.stat.exists
+    become: true

 - name: List containers that are running
   ansible.builtin.command: docker ps -q
@@ -68,6 +69,7 @@
     register: juicefs_start
     changed_when: juicefs_start.changed
     when: juicefs_service_stat.stat.exists
+    become: true

 - name: Restart containers that were stopped
   ansible.builtin.command: docker start {{ item }}

View File

@@ -45,3 +45,9 @@ CORS_ALLOWED_ORIGINS=https://sathub.de,https://sathub.nl,https://api.sathub.de
 # Frontend configuration (optional - defaults are provided)
 VITE_API_BASE_URL=https://api.sathub.de
 VITE_ALLOWED_HOSTS=sathub.de,sathub.nl
+
+# Discord related messaging
+DISCORD_CLIENT_ID={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_ID') }}
+DISCORD_CLIENT_SECRET={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_CLIENT_SECRET') }}
+DISCORD_REDIRECT_URI={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_REDIRECT_URL') }}
+DISCORD_WEBHOOK_URL={{ lookup('community.general.onepassword', 'sathub', vault='Dotfiles', field='DISCORD_WEBHOOK_URL') }}

View File

@@ -62,6 +62,12 @@ services:
       - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
       - MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
       - MINIO_EXTERNAL_URL=https://obj.sathub.de
+      # Discord settings
+      - DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
+      - DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
+      - DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
+      - DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
     networks:
       - sathub
       - caddy_network
@@ -98,6 +104,12 @@ services:
       - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
       - MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
       - MINIO_EXTERNAL_URL=https://obj.sathub.de
+      # Discord settings
+      - DISCORD_CLIENT_ID=${DISCORD_CLIENT_ID}
+      - DISCORD_CLIENT_SECRET=${DISCORD_CLIENT_SECRET}
+      - DISCORD_REDIRECT_URI=${DISCORD_REDIRECT_URI}
+      - DISCORD_WEBHOOK_URL=${DISCORD_WEBHOOK_URL}
     networks:
       - sathub
     depends_on:
@@ -113,7 +125,7 @@ services:
       - POSTGRES_PASSWORD=${DB_PASSWORD}
       - POSTGRES_DB=${DB_NAME:-sathub}
     volumes:
-      - postgres_data:/var/lib/postgresql/data
+      - {{ sathub_data_dir }}/postgres_data:/var/lib/postgresql/data
     networks:
       - sathub
@@ -136,7 +148,7 @@ services:
       - MINIO_ROOT_USER=${MINIO_ROOT_USER}
       - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
     volumes:
-      - minio_data:/data
+      - {{ sathub_data_dir }}/minio_data:/data
     command: server /data --console-address :9001
     networks:
       - sathub
@@ -158,12 +170,6 @@ services:
     networks:
       - sathub
-
-volumes:
-  minio_data:
-    driver: local
-  postgres_data:
-    driver: local

 networks:
   sathub:
     driver: bridge

View File

@@ -41,18 +41,20 @@
     # Multimedia
     - com.plexamp.Plexamp
     - tv.plex.PlexDesktop
+    - com.spotify.Client

     # Messaging
     - com.rtosta.zapzap
     - org.telegram.desktop
     - org.signal.Signal
-    - com.spotify.Client
+    - com.discordapp.Discord

     # 3D Printing
     - com.bambulab.BambuStudio
     - io.mango3d.LycheeSlicer

     # Utilities
+    - com.fastmail.Fastmail
     - com.ranfdev.DistroShelf
     - io.missioncenter.MissionCenter
     - io.gitlab.elescoute.spacelaunch
@@ -73,6 +75,7 @@
     - io.github.bytezz.IPLookup
     - org.gaphor.Gaphor
     - io.dbeaver.DBeaverCommunity
+    - com.jetpackduba.Gitnuro

 - name: Define system desired Flatpak remotes
   ansible.builtin.set_fact:

View File

@@ -0,0 +1,175 @@
---
- name: Install Vicinae
  block:
    - name: Set Vicinae version
      ansible.builtin.set_fact:
        vicinae_version: "v0.15.6"
        vicinae_appimage_commit: "13865b4c5"

    - name: Set architecture-specific variables
      ansible.builtin.set_fact:
        vicinae_arch: "{{ 'x86_64' if ansible_architecture == 'x86_64' else ansible_architecture }}"

    - name: Ensure /opt/vicinae directory exists
      ansible.builtin.file:
        path: "/opt/vicinae"
        state: directory
        mode: "0755"
      become: true

    - name: Download Vicinae AppImage
      ansible.builtin.get_url:
        url: "https://github.com/vicinaehq/vicinae/releases/download/{{ vicinae_version }}/Vicinae-{{ vicinae_appimage_commit }}-{{ vicinae_arch }}.AppImage"
        dest: "/opt/vicinae/vicinae.AppImage"
        mode: "0755"
      become: true

    - name: Remove old Vicinae binary if exists
      ansible.builtin.file:
        path: "/usr/local/bin/vicinae"
        state: absent
      become: true

    - name: Create symlink to Vicinae AppImage
      ansible.builtin.file:
        src: "/opt/vicinae/vicinae.AppImage"
        dest: "/usr/local/bin/vicinae"
        state: link
      become: true

    - name: Create temporary directory for Vicinae assets download
      ansible.builtin.tempfile:
        state: directory
        suffix: vicinae
      register: vicinae_temp_dir

    - name: Download Vicinae tarball for assets
      ansible.builtin.get_url:
        url: "https://github.com/vicinaehq/vicinae/releases/download/{{ vicinae_version }}/vicinae-linux-{{ vicinae_arch }}-{{ vicinae_version }}.tar.gz"
        dest: "{{ vicinae_temp_dir.path }}/vicinae.tar.gz"
        mode: "0644"

    - name: Extract Vicinae tarball
      ansible.builtin.unarchive:
        src: "{{ vicinae_temp_dir.path }}/vicinae.tar.gz"
        dest: "{{ vicinae_temp_dir.path }}"
        remote_src: true

    - name: Ensure systemd user directory exists
      ansible.builtin.file:
        path: "/usr/lib/systemd/user"
        state: directory
        mode: "0755"
      become: true

    - name: Copy systemd user service
      ansible.builtin.copy:
        src: "{{ vicinae_temp_dir.path }}/lib/systemd/user/vicinae.service"
        dest: "/usr/lib/systemd/user/vicinae.service"
        mode: "0644"
        remote_src: true
      become: true

    - name: Update systemd service to use AppImage
      ansible.builtin.replace:
        path: "/usr/lib/systemd/user/vicinae.service"
        regexp: "ExecStart=.*"
        replace: "ExecStart=/usr/local/bin/vicinae"
      become: true

    - name: Ensure applications directory exists
      ansible.builtin.file:
        path: "/usr/share/applications"
        state: directory
        mode: "0755"
      become: true

    - name: Copy desktop files
      ansible.builtin.copy:
        src: "{{ vicinae_temp_dir.path }}/share/applications/{{ item }}"
        dest: "/usr/share/applications/{{ item }}"
        mode: "0644"
        remote_src: true
      become: true
      loop:
        - vicinae.desktop
        - vicinae-url-handler.desktop

    - name: Update desktop files to use AppImage
      ansible.builtin.replace:
        path: "/usr/share/applications/{{ item }}"
        regexp: "Exec=.*vicinae"
        replace: "Exec=/usr/local/bin/vicinae"
      become: true
      loop:
        - vicinae.desktop
        - vicinae-url-handler.desktop

    - name: Ensure Vicinae share directory exists
      ansible.builtin.file:
        path: "/usr/share/vicinae"
        state: directory
        mode: "0755"
      become: true

    - name: Copy Vicinae themes directory
      ansible.builtin.copy:
        src: "{{ vicinae_temp_dir.path }}/share/vicinae/themes/"
        dest: "/usr/share/vicinae/themes/"
        mode: "0644"
        remote_src: true
      become: true

    - name: Ensure hicolor icons directory exists
      ansible.builtin.file:
        path: "/usr/share/icons/hicolor/512x512/apps"
        state: directory
        mode: "0755"
      become: true

    - name: Copy Vicinae icon
      ansible.builtin.copy:
        src: "{{ vicinae_temp_dir.path }}/share/icons/hicolor/512x512/apps/vicinae.png"
        dest: "/usr/share/icons/hicolor/512x512/apps/vicinae.png"
        mode: "0644"
        remote_src: true
      become: true

    - name: Update desktop database
      ansible.builtin.command:
        cmd: update-desktop-database /usr/share/applications
      become: true
      changed_when: false

    - name: Update icon cache
      ansible.builtin.command:
        cmd: gtk-update-icon-cache /usr/share/icons/hicolor
      become: true
      changed_when: false
      failed_when: false

    - name: Clean up temporary directory
      ansible.builtin.file:
        path: "{{ vicinae_temp_dir.path }}"
        state: absent

    - name: Verify Vicinae installation
      ansible.builtin.command:
        cmd: /usr/local/bin/vicinae --version
      register: vicinae_version_check
      changed_when: false
      failed_when: false

    - name: Display installation result
      ansible.builtin.debug:
        msg: |
          {% if vicinae_version_check.rc == 0 %}
          ✓ Vicinae AppImage installed successfully with all themes and assets!
          Version: {{ vicinae_version_check.stdout }}
          {% else %}
          ✗ Vicinae installation completed but version check failed.
          This may be normal if --version flag is not supported.
          Try running: vicinae
          {% endif %}
  tags:
    - vicinae

View File

@@ -42,6 +42,10 @@
   ansible.builtin.import_tasks: tasks/workstations/autostart.yml
   when: "'microsoft-standard-WSL2' not in ansible_kernel"

+- name: Include Vicinae tasks
+  ansible.builtin.import_tasks: tasks/workstations/vicinae.yml
+  when: "'microsoft-standard-WSL2' not in ansible_kernel"

 - name: Ensure workstation common packages are installed
   ansible.builtin.package:
     name:

View File

@@ -10,6 +10,7 @@
   // #############################################
   // ##                 Theming                 ##
   // #############################################
+  "formatter": "prettier",

   "context_servers": {
     "mcp-server-context7": {
       "source": "extension",
@@ -96,7 +97,7 @@
   "hide_mouse": "on_typing",
   "on_last_window_closed": "quit_app",
   "ensure_final_newline_on_save": true,
-  "format_on_save": "prettier",
+  "format_on_save": "on",
   "tab_size": 2,
   "inlay_hints": {
     "enabled": true,

View File

@@ -26,6 +26,7 @@ def main():
         printfe("red", f"Error reading help file: {e}")
         return 1

+    print(help_text)
     println(" ", "cyan")
     return 0

View File

@@ -5,10 +5,12 @@ import signal
 import subprocess
 import sys

 def signal_handler(sig, frame):
-    print('Exiting.')
+    print("Exiting.")
     sys.exit(0)

 signal.signal(signal.SIGINT, signal_handler)

 # Script constants
@@ -22,43 +24,54 @@ from helpers.functions import printfe, ensure_dependencies
 ensure_dependencies()

 def run_script(script_path, args):
     """Run an action script with the given arguments"""
     if not os.path.isfile(script_path) or not os.access(script_path, os.X_OK):
         printfe("red", f"Error: Script not found or not executable: {script_path}")
         return 1

-    result = subprocess.run([script_path] + args, env={**os.environ, "DOTFILES_PATH": DOTFILES_PATH})
+    result = subprocess.run(
+        [script_path] + args, env={**os.environ, "DOTFILES_PATH": DOTFILES_PATH}
+    )
     return result.returncode

 def update(args):
     """Run the update action"""
     return run_script(f"{DOTFILES_BIN}/actions/update.py", args)

 def hello(args):
     """Run the hello action"""
     return run_script(f"{DOTFILES_BIN}/actions/hello.py", args)

 def help(args):
     """Run the help action"""
     return run_script(f"{DOTFILES_BIN}/actions/help.py", args)

 def service(args):
     """Run the service/docker action"""
     return run_script(f"{DOTFILES_BIN}/actions/service.py", args)

 def lint(args):
     """Run the lint action"""
     return run_script(f"{DOTFILES_BIN}/actions/lint.py", args)

 def timers(args):
     """Run the timers action"""
     return run_script(f"{DOTFILES_BIN}/actions/timers.py", args)

 def source(args):
     """Run the source action"""
     return run_script(f"{DOTFILES_BIN}/actions/source.py", args)

 def ensure_git_hooks():
     """Ensure git hooks are correctly set up"""
     hooks_dir = os.path.join(DOTFILES_ROOT, ".git/hooks")
@@ -66,14 +79,19 @@ def ensure_git_hooks():
     # Validate target directory exists
     if not os.path.isdir(target_link):
-        printfe("red", f"Error: Git hooks source directory does not exist: {target_link}")
+        printfe(
+            "red", f"Error: Git hooks source directory does not exist: {target_link}"
+        )
         return 1

     # Handle existing symlink
     if os.path.islink(hooks_dir):
         current_link = os.readlink(hooks_dir)
         if current_link != target_link:
-            printfe("yellow", "Incorrect git hooks symlink found. Removing and recreating...")
+            printfe(
+                "yellow",
+                "Incorrect git hooks symlink found. Removing and recreating...",
+            )
             os.remove(hooks_dir)
         else:
             return 0
@@ -82,6 +100,7 @@ def ensure_git_hooks():
     if os.path.isdir(hooks_dir) and not os.path.islink(hooks_dir):
         printfe("yellow", "Removing existing hooks directory...")
         import shutil
+
         shutil.rmtree(hooks_dir)

     # Create new symlink
@@ -93,6 +112,7 @@ def ensure_git_hooks():
         printfe("red", f"Failed to create git hooks symlink: {e}")
         return 1

 def main():
     # Ensure we're in the correct directory
     if not os.path.isdir(DOTFILES_ROOT):
@@ -114,13 +134,42 @@ def main():
         "service": service,
         "lint": lint,
         "timers": timers,
-        "source": source
+        "source": source,
     }

     if command in commands:
         return commands[command](args)
     else:
+        # For invalid commands, show error after logo
+        if command != "help":
+            from helpers.functions import logo
+
+            logo(continue_after=True)
+            print()
+            printfe("red", f"✗ Error: Unknown command '{command}'")
+
+            # Provide helpful hints for common mistakes
+            if command == "ls":
+                printfe("yellow", "  Hint: Did you mean 'dotf service ls'?")
+            elif command == "list":
+                printfe("yellow", "  Hint: Did you mean 'dotf service list'?")
+            print()
+
+            # Now print help text without logo
+            dotfiles_path = os.environ.get(
+                "DOTFILES_PATH", os.path.expanduser("~/.dotfiles")
+            )
+            try:
+                with open(
+                    f"{dotfiles_path}/bin/resources/help.txt", "r", encoding="utf-8"
+                ) as f:
+                    print(f.read())
+            except OSError as e:
+                printfe("red", f"Error reading help file: {e}")
+                return 1
+            return 1
+
         return help([])

 if __name__ == "__main__":
     sys.exit(main())
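The unknown-command branch added above builds on a plain dict dispatch. A minimal standalone sketch of the same pattern (handler and hint names are illustrative, not the repo's real actions):

```python
def service(args):
    """Stand-in for an action handler; the real one shells out to a script."""
    return "service " + " ".join(args)


COMMANDS = {"service": service}

# Known near-misses map to a suggested subcommand, like 'ls' in dotf.py.
HINTS = {"ls": "dotf service ls", "list": "dotf service list"}


def dispatch(command, args):
    """Run a known command, or return an error with a hint for near-misses."""
    if command in COMMANDS:
        return COMMANDS[command](args)
    hint = HINTS.get(command)
    if hint:
        return f"Unknown command '{command}'. Hint: Did you mean '{hint}'?"
    return f"Unknown command '{command}'."


print(dispatch("service", ["ls"]))  # service ls
print(dispatch("ls", []))
```

Keeping hints in a dict rather than an if/elif chain makes adding the next near-miss a one-line change.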

config/autostart/Nextcloud.desktop (Normal file → Executable file)
View File

config/autostart/equibop.desktop (Normal file → Executable file)
View File

View File

@@ -39,6 +39,7 @@
 export STARSHIP_ENABLE_RIGHT_PROMPT="true"
 export STARSHIP_ENABLE_BASH_COMPLETION="true"
 export XDG_DATA_DIRS="/usr/share:/var/lib/flatpak/exports/share:${config.home.homeDirectory}/.local/share/flatpak/exports/share"
+export BUN_INSTALL="$HOME/.bun"

 # Source .profile (If exists)
 if [ -f "${config.home.homeDirectory}/.profile" ]; then
@@ -81,6 +82,8 @@
 if [[ "$(uname -a)" == *"microsoft-standard-WSL2"* ]]; then
   [ -f "${config.home.homeDirectory}/.agent-bridge.sh" ] && source "${config.home.homeDirectory}/.agent-bridge.sh"
   alias winget='winget.exe'
+  alias ssh-add="ssh-add.exe"
+  alias git="git.exe"
 fi

 # Set SSH_AUTH_SOCK to 1Password agent if not already set
@@ -189,10 +192,6 @@
 # Kubernetes aliases
 "kubectl" = "minikube kubectl --";

-# Editor aliases
-"zeditor" = "${config.home.homeDirectory}/.local/bin/zed";
-"zed" = "${config.home.homeDirectory}/.local/bin/zed";

 # SSH alias
 "ssh" = "${config.home.homeDirectory}/.local/bin/smart-ssh";
@@ -212,6 +211,7 @@
 export PATH="$PATH:${config.home.homeDirectory}/.cargo/bin"
 export PATH="$PATH:${config.home.homeDirectory}/.dotfiles/bin"
 export PATH="/usr/bin:$PATH"
+export PATH="$BUN_INSTALL/bin:$PATH"

 # PKG_CONFIG_PATH
 if [ -d /usr/lib/pkgconfig ]; then
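The two Bun lines added above cooperate: the first defines the install root, and the second prepends its bin directory so Bun-installed binaries win over same-named system binaries. A standalone sketch of the resulting PATH wiring (paths match the real config):

```shell
# Mirror of the zshrc.nix change: define the install root, then prepend it.
export BUN_INSTALL="$HOME/.bun"
export PATH="$BUN_INSTALL/bin:$PATH"

# The bun bin directory now resolves first in lookup order.
echo "${PATH%%:*}"
```

Prepending (rather than appending, as the `.cargo/bin` and `.dotfiles/bin` entries do) is deliberate when a tool ships its own shims that must shadow system copies.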

View File

@@ -1,8 +1,80 @@
 [General]
 clientVersion=3.16.0-1 (Debian built)
+desktopEnterpriseChannel=daily
 isVfsEnabled=false
 launchOnSystemStartup=true
+optionalServerNotifications=true
+overrideLocalDir=
+overrideServerUrl=
 promptDeleteAllFiles=false
+showCallNotifications=true
+showChatNotifications=true

 [Accounts]
+0\Folders\1\ignoreHiddenFiles=false
+0\Folders\1\journalPath=.sync_42a4129584d0.db
+0\Folders\1\localPath=/home/menno/Nextcloud/
+0\Folders\1\paused=false
+0\Folders\1\targetPath=/
+0\Folders\1\version=2
+0\Folders\1\virtualFilesMode=off
+0\Folders\2\ignoreHiddenFiles=false
+0\Folders\2\journalPath=.sync_65a742b0aa83.db
+0\Folders\2\localPath=/home/menno/Desktop/
+0\Folders\2\paused=false
+0\Folders\2\targetPath=/Desktop
+0\Folders\2\version=2
+0\Folders\2\virtualFilesMode=off
+0\Folders\3\ignoreHiddenFiles=false
+0\Folders\3\journalPath=.sync_65289e64a490.db
+0\Folders\3\localPath=/home/menno/Documents/
+0\Folders\3\paused=false
+0\Folders\3\targetPath=/Documents
+0\Folders\3\version=2
+0\Folders\3\virtualFilesMode=off
+0\Folders\4\ignoreHiddenFiles=false
+0\Folders\4\journalPath=.sync_283a65eecb9c.db
+0\Folders\4\localPath=/home/menno/Music/
+0\Folders\4\paused=false
+0\Folders\4\targetPath=/Music
+0\Folders\4\version=2
+0\Folders\4\virtualFilesMode=off
+0\Folders\5\ignoreHiddenFiles=false
+0\Folders\5\journalPath=.sync_884042991bd6.db
+0\Folders\5\localPath=/home/menno/3D Objects/
+0\Folders\5\paused=false
+0\Folders\5\targetPath=/3D Objects
+0\Folders\5\version=2
+0\Folders\5\virtualFilesMode=off
+0\Folders\6\ignoreHiddenFiles=false
+0\Folders\6\journalPath=.sync_90ea5e3c7a33.db
+0\Folders\6\localPath=/home/menno/Videos/
+0\Folders\6\paused=false
+0\Folders\6\targetPath=/Videos
+0\Folders\6\version=2
+0\Folders\6\virtualFilesMode=off
+0\authType=webflow
+0\dav_user=menno
+0\displayName=Menno van Leeuwen
+0\encryptionCertificateSha256Fingerprint=@ByteArray()
+0\networkDownloadLimit=0
+0\networkDownloadLimitSetting=-2
+0\networkProxyHostName=
+0\networkProxyNeedsAuth=false
+0\networkProxyPort=0
+0\networkProxySetting=0
+0\networkProxyType=2
+0\networkProxyUser=
+0\networkUploadLimit=0
+0\networkUploadLimitSetting=-2
+0\serverColor=@Variant(\0\0\0\x43\x1\xff\xff\x1c\x1c$$<<\0\0)
+0\serverHasValidSubscription=false
+0\serverTextColor=@Variant(\0\0\0\x43\x1\xff\xff\xff\xff\xff\xff\xff\xff\0\0)
+0\serverVersion=32.0.0.13
+0\url=https://drive.mvl.sh
+0\version=13
+0\webflow_user=menno
 version=13
+
+[Settings]
+geometry=@ByteArray(\x1\xd9\xd0\xcb\0\x3\0\0\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W\0\0\0\x1\0\0\0\0\x14\0\0\0\0\0\0\0\x4\xe\0\0\x2\x37\0\0\x6W)

flake.lock (generated)
View File

@@ -41,11 +41,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1759994382,
-        "narHash": "sha256-wSK+3UkalDZRVHGCRikZ//CyZUJWDJkBDTQX1+G77Ow=",
+        "lastModified": 1761597516,
+        "narHash": "sha256-wxX7u6D2rpkJLWkZ2E932SIvDJW8+ON/0Yy8+a5vsDU=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "5da4a26309e796daa7ffca72df93dbe53b8164c7",
+        "rev": "daf6dc47aa4b44791372d6139ab7b25269184d55",
         "type": "github"
       },
       "original": {
@@ -77,11 +77,11 @@
       "nixpkgs": "nixpkgs_2"
     },
     "locked": {
-      "lastModified": 1751283143,
-      "narHash": "sha256-I3DMLT0qg5xxjS7BrmOBIK6pG+vZqOhKivEGnkDIli8=",
+      "lastModified": 1761503988,
+      "narHash": "sha256-MlMZXCTtPeXq/cDtJcL2XM8wCN33XOT9V2dB3PLV6f0=",
       "owner": "brizzbuzz",
       "repo": "opnix",
-      "rev": "1a807befe8f418da0df24c54b9633c395d840d0e",
+      "rev": "48fdb078b5a1cd0b20b501fccf6be2d1279d6fe6",
       "type": "github"
     },
     "original": {

View File

@@ -45,6 +45,7 @@
   "mennos-server" = mkHomeConfig "x86_64-linux" "mennos-server" true;
   "mennos-rtlsdr-pc" = mkHomeConfig "x86_64-linux" "mennos-rtlsdr-pc" true;
   "mennos-laptop" = mkHomeConfig "x86_64-linux" "mennos-laptop" false;
+  "mennos-desktopw" = mkHomeConfig "x86_64-linux" "mennos-desktopw" true;
 };
 };
 }

View File

@@ -166,6 +166,13 @@ validate_hostname() {
     return 0
 }

+is_wsl() {
+    if grep -qEi "(Microsoft|WSL)" /proc/version &> /dev/null; then
+        return 0
+    fi
+    return 1
+}
+
 update_home_manager_flake() {
     local hostname="$1"
     local isServer="$2"
@@ -290,7 +297,15 @@ prepare_hostname() {
     fi

     log_info "Setting hostname to $hostname..."
-    sudo hostnamectl set-hostname "$hostname" || die "Failed to set hostname"
+
+    # WSL doesn't support hostnamectl reliably, use /etc/hostname instead
+    if is_wsl; then
+        log_info "Detected WSL environment, using alternative hostname method..."
+        echo "$hostname" | sudo tee /etc/hostname > /dev/null || die "Failed to set hostname"
+        sudo hostname "$hostname" || log_warning "Failed to set hostname for current session (will take effect on restart)"
+    else
+        sudo hostnamectl set-hostname "$hostname" || die "Failed to set hostname"
+    fi

     echo "$hostname" > "$hostname_file" || die "Failed to save hostname"
     log_success "Hostname set successfully."
@@ -301,7 +316,14 @@ warning_prompt() {
     log_error "Please ensure you have a backup of your data before proceeding."
     log_error "This script will modify system files and may require sudo permissions."
     echo ""
-    log_info "This script has been tested on Ubuntu 22.04, 24.04, 24.10, Pop!_OS 24.04 Alpha 7, Debian 12, Fedora 41 and CachyOS."
+
+    if is_wsl; then
+        log_info "WSL environment detected."
+        log_info "This script has been tested on Ubuntu under WSL2."
+    else
+        log_info "This script has been tested on Ubuntu 22.04, 24.04, 24.10, Pop!_OS 24.04 Alpha 7, Debian 12, Fedora 41 and CachyOS."
+    fi
+
     log_info "Setup starts in 10 seconds, to abort use Ctrl+C to exit NOW."
     echo ""
     sleep 10
@@ -397,6 +419,11 @@ check_compatibility() {
     local distro
     distro=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')

+    # Special handling for WSL
+    if is_wsl; then
+        log_info "Running in WSL environment."
+    fi
+
     case "$distro" in
         Fedora*)
             log_success "Detected Fedora. Proceeding with setup..."
@@ -413,9 +440,11 @@ check_compatibility() {
             ;;
         Debian*)
             log_success "Detected Debian. Proceeding with setup..."
-            log_warning "Debian has known issues with ZFS kernel modules, you might need to manually install it to make ZFS work."
-            log_warning "Continueing in 5 seconds..."
-            sleep 5
+            if ! is_wsl; then
+                log_warning "Debian has known issues with ZFS kernel modules, you might need to manually install it to make ZFS work."
+                log_warning "Continuing in 5 seconds..."
+                sleep 5
+            fi
             check_command_availibility "apt"
             ;;
         Pop!_OS*)
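The `is_wsl` helper introduced above keys off `/proc/version`, where WSL kernels mention "Microsoft" or "WSL". The same check, written as a testable Python function that takes the version string as input (the sample strings below are illustrative):

```python
import re


def is_wsl(proc_version: str) -> bool:
    """Mirror of the shell helper: case-insensitive match on Microsoft/WSL."""
    return re.search(r"microsoft|wsl", proc_version, re.IGNORECASE) is not None


print(is_wsl("Linux version 5.15.153.1-microsoft-standard-WSL2"))  # True
```

Passing the string in, rather than reading `/proc/version` inside the function, keeps the detection logic trivially unit-testable.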