9 changes: 8 additions & 1 deletion bin/omarchy-snapshot
@@ -11,7 +11,7 @@ if [[ -z $COMMAND ]]; then
fi

if ! command -v snapper &>/dev/null; then
exit 127 # omarchy-update can use this to just ignore if snapper is not available
exit 127
fi

case "$COMMAND" in
@@ -29,6 +29,13 @@ create)
echo
;;
restore)
echo "⚠️ Snapshot restore will restore the ROOT filesystem only."
echo "⚠️ Your /home directory will NOT be affected."
echo ""
echo "If you need to restore /home:"
echo "1. Boot into the snapshot from the limine menu"
echo "2. Manually restore /home from its .snapshots subvolumes"
echo ""
sudo limine-snapper-restore
;;
esac
29 changes: 29 additions & 0 deletions install/config/boot-permissions-fix.sh
@@ -0,0 +1,29 @@
#!/bin/bash

# Fix /boot permissions security issue
# The random seed file and /boot mount should not be world accessible
# See: https://github.com/basecamp/omarchy/issues/5377

echo "Fixing /boot permissions for better security..."

# Fix /boot directory permissions (should be 700)
sudo chmod 700 /boot 2>/dev/null || echo "Could not change /boot permissions"

# Fix random-seed file permissions if it exists
if [[ -f /boot/loader/random-seed ]]; then
sudo chmod 600 /boot/loader/random-seed 2>/dev/null || echo "Could not change random-seed permissions"
fi

# Ensure /boot is mounted with proper permissions
# Add to fstab if not already present with correct options
if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
Copilot AI Apr 24, 2026

The fstab check is looking for lines that start with /boot, but in fstab the mountpoint is typically the second column (e.g., UUID=... /boot vfat ...). This will warn incorrectly on most systems. Match the mountpoint column instead (e.g., a regex for whitespace + /boot + whitespace).

Suggested change
if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
if ! grep -Eq '^[[:space:]]*[^#[:space:]][^[:space:]]*[[:space:]]+/boot([[:space:]]|$)' /etc/fstab 2>/dev/null; then

echo "Warning: /boot is not in fstab, permissions may not persist"
fi

# Disable bootctl random seed generation warnings by setting correct permissions
if command -v bootctl &>/dev/null; then
# Run bootctl with proper environment to set correct permissions
sudo bootctl random-seed 2>/dev/null || true
fi

echo "Boot permissions fix complete!"
Comment on lines +3 to +29
Copilot AI Apr 24, 2026

This installer script isn’t referenced by the install flow (no calls found from install/config/all.sh or elsewhere), so it won’t run during installation. Either wire it into the installer or remove it to avoid confusion about whether the fix is applied.

Suggested change
# Fix /boot permissions security issue
# The random seed file and /boot mount should not be world accessible
# See: https://github.com/basecamp/omarchy/issues/5377
echo "Fixing /boot permissions for better security..."
# Fix /boot directory permissions (should be 700)
sudo chmod 700 /boot 2>/dev/null || echo "Could not change /boot permissions"
# Fix random-seed file permissions if it exists
if [[ -f /boot/loader/random-seed ]]; then
sudo chmod 600 /boot/loader/random-seed 2>/dev/null || echo "Could not change random-seed permissions"
fi
# Ensure /boot is mounted with proper permissions
# Add to fstab if not already present with correct options
if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
echo "Warning: /boot is not in fstab, permissions may not persist"
fi
# Disable bootctl random seed generation warnings by setting correct permissions
if command -v bootctl &>/dev/null; then
# Run bootctl with proper environment to set correct permissions
sudo bootctl random-seed 2>/dev/null || true
fi
echo "Boot permissions fix complete!"
# Deprecated: this script is not invoked by the installation flow.
# Keep this file as an explicit no-op so the repository does not imply
# that the /boot permissions fix is automatically applied during install.
# See: https://github.com/basecamp/omarchy/issues/5377
echo "boot-permissions-fix.sh is not part of the installer flow and does not apply any changes."
echo "Wire this fix into the installer entrypoint before reintroducing remediation here."
exit 0

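The mountpoint-column concern raised in the fstab review comment above can be sanity-checked against a made-up entry (the UUIDs and mount options below are illustrative):

```shell
#!/bin/bash
# A typical fstab line mounts /boot by UUID, so "/boot" appears in the
# second (mountpoint) column, not at the start of the line.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
# /etc/fstab
UUID=ABCD-1234  /boot  vfat   umask=0077  0 2
UUID=0123-4567  /      btrfs  subvol=@    0 1
EOF

# The naive line-start pattern misses the entry entirely.
if grep -q '^/boot' "$fstab"; then echo "naive: found"; else echo "naive: missed"; fi

# Matching the mountpoint column (and skipping comment lines) finds it.
if grep -Eq '^[[:space:]]*[^#[:space:]][^[:space:]]*[[:space:]]+/boot([[:space:]]|$)' "$fstab"; then
  echo "column: found"
else
  echo "column: missed"
fi
# Prints: naive: missed / column: found

rm -f "$fstab"
```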
42 changes: 42 additions & 0 deletions install/config/nvidia-suspend-fix.sh
@@ -0,0 +1,42 @@
#!/bin/bash

# Fix NVIDIA + hyprlock suspend freeze issue
# See: https://github.com/basecamp/omarchy/issues/5277

echo "Applying NVIDIA suspend fix..."

# Get absolute path for the script
HYPRLOCK_SCRIPT="$(realpath "$(which hyprlock 2>/dev/null || echo '/usr/bin/hyprlock')" 2>/dev/null)"

# Check if user is on NVIDIA
if command -v nvidia-smi &>/dev/null; then
echo "NVIDIA GPU detected, applying suspend fix..."

# Create a systemd service to stop hyprlock before suspend
Comment on lines +6 to +15
Copilot AI Apr 24, 2026

This installer script isn’t referenced anywhere in the install flow (no calls from install/config/all.sh or hardware/nvidia.sh found), so the suspend fix won’t actually be applied on new installs. Wire this script into the installer (or move the logic into an existing invoked script) so it runs when NVIDIA is detected.

cat << SYSTEMDEOF | sudo tee /etc/systemd/system/hyprlock-suspend.service > /dev/null
[Unit]
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service
Copilot AI Apr 24, 2026

The generated unit uses After=hypridle.service, but this repo starts hypridle via Hyprland autostart (not a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective and may confuse future debugging; consider removing it or ordering against an appropriate system target/unit.

Suggested change
After=hypridle.service


[Service]
Type=oneshot
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock
Comment on lines +25 to +27
Copilot AI Apr 24, 2026

pkill exits non-zero when no matching process is found. As written, that would mark the unit as failed on suspend/resume when hyprlock isn’t running (and could interfere with sleep on some setups). Make these ExecStart/ExecStop commands non-fatal when hyprlock isn’t present (e.g., ignore exit status 1).

Suggested change
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock
ExecStart=-/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=-/usr/bin/pkill -CONT hyprlock

TimeoutStopSec=5

[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
SYSTEMDEOF

sudo systemctl enable hyprlock-suspend.service 2>/dev/null || echo "Warning: Could not enable hyprlock-suspend service"
Copilot AI Apr 24, 2026

After writing a new unit file under /etc/systemd/system, this should call systemctl daemon-reload before enabling/starting it; several other installer scripts follow that pattern. Also consider using chrootable_systemctl_enable (and --now when not in chroot) to match the installer’s service-enable convention.

Suggested change
sudo systemctl enable hyprlock-suspend.service 2>/dev/null || echo "Warning: Could not enable hyprlock-suspend service"
sudo systemctl daemon-reload 2>/dev/null || echo "Warning: Could not reload systemd units"
if type chrootable_systemctl_enable >/dev/null 2>&1; then
chrootable_systemctl_enable hyprlock-suspend.service 2>/dev/null || echo "Warning: Could not enable hyprlock-suspend service"
else
sudo systemctl enable hyprlock-suspend.service 2>/dev/null || echo "Warning: Could not enable hyprlock-suspend service"
fi


echo "✓ Created hyprlock-suspend service"
echo "✓ hyprlock will stop before suspend and resume after"
else
echo "No NVIDIA GPU detected, skipping NVIDIA-specific fixes"
fi

echo "NVIDIA suspend fix complete!"
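The pkill exit-status behaviour called out in the review comments above is easy to confirm; the process name here is a deliberately non-existent placeholder:

```shell
#!/bin/bash
# pkill exits non-zero when no process matches the pattern, which systemd
# treats as a unit failure unless the ExecStart/ExecStop path is prefixed
# with "-" to ignore the exit status.
if pkill -0 definitely-not-a-running-process-omx 2>/dev/null; then
  echo "matched a process"
else
  echo "no match: pkill exited non-zero"
fi
```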
32 changes: 32 additions & 0 deletions install/config/snapper-home-config.sh
@@ -0,0 +1,32 @@
#!/bin/bash

# Fix snapper /home config creation for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344

echo "Ensuring snapper /home config is created..."

# Check if /home is on a separate subvolume or btrfs
if mountpoint -q /home 2>/dev/null; then
# /home is a separate mount point
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi
elif [[ -d /home/.snapshots ]]; then
# /home has .snapshots subdirectory, ensure config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home subvolume..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi
Comment on lines +11 to +20
Copilot AI Apr 24, 2026

snapper list-configs | grep -q "home" can match unintended config names (e.g., home-old). Use an anchored match for the config name (as other scripts do with ^home / ^root) to avoid false positives/negatives.

else
echo "/home is not on a separate subvolume, skipping /home snapper config"
fi

# Also ensure root snapper config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "root"; then
echo "Creating snapper config for root..."
sudo snapper -c root create-config / 2>/dev/null || echo "Warning: Could not create root snapper config"
sudo cp $OMARCHY_PATH/default/snapper/root /etc/snapper/configs/root 2>/dev/null || true
fi

echo "Snapper config check complete!"
Comment on lines +3 to +32
Copilot AI Apr 24, 2026

This installer script isn’t referenced by the install flow (no calls found from install/config/all.sh or elsewhere), so it won’t run during installation. Either wire it into the installer or remove it to avoid shipping dead code.

Suggested change
# Fix snapper /home config creation for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344
echo "Ensuring snapper /home config is created..."
# Check if /home is on a separate subvolume or btrfs
if mountpoint -q /home 2>/dev/null; then
# /home is a separate mount point
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi
elif [[ -d /home/.snapshots ]]; then
# /home has .snapshots subdirectory, ensure config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home subvolume..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi
else
echo "/home is not on a separate subvolume, skipping /home snapper config"
fi
# Also ensure root snapper config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "root"; then
echo "Creating snapper config for root..."
sudo snapper -c root create-config / 2>/dev/null || echo "Warning: Could not create root snapper config"
sudo cp $OMARCHY_PATH/default/snapper/root /etc/snapper/configs/root 2>/dev/null || true
fi
echo "Snapper config check complete!"
# This script is intentionally unused.
# It previously contained snapper configuration logic for /home and /,
# but it is not referenced by the installer flow.
# Keep this file as a documented placeholder until the logic is either
# wired into install/config/all.sh or reintroduced in an active installer step.
echo "snapper-home-config.sh is not part of the install flow; skipping."
exit 0

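The substring-match pitfall the review raises above can be demonstrated on a sample config list (the name home-old is hypothetical):

```shell
#!/bin/bash
# A bare "home" pattern also matches similarly named configs such as
# "home-old"; requiring whitespace or end-of-line after the name avoids
# the false positive.
configs=$'root\nhome-old'

if printf '%s\n' "$configs" | grep -q 'home'; then
  echo "loose: matched"
fi

if printf '%s\n' "$configs" | grep -Eq '^home([[:space:]]|$)'; then
  echo "strict: matched"
else
  echo "strict: no match"
fi
# Prints: loose: matched / strict: no match
```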
25 changes: 25 additions & 0 deletions migrations/1777007500.sh
@@ -0,0 +1,25 @@
#!/bin/bash

# Fix /boot permissions security issue
# See: https://github.com/basecamp/omarchy/issues/5377

echo "Fixing /boot permissions for better security..."
Comment on lines +1 to +6
Copilot AI Apr 24, 2026

PR title/description are focused on the NVIDIA + hyprlock suspend freeze, but this PR also adds snapper-related migrations/scripts and a /boot permissions fix. That’s a significant scope expansion; either update the PR description to cover these additional changes (and their rationale) or split them into separate PRs to reduce review/rollback risk.


# Fix /boot directory permissions (should be 700 for security)
sudo chmod 700 /boot 2>/dev/null || echo "Could not change /boot permissions"

# Fix random-seed file permissions if it exists
if [[ -f /boot/loader/random-seed ]]; then
sudo chmod 600 /boot/loader/random-seed 2>/dev/null || echo "Could not change random-seed permissions"
fi

# Verify the fix
if [[ $(stat -c %a /boot 2>/dev/null) == "700" ]]; then
echo "✓ /boot permissions fixed to 700"
fi

if [[ -f /boot/loader/random-seed ]] && [[ $(stat -c %a /boot/loader/random-seed 2>/dev/null) == "600" ]]; then
echo "✓ random-seed permissions fixed to 600"
fi

notify-send "Boot permissions fixed" "Security improvement applied to /boot"
Copilot AI Apr 24, 2026

notify-send can fail in non-graphical contexts (no DISPLAY/DBUS), which would make this migration fail even though the permission changes succeeded. Guard it (check command + session vars) or make it non-fatal (e.g., || true).

Suggested change
notify-send "Boot permissions fixed" "Security improvement applied to /boot"
if command -v notify-send >/dev/null 2>&1 && [[ -n "${DISPLAY:-}" || -n "${WAYLAND_DISPLAY:-}" || -n "${DBUS_SESSION_BUS_ADDRESS:-}" ]]; then
notify-send "Boot permissions fixed" "Security improvement applied to /boot" || true
fi

38 changes: 38 additions & 0 deletions migrations/1777007501.sh
@@ -0,0 +1,38 @@
#!/bin/bash

# Fix snapper /home config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344

echo "Fixing snapper /home config..."

# Check if /home is on btrfs and has .snapshots
if [[ -d /home/.snapshots ]] || mountpoint -q /home 2>/dev/null; then
# Check if /home snapper config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "^home"; then
echo "Creating snapper config for /home..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"

# Copy default config
if [[ -f /etc/snapper/configs/root ]]; then
sudo cp /etc/snapper/configs/root /etc/snapper/configs/home 2>/dev/null || true
# Modify for /home - don't create timeline snapshots
sudo sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' /etc/snapper/configs/home 2>/dev/null || true
sudo sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' /etc/snapper/configs/home 2>/dev/null || true
fi

echo "✓ Created snapper /home config"
else
echo "Snapper /home config already exists"
fi
else
echo "/home is not on btrfs or separate subvolume, skipping"
fi

Comment on lines +3 to +30
Copilot AI Apr 24, 2026

This migration recreates a snapper home config, but the repository already has a migration explicitly dropping /home snapshots/config for performance and to avoid rolling back user data (see migrations/1776927490.sh). Because bin/omarchy-snapshot create iterates over all snapper configs, reintroducing home here will bring back /home snapshots and may contradict restore expectations. Consider removing this migration or ensuring omarchy-snapshot ignores home configs if the new behavior is intentional.

Suggested change
# Fix snapper /home config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344
echo "Fixing snapper /home config..."
# Check if /home is on btrfs and has .snapshots
if [[ -d /home/.snapshots ]] || mountpoint -q /home 2>/dev/null; then
# Check if /home snapper config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "^home"; then
echo "Creating snapper config for /home..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
# Copy default config
if [[ -f /etc/snapper/configs/root ]]; then
sudo cp /etc/snapper/configs/root /etc/snapper/configs/home 2>/dev/null || true
# Modify for /home - don't create timeline snapshots
sudo sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' /etc/snapper/configs/home 2>/dev/null || true
sudo sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' /etc/snapper/configs/home 2>/dev/null || true
fi
echo "✓ Created snapper /home config"
else
echo "Snapper /home config already exists"
fi
else
echo "/home is not on btrfs or separate subvolume, skipping"
fi
# Fix snapper config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344
echo "Fixing snapper config..."
echo "Skipping /home snapper config creation; /home snapshots are intentionally disabled"

# Ensure root config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "^root"; then
echo "Creating snapper config for root..."
sudo snapper -c root create-config / 2>/dev/null || true
sudo cp $OMARCHY_PATH/default/snapper/root /etc/snapper/configs/root 2>/dev/null || true
fi

echo "Snapper config fix complete!"
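The two sed substitutions in the migration above can be exercised against a minimal fragment (the file contents are illustrative, not a full snapper config):

```shell
#!/bin/bash
# Copy-then-edit pattern: rewrite the subvolume path and disable timeline
# snapshots in a duplicated snapper config.
cfg=$(mktemp)
printf 'SUBVOLUME="/"\nTIMELINE_CREATE="yes"\n' > "$cfg"

sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' "$cfg"
sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' "$cfg"

cat "$cfg"
# Prints: SUBVOLUME="/home" / TIMELINE_CREATE="no"
rm -f "$cfg"
```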
50 changes: 50 additions & 0 deletions migrations/1777007502.sh
@@ -0,0 +1,50 @@
#!/bin/bash

# Fix snapshot restore to exclude /home from restoration
# See: https://github.com/basecamp/omarchy/issues/5361

echo "Configuring snapshot restore to exclude /home..."

# The issue is that limine-snapper-restore might be restoring /home along with root
# We need to document and provide a workaround

# Create a wrapper script that warns users about /home
WRAPPER="/usr/local/bin/omarchy-snapshot-restore-safe"
cat > "$WRAPPER" << 'WRAPPEREOF'
#!/bin/bash
# Safe snapshot restore wrapper
# Warns users that /home will NOT be restored

echo "⚠️ WARNING: This will restore the ROOT filesystem only."
echo "⚠️ Your /home directory will NOT be affected."
echo ""
echo "To restore a snapshot:"
echo "1. Reboot and select the snapshot from limine menu"
echo "2. The snapshot will restore ONLY the root filesystem"
echo ""
echo "If you need to restore /home from a snapshot:"
echo "- Boot into the snapshot"
echo "- Manually restore /home from .snapshots subvolumes"
echo ""

if [[ -t 0 ]]; then
read -p "Continue with snapshot restore? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi

exec sudo limine-snapper-restore "$@"
WRAPPEREOF
Comment on lines +12 to +39
Copilot AI Apr 24, 2026

This migration writes to /usr/local/bin/omarchy-snapshot-restore-safe using an unprivileged redirection (cat > "$WRAPPER"). In typical setups this will fail with "permission denied", and because the script doesn’t set -e, the migration can still exit successfully, leaving the wrapper missing. Write the file via sudo (e.g., sudo tee) and ensure failures cause a non-zero exit so the migration runner can report it.


sudo chmod +x "$WRAPPER"

Comment on lines +9 to +42
Copilot AI Apr 24, 2026

This migration creates /usr/local/bin/omarchy-snapshot-restore-safe, but nothing in the repo calls it (the snapshot command still runs limine-snapper-restore directly). Either wire the wrapper into the restore path or drop it; otherwise the migration adds an unused binary that can drift out of date.

Suggested change
# We need to document and provide a workaround
# Create a wrapper script that warns users about /home
WRAPPER="/usr/local/bin/omarchy-snapshot-restore-safe"
cat > "$WRAPPER" << 'WRAPPEREOF'
#!/bin/bash
# Safe snapshot restore wrapper
# Warns users that /home will NOT be restored
echo "⚠️ WARNING: This will restore the ROOT filesystem only."
echo "⚠️ Your /home directory will NOT be affected."
echo ""
echo "To restore a snapshot:"
echo "1. Reboot and select the snapshot from limine menu"
echo "2. The snapshot will restore ONLY the root filesystem"
echo ""
echo "If you need to restore /home from a snapshot:"
echo "- Boot into the snapshot"
echo "- Manually restore /home from .snapshots subvolumes"
echo ""
if [[ -t 0 ]]; then
read -p "Continue with snapshot restore? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
exec sudo limine-snapper-restore "$@"
WRAPPEREOF
sudo chmod +x "$WRAPPER"
# Document the current behavior so users know /home is not part of the restore flow

# Also add documentation to the snapshot script
echo ""
echo "✅ Snapshot restore is configured to restore ROOT only"
echo "✅ /home will NOT be restored during snapshot operations"
echo ""
echo "If you've already had /home data loss:"
echo "1. Check .snapshots directory for backup of /home"
echo "2. You may need to manually restore from those snapshots"
39 changes: 39 additions & 0 deletions migrations/1777007503.sh
@@ -0,0 +1,39 @@
#!/bin/bash

# Fix NVIDIA + hyprlock suspend freeze issue
# See: https://github.com/basecamp/omarchy/issues/5277

echo "Applying NVIDIA suspend fix..."

# Check if user is on NVIDIA
if command -v nvidia-smi &>/dev/null; then
echo "NVIDIA GPU detected, applying suspend fix..."

# Create a systemd service to stop hyprlock before suspend
cat << SYSTEMDEOF | sudo tee /etc/systemd/system/hyprlock-suspend.service > /dev/null
[Unit]
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service
Copilot AI Apr 24, 2026

The generated unit uses After=hypridle.service, but hypridle is launched via Hyprland autostart in this repo (not as a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective; consider removing it or ordering against an appropriate system target/unit.

Suggested change
After=hypridle.service


[Service]
Type=oneshot
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock
Comment on lines +22 to +24
Copilot AI Apr 24, 2026

pkill returns exit code 1 when no matching process exists; as written, this will cause the service to fail on suspend/resume if hyprlock isn’t running. Make the ExecStart/ExecStop non-fatal when there’s nothing to signal (e.g., ignore exit status 1) to avoid failed sleep hooks.

Suggested change
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock
ExecStart=-/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=-/usr/bin/pkill -CONT hyprlock

TimeoutStopSec=5

[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
SYSTEMDEOF

Copilot AI Apr 24, 2026

After writing a new unit file into /etc/systemd/system, the migration should run systemctl daemon-reload before enabling it so systemd can pick it up immediately. This matches the pattern used by other scripts in the repo when adding services/drop-ins.

Suggested change
sudo systemctl daemon-reload

sudo systemctl enable hyprlock-suspend.service 2>/dev/null || echo "Warning: Could not enable hyprlock-suspend service"

echo "✓ Created hyprlock-suspend service"
echo "✓ hyprlock will stop before suspend and resume after"

notify-send "NVIDIA suspend fix applied" "Please reboot for changes to take effect" 2>/dev/null || true
else
echo "No NVIDIA GPU detected, skipping NVIDIA-specific fixes"
fi