Fix NVIDIA + hyprlock system freeze on suspend #5421
kuro-toji wants to merge 4 commits into basecamp:dev from
Conversation
The /boot mount point and random-seed file were world accessible, which is a security issue per bootctl warnings. This fix:
- Sets /boot directory permissions to 700
- Sets random-seed file permissions to 600
- Runs bootctl random-seed to regenerate with correct permissions

Fixes: basecamp#5377
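A minimal sketch of the intended hardening, assuming the ESP is mounted at /boot and the seed lives at systemd-boot's default path (`/boot/loader/random-seed` is an assumption):

```bash
# Restrict /boot so only root can traverse it (mount point assumed)
sudo chmod 700 /boot

# Restrict the boot loader random seed, if present
# (path assumed: systemd-boot's default location on the ESP)
[[ -f /boot/loader/random-seed ]] && sudo chmod 600 /boot/loader/random-seed

# Regenerate the seed so it is written with the tightened permissions
sudo bootctl random-seed
```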
On chroot installations, the snapper /home config wasn't being created, leading to silent failures and disk space issues as snapshot subvolumes kept growing without cleanup policies. This fix ensures:
- /home snapper config is created when /home is on btrfs
- Root snapper config is verified to exist
- Config is copied from defaults with appropriate modifications

Fixes: basecamp#5344
Users reported that omarchy-snapshot restore was also restoring /home, causing data loss of user files. This fix:
- Updates omarchy-snapshot to show clear warnings about /home exclusion
- Documents that /home is NOT restored during snapshot operations
- Creates a safe wrapper script for snapshot restore

The root cause may be in limine-snapper-restore, but until that's fixed, this provides user awareness and prevents accidental data loss.

Fixes: basecamp#5361
Pull request overview
Adds a systemd sleep hook intended to prevent NVIDIA suspend/resume freezes caused by hyprlock, plus several additional migrations/installer scripts related to snapper restore behavior and /boot permissions.
Changes:
- Add hyprlock-suspend.service generation (STOP/CONT hyprlock) via installer script + migration.
- Add migrations/scripts for snapper (/home config + restore warnings) and /boot permission hardening.
- Update omarchy-snapshot restore to print warnings about what gets restored.
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| migrations/1777007503.sh | Creates/enables hyprlock-suspend.service on existing installs. |
| install/config/nvidia-suspend-fix.sh | Adds installer-time creation of hyprlock-suspend.service (currently not wired into install flow). |
| bin/omarchy-snapshot | Adds restore-time messaging about root-only restores. |
| migrations/1777007502.sh | Attempts to add a “safe restore” wrapper + messaging (currently has permission/usage issues). |
| migrations/1777007501.sh | Recreates snapper /home config (conflicts with prior “drop /home snapshots” direction). |
| install/config/snapper-home-config.sh | Installer helper for ensuring snapper /home config exists (currently not wired into install flow). |
| migrations/1777007500.sh | Migration to harden /boot permissions (includes an unguarded desktop notification). |
| install/config/boot-permissions-fix.sh | Installer helper for /boot permission hardening (currently not wired into install flow). |
```bash
# Fix snapper /home config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344

echo "Fixing snapper /home config..."

# Check if /home is on btrfs and has .snapshots
if [[ -d /home/.snapshots ]] || mountpoint -q /home 2>/dev/null; then
  # Check if /home snapper config exists
  if ! sudo snapper list-configs 2>/dev/null | grep -q "^home"; then
    echo "Creating snapper config for /home..."
    sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"

    # Copy default config
    if [[ -f /etc/snapper/configs/root ]]; then
      sudo cp /etc/snapper/configs/root /etc/snapper/configs/home 2>/dev/null || true
      # Modify for /home - don't create timeline snapshots
      sudo sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' /etc/snapper/configs/home 2>/dev/null || true
      sudo sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' /etc/snapper/configs/home 2>/dev/null || true
    fi

    echo "✓ Created snapper /home config"
  else
    echo "Snapper /home config already exists"
  fi
else
  echo "/home is not on btrfs or separate subvolume, skipping"
fi
```
This migration recreates a snapper home config, but the repository already has a migration explicitly dropping /home snapshots/config for performance and to avoid rolling back user data (see migrations/1776927490.sh). Because bin/omarchy-snapshot create iterates over all snapper configs, reintroducing home here will bring back /home snapshots and may contradict restore expectations. Consider removing this migration or ensuring omarchy-snapshot ignores home configs if the new behavior is intentional.
Suggested change:
```diff
-# Fix snapper /home config for chroot installations
-# See: https://github.com/basecamp/omarchy/issues/5344
-echo "Fixing snapper /home config..."
-# Check if /home is on btrfs and has .snapshots
-if [[ -d /home/.snapshots ]] || mountpoint -q /home 2>/dev/null; then
-  # Check if /home snapper config exists
-  if ! sudo snapper list-configs 2>/dev/null | grep -q "^home"; then
-    echo "Creating snapper config for /home..."
-    sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
-    # Copy default config
-    if [[ -f /etc/snapper/configs/root ]]; then
-      sudo cp /etc/snapper/configs/root /etc/snapper/configs/home 2>/dev/null || true
-      # Modify for /home - don't create timeline snapshots
-      sudo sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' /etc/snapper/configs/home 2>/dev/null || true
-      sudo sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' /etc/snapper/configs/home 2>/dev/null || true
-    fi
-    echo "✓ Created snapper /home config"
-  else
-    echo "Snapper /home config already exists"
-  fi
-else
-  echo "/home is not on btrfs or separate subvolume, skipping"
-fi
+# Fix snapper config for chroot installations
+# See: https://github.com/basecamp/omarchy/issues/5344
+echo "Fixing snapper config..."
+echo "Skipping /home snapper config creation; /home snapshots are intentionally disabled"
```
```bash
#!/bin/bash

# Fix /boot permissions security issue
# See: https://github.com/basecamp/omarchy/issues/5377

echo "Fixing /boot permissions for better security..."
```
PR title/description are focused on the NVIDIA + hyprlock suspend freeze, but this PR also adds snapper-related migrations/scripts and a /boot permissions fix. That’s a significant scope expansion; either update the PR description to cover these additional changes (and their rationale) or split them into separate PRs to reduce review/rollback risk.
```bash
# We need to document and provide a workaround

# Create a wrapper script that warns users about /home
WRAPPER="/usr/local/bin/omarchy-snapshot-restore-safe"
cat > "$WRAPPER" << 'WRAPPEREOF'
#!/bin/bash
# Safe snapshot restore wrapper
# Warns users that /home will NOT be restored

echo "⚠️ WARNING: This will restore the ROOT filesystem only."
echo "⚠️ Your /home directory will NOT be affected."
echo ""
echo "To restore a snapshot:"
echo "1. Reboot and select the snapshot from limine menu"
echo "2. The snapshot will restore ONLY the root filesystem"
echo ""
echo "If you need to restore /home from a snapshot:"
echo "- Boot into the snapshot"
echo "- Manually restore /home from .snapshots subvolumes"
echo ""

if [[ -t 0 ]]; then
  read -p "Continue with snapshot restore? (y/N) " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    exit 1
  fi
fi

exec sudo limine-snapper-restore "$@"
WRAPPEREOF

sudo chmod +x "$WRAPPER"
```
This migration creates /usr/local/bin/omarchy-snapshot-restore-safe, but nothing in the repo calls it (the snapshot command still runs limine-snapper-restore directly). Either wire the wrapper into the restore path or drop it; otherwise the migration adds an unused binary that can drift out of date.
Suggested change:
```diff
-# We need to document and provide a workaround
-# Create a wrapper script that warns users about /home
-WRAPPER="/usr/local/bin/omarchy-snapshot-restore-safe"
-cat > "$WRAPPER" << 'WRAPPEREOF'
-#!/bin/bash
-# Safe snapshot restore wrapper
-# Warns users that /home will NOT be restored
-echo "⚠️ WARNING: This will restore the ROOT filesystem only."
-echo "⚠️ Your /home directory will NOT be affected."
-echo ""
-echo "To restore a snapshot:"
-echo "1. Reboot and select the snapshot from limine menu"
-echo "2. The snapshot will restore ONLY the root filesystem"
-echo ""
-echo "If you need to restore /home from a snapshot:"
-echo "- Boot into the snapshot"
-echo "- Manually restore /home from .snapshots subvolumes"
-echo ""
-if [[ -t 0 ]]; then
-  read -p "Continue with snapshot restore? (y/N) " -n 1 -r
-  echo
-  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
-    exit 1
-  fi
-fi
-exec sudo limine-snapper-restore "$@"
-WRAPPEREOF
-sudo chmod +x "$WRAPPER"
+# Document the current behavior so users know /home is not part of the restore flow
```
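If wiring the wrapper in is preferred instead, a hypothetical hook in the restore path of `bin/omarchy-snapshot` could look like this (the surrounding case structure is an assumption about that script's layout):

```bash
restore)
  # Prefer the warning wrapper when the migration has installed it (hypothetical hook)
  if [[ -x /usr/local/bin/omarchy-snapshot-restore-safe ]]; then
    exec /usr/local/bin/omarchy-snapshot-restore-safe "$@"
  fi
  sudo limine-snapper-restore "$@"
  ;;
```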
```bash
# Ensure /boot is mounted with proper permissions
# Add to fstab if not already present with correct options
if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
```
The fstab check is looking for lines that start with /boot, but in fstab the mountpoint is typically the second column (e.g., UUID=... /boot vfat ...). This will warn incorrectly on most systems. Match the mountpoint column instead (e.g., a regex for whitespace + /boot + whitespace).
Suggested change:
```diff
-if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
+if ! grep -Eq '^[[:space:]]*[^#[:space:]][^[:space:]]*[[:space:]]+/boot([[:space:]]|$)' /etc/fstab 2>/dev/null; then
```
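For example, the new pattern matches a typical entry like `UUID=ABCD-1234 /boot vfat defaults 0 2` (UUID illustrative) while still skipping comment lines and longer mountpoints such as `/boot/efi`, since `/boot` must be followed by whitespace or end of line.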
```ini
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service
```
The generated unit uses After=hypridle.service, but hypridle is launched via Hyprland autostart in this repo (not as a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective; consider removing it or ordering against an appropriate system target/unit.
Suggested change:
```diff
-After=hypridle.service
```
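If an explicit ordering anchor is still desired, `Before=sleep.target` is the conventional hook for units that must run ahead of any suspend variant, since the sleep-related targets all pull it in.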
```ini
[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
SYSTEMD
```
After writing a new unit file into /etc/systemd/system, the migration should run systemctl daemon-reload before enabling it so systemd can pick it up immediately. This matches the pattern used by other scripts in the repo when adding services/drop-ins.
Suggested change:
```diff
+sudo systemctl daemon-reload
```
```bash
  if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
    echo "Creating snapper config for /home..."
    sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
  fi
elif [[ -d /home/.snapshots ]]; then
  # /home has .snapshots subdirectory, ensure config exists
  if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
    echo "Creating snapper config for /home subvolume..."
    sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
  fi
```
snapper list-configs | grep -q "home" can match unintended config names (e.g., home-old). Use an anchored match for the config name (as other scripts do with ^home / ^root) to avoid false positives/negatives.
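A minimal sketch of the anchored check, assuming `snapper list-configs` prints the config name in the first column of its table output:

```bash
# Anchor at line start so "home" doesn't match names like "home-old"
if ! sudo snapper list-configs 2>/dev/null | grep -qE '^home([[:space:]]|$)'; then
  echo "Creating snapper config for /home..."
fi
```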
| echo "Applying NVIDIA suspend fix..." | ||
|
|
||
| # The issue is that hyprlock holds DRM/GBM resources during suspend, | ||
| # preventing NVIDIA from entering proper suspend state | ||
|
|
||
| # Check if user is on NVIDIA | ||
| if command -v nvidia-smi &>/dev/null; then | ||
| echo "NVIDIA GPU detected, applying suspend fix..." | ||
|
|
||
| # Create a systemd service to stop hyprlock before suspend |
This installer script isn’t referenced anywhere in the install flow (no calls from install/config/all.sh or hardware/nvidia.sh found), so the suspend fix won’t actually be applied on new installs. Wire this script into the installer (or move the logic into an existing invoked script) so it runs when NVIDIA is detected.
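A hypothetical wiring point, assuming the installer sources its config scripts from install/config/all.sh (the exact file and the `$OMARCHY_PATH` variable are assumptions):

```bash
# In install/config/all.sh (hypothetical): run the fix only on NVIDIA systems
if command -v nvidia-smi &>/dev/null; then
  source "$OMARCHY_PATH/install/config/nvidia-suspend-fix.sh"
fi
```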
```ini
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service
```
The generated unit uses After=hypridle.service, but this repo starts hypridle via Hyprland autostart (not a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective and may confuse future debugging; consider removing it or ordering against an appropriate system target/unit.
Suggested change:
```diff
-After=hypridle.service
```
```ini
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock
```
pkill exits non-zero when no matching process is found. As written, that would mark the unit as failed on suspend/resume when hyprlock isn’t running (and could interfere with sleep on some setups). Make these ExecStart/ExecStop commands non-fatal when hyprlock isn’t present (e.g., ignore exit status 1).
Suggested change:
```diff
-ExecStart=/usr/bin/pkill -STOP hyprlock
+ExecStart=-/usr/bin/pkill -STOP hyprlock
 RemainAfterExit=yes
-ExecStop=/usr/bin/pkill -CONT hyprlock
+ExecStop=-/usr/bin/pkill -CONT hyprlock
```
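(In systemd unit syntax, the leading `-` on an Exec line tells systemd to treat a non-zero exit status from that command as success, so pkill returning 1 when hyprlock isn't running no longer marks the unit failed.)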
System hard-freezes on suspend with hyprlock active on NVIDIA GPUs. No resume possible, requires hard reboot.

Root cause: hyprlock holds DRM/GBM resources that prevent the NVIDIA driver from properly entering suspend state.

Fix: Create a systemd service that stops hyprlock before suspend and resumes it after wake.

Fixes: basecamp#5277
kuro-toji force-pushed from d9319bb to a17e0b7
This PR is superseded by #5422, which combines all fixes and addresses the Copilot review feedback. Please review that PR instead.
Summary
System hard-freezes on suspend with hyprlock active on NVIDIA GPUs. No resume possible, requires hard reboot.
Issue
When suspending with hyprlock running:
- System hard-freezes with `PM: suspend entry (deep)` as the last kernel log line

Root cause: hyprlock holds DRM/GBM context that prevents the NVIDIA driver from properly suspending.
Fix
Create a systemd service (`hyprlock-suspend.service`) that:
- Stops hyprlock (`pkill -STOP`) before entering suspend
- Resumes hyprlock (`pkill -CONT`) after wake
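For reference, the generated unit (assembled from the diff hunks above, with the review's `-` prefix suggestion applied) looks roughly like this; the `[Service]` header and `Type=oneshot` are assumptions, since the diff shows only the Exec lines and targets:

```ini
[Unit]
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no

[Service]
# Type=oneshot is assumed; the PR diff shows only the Exec lines and targets
Type=oneshot
# "-" prefix: a missing hyprlock process must not fail the unit
ExecStart=-/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=-/usr/bin/pkill -CONT hyprlock

[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
```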
Files Changed
- `install/config/nvidia-suspend-fix.sh` - New installer script
- `migrations/1777007503.sh` - Migration to fix existing installations
Testing
- `systemctl suspend`

Fixes: #5277