
Fix NVIDIA + hyprlock system freeze on suspend #5421

Closed
kuro-toji wants to merge 4 commits into basecamp:dev from kuro-toji:fix/nvidia-hyprlock-suspend-freeze

Conversation

@kuro-toji

Summary

System hard-freezes on suspend when hyprlock is active on NVIDIA GPUs. No resume is possible; a hard power cycle is required.

Issue

When suspending with hyprlock running:

  1. NVIDIA suspend service completes successfully
  2. System enters deep suspend
  3. Resume fails: no journal entries appear after the "PM: suspend entry (deep)" log line
  4. Hard power cycle required

Root cause: hyprlock holds a DRM/GBM context that prevents the NVIDIA driver from properly suspending.

Fix

Create a systemd service (hyprlock-suspend.service) that:

  • Stops hyprlock (pkill -STOP) before entering suspend
  • Resumes hyprlock (pkill -CONT) after wake
  • Runs before suspend.target, hibernate.target, hybrid-suspend.target
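
As a sketch, the bullets above correspond to a unit file along these lines. This is assembled from the PR's described behavior, not copied from the diff; the "-" prefix on the pkill commands is an addition so a missing hyprlock process does not mark the unit failed, and note that stock systemd ships hybrid-sleep.target rather than hybrid-suspend.target (the latter name is reproduced here as the PR uses it).

```ini
# /etc/systemd/system/hyprlock-suspend.service (sketch)
[Unit]
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no

[Service]
Type=oneshot
RemainAfterExit=yes
# "-" prefix: don't treat pkill's exit 1 (no matching process) as a failure
ExecStart=-/usr/bin/pkill -STOP hyprlock
ExecStop=-/usr/bin/pkill -CONT hyprlock

[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
```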

Files Changed

  • install/config/nvidia-suspend-fix.sh - New installer script
  • migrations/1777007503.sh - Migration to fix existing installations

Testing

  1. Install on NVIDIA system
  2. Suspend using systemctl suspend
  3. System should wake properly
  4. hyprlock should resume after wake

Fixes: #5277

Assistant added 3 commits April 24, 2026 11:33
The /boot mount point and random-seed file were world accessible,
which is a security issue per bootctl warnings.

This fix:
- Sets /boot directory permissions to 700
- Sets random-seed file permissions to 600
- Runs bootctl random-seed to regenerate with correct permissions

Fixes: basecamp#5377
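
The permission changes in this commit can be sketched as below, demonstrated on a throwaway directory since touching the real /boot requires root (and on a vfat /boot the effective mode comes from mount options such as umask= rather than chmod):

```shell
# Demo of the hardening steps on a temp dir standing in for /boot
d=$(mktemp -d)
touch "$d/random-seed"

chmod 700 "$d"               # directory: owner-only access
chmod 600 "$d/random-seed"   # seed file: owner read/write only

stat -c '%a' "$d"             # prints 700
stat -c '%a' "$d/random-seed" # prints 600

# On a real system, finish with: sudo bootctl random-seed
rm -rf "$d"
```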
On chroot installations, the snapper /home config wasn't being created,
leading to silent failures and disk space issues as snapshot subvolumes
kept growing without cleanup policies.

This fix ensures:
- /home snapper config is created when /home is on btrfs
- Root snapper config is verified to exist
- Config is copied from defaults with appropriate modifications

Fixes: basecamp#5344
Users reported that omarchy-snapshot restore was also restoring /home,
causing data loss of user files.

This fix:
- Updates omarchy-snapshot to show clear warnings about /home exclusion
- Documents that /home is NOT restored during snapshot operations
- Creates a safe wrapper script for snapshot restore

The root cause may be in limine-snapper-restore, but until that's fixed,
this provides user awareness and prevents accidental data loss.

Fixes: basecamp#5361
Copilot AI review requested due to automatic review settings April 24, 2026 06:06
Contributor

Copilot AI left a comment


Pull request overview


Adds a systemd sleep hook intended to prevent NVIDIA suspend/resume freezes caused by hyprlock, plus several additional migrations/installer scripts related to snapper restore behavior and /boot permissions.

Changes:

  • Add hyprlock-suspend.service generation (STOP/CONT hyprlock) via installer script + migration.
  • Add migrations/scripts for snapper (/home config + restore warnings) and /boot permission hardening.
  • Update omarchy-snapshot restore to print warnings about what gets restored.

Reviewed changes

Copilot reviewed 7 out of 8 changed files in this pull request and generated 16 comments.

Show a summary per file
File Description
migrations/1777007503.sh Creates/enables hyprlock-suspend.service on existing installs.
install/config/nvidia-suspend-fix.sh Adds installer-time creation of hyprlock-suspend.service (currently not wired into install flow).
bin/omarchy-snapshot Adds restore-time messaging about root-only restores.
migrations/1777007502.sh Attempts to add a “safe restore” wrapper + messaging (currently has permission/usage issues).
migrations/1777007501.sh Recreates snapper /home config (conflicts with prior “drop /home snapshots” direction).
install/config/snapper-home-config.sh Installer helper for ensuring snapper /home config exists (currently not wired into install flow).
migrations/1777007500.sh Migration to harden /boot permissions (includes an unguarded desktop notification).
install/config/boot-permissions-fix.sh Installer helper for /boot permission hardening (currently not wired into install flow).


Comment thread migrations/1777007501.sh
Comment on lines +3 to +30
# Fix snapper /home config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344

echo "Fixing snapper /home config..."

# Check if /home is on btrfs and has .snapshots
if [[ -d /home/.snapshots ]] || mountpoint -q /home 2>/dev/null; then
  # Check if /home snapper config exists
  if ! sudo snapper list-configs 2>/dev/null | grep -q "^home"; then
    echo "Creating snapper config for /home..."
    sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"

    # Copy default config
    if [[ -f /etc/snapper/configs/root ]]; then
      sudo cp /etc/snapper/configs/root /etc/snapper/configs/home 2>/dev/null || true
      # Modify for /home - don't create timeline snapshots
      sudo sed -i 's|SUBVOLUME="/"|SUBVOLUME="/home"|' /etc/snapper/configs/home 2>/dev/null || true
      sudo sed -i 's|TIMELINE_CREATE="yes"|TIMELINE_CREATE="no"|' /etc/snapper/configs/home 2>/dev/null || true
    fi

    echo "✓ Created snapper /home config"
  else
    echo "Snapper /home config already exists"
  fi
else
  echo "/home is not on btrfs or separate subvolume, skipping"
fi


Copilot AI Apr 24, 2026


This migration recreates a snapper home config, but the repository already has a migration explicitly dropping /home snapshots/config for performance and to avoid rolling back user data (see migrations/1776927490.sh). Because bin/omarchy-snapshot create iterates over all snapper configs, reintroducing home here will bring back /home snapshots and may contradict restore expectations. Consider removing this migration or ensuring omarchy-snapshot ignores home configs if the new behavior is intentional.

Suggested change (replaces the block quoted above in its entirety):

# Fix snapper config for chroot installations
# See: https://github.com/basecamp/omarchy/issues/5344

echo "Fixing snapper config..."
echo "Skipping /home snapper config creation; /home snapshots are intentionally disabled"

Comment thread migrations/1777007500.sh
Comment on lines +1 to +6
#!/bin/bash

# Fix /boot permissions security issue
# See: https://github.com/basecamp/omarchy/issues/5377

echo "Fixing /boot permissions for better security..."

Copilot AI Apr 24, 2026


PR title/description are focused on the NVIDIA + hyprlock suspend freeze, but this PR also adds snapper-related migrations/scripts and a /boot permissions fix. That’s a significant scope expansion; either update the PR description to cover these additional changes (and their rationale) or split them into separate PRs to reduce review/rollback risk.

Comment thread migrations/1777007502.sh
Comment on lines +9 to +42
# We need to document and provide a workaround

# Create a wrapper script that warns users about /home
WRAPPER="/usr/local/bin/omarchy-snapshot-restore-safe"
cat > "$WRAPPER" << 'WRAPPEREOF'
#!/bin/bash
# Safe snapshot restore wrapper
# Warns users that /home will NOT be restored

echo "⚠️ WARNING: This will restore the ROOT filesystem only."
echo "⚠️ Your /home directory will NOT be affected."
echo ""
echo "To restore a snapshot:"
echo "1. Reboot and select the snapshot from limine menu"
echo "2. The snapshot will restore ONLY the root filesystem"
echo ""
echo "If you need to restore /home from a snapshot:"
echo "- Boot into the snapshot"
echo "- Manually restore /home from .snapshots subvolumes"
echo ""

if [[ -t 0 ]]; then
  read -p "Continue with snapshot restore? (y/N) " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    exit 1
  fi
fi

exec sudo limine-snapper-restore "$@"
WRAPPEREOF

sudo chmod +x "$WRAPPER"


Copilot AI Apr 24, 2026


This migration creates /usr/local/bin/omarchy-snapshot-restore-safe, but nothing in the repo calls it (the snapshot command still runs limine-snapper-restore directly). Either wire the wrapper into the restore path or drop it; otherwise the migration adds an unused binary that can drift out of date.

Suggested change (replaces the entire wrapper-creation block above with a single comment):

# Document the current behavior so users know /home is not part of the restore flow

# Ensure /boot is mounted with proper permissions
# Add to fstab if not already present with correct options
if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then

Copilot AI Apr 24, 2026


The fstab check is looking for lines that start with /boot, but in fstab the mountpoint is typically the second column (e.g., UUID=... /boot vfat ...). This will warn incorrectly on most systems. Match the mountpoint column instead (e.g., a regex for whitespace + /boot + whitespace).

Suggested change

- if ! grep -q "^/boot" /etc/fstab 2>/dev/null; then
+ if ! grep -Eq '^[[:space:]]*[^#[:space:]][^[:space:]]*[[:space:]]+/boot([[:space:]]|$)' /etc/fstab 2>/dev/null; then
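
The difference is easy to demonstrate against a typical fstab entry, where the mountpoint sits in the second column (the sample line below is made up for illustration):

```shell
line='UUID=ABCD-1234  /boot  vfat  umask=0077  0  2'

# Naive check: anchors /boot at the start of the line, so it misses this entry
echo "$line" | grep -q '^/boot' && echo "naive: match" || echo "naive: no match"

# Column-aware check: matches /boot as the mountpoint (second) field
echo "$line" | grep -Eq '^[[:space:]]*[^#[:space:]][^[:space:]]*[[:space:]]+/boot([[:space:]]|$)' \
  && echo "column: match" || echo "column: no match"
```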

Comment thread migrations/1777007503.sh
Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service

Copilot AI Apr 24, 2026


The generated unit uses After=hypridle.service, but hypridle is launched via Hyprland autostart in this repo (not as a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective; consider removing it or ordering against an appropriate system target/unit.

Suggested change: remove the After=hypridle.service line from the unit.

Comment thread migrations/1777007503.sh
[Install]
WantedBy=suspend.target hibernate.target hybrid-suspend.target
SYSTEMD


Copilot AI Apr 24, 2026


After writing a new unit file into /etc/systemd/system, the migration should run systemctl daemon-reload before enabling it so systemd can pick it up immediately. This matches the pattern used by other scripts in the repo when adding services/drop-ins.

Suggested change: run sudo systemctl daemon-reload after writing the unit file, before enabling it.

Comment on lines +11 to +20
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi
elif [[ -d /home/.snapshots ]]; then
# /home has .snapshots subdirectory, ensure config exists
if ! sudo snapper list-configs 2>/dev/null | grep -q "home"; then
echo "Creating snapper config for /home subvolume..."
sudo snapper -c home create-config /home 2>/dev/null || echo "Warning: Could not create /home snapper config"
fi

Copilot AI Apr 24, 2026


snapper list-configs | grep -q "home" can match unintended config names (e.g., home-old). Use an anchored match for the config name (as other scripts do with ^home / ^root) to avoid false positives/negatives.
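
A quick illustration of the false positive; "myhome" below is a hypothetical config name that merely contains the substring "home":

```shell
# Simulated `snapper list-configs` name column, one config per line
# Unanchored: also matches "myhome", wrongly reporting a home config exists
printf 'root\nmyhome\n' | grep -q 'home' \
  && echo "unanchored: found" || echo "unanchored: not found"

# Anchored: only a line that starts with "home" matches
printf 'root\nmyhome\n' | grep -q '^home' \
  && echo "anchored: found" || echo "anchored: not found"
```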

Comment on lines +6 to +15
echo "Applying NVIDIA suspend fix..."

# The issue is that hyprlock holds DRM/GBM resources during suspend,
# preventing NVIDIA from entering proper suspend state

# Check if user is on NVIDIA
if command -v nvidia-smi &>/dev/null; then
  echo "NVIDIA GPU detected, applying suspend fix..."

  # Create a systemd service to stop hyprlock before suspend

Copilot AI Apr 24, 2026


This installer script isn’t referenced anywhere in the install flow (no calls from install/config/all.sh or hardware/nvidia.sh found), so the suspend fix won’t actually be applied on new installs. Wire this script into the installer (or move the logic into an existing invoked script) so it runs when NVIDIA is detected.

Description=Stop hyprlock before suspend/hibernate
Before=suspend.target hibernate.target hybrid-suspend.target
DefaultDependencies=no
After=hypridle.service

Copilot AI Apr 24, 2026


The generated unit uses After=hypridle.service, but this repo starts hypridle via Hyprland autostart (not a system unit), and there’s no system hypridle.service to order against. This dependency is ineffective and may confuse future debugging; consider removing it or ordering against an appropriate system target/unit.

Suggested change: remove the After=hypridle.service line from the unit.

Comment on lines +25 to +27
ExecStart=/usr/bin/pkill -STOP hyprlock
RemainAfterExit=yes
ExecStop=/usr/bin/pkill -CONT hyprlock

Copilot AI Apr 24, 2026


pkill exits non-zero when no matching process is found. As written, that would mark the unit as failed on suspend/resume when hyprlock isn’t running (and could interfere with sleep on some setups). Make these ExecStart/ExecStop commands non-fatal when hyprlock isn’t present (e.g., ignore exit status 1).

Suggested change

- ExecStart=/usr/bin/pkill -STOP hyprlock
+ ExecStart=-/usr/bin/pkill -STOP hyprlock
  RemainAfterExit=yes
- ExecStop=/usr/bin/pkill -CONT hyprlock
+ ExecStop=-/usr/bin/pkill -CONT hyprlock
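
The exit-status behavior is easy to verify from a shell (the process name below is deliberately nonexistent):

```shell
# pkill exits 1 when no process matches the pattern; under a plain
# ExecStart= systemd would treat that as a unit failure.
status=0
pkill -STOP no-such-process-xyz-12345 || status=$?
echo "pkill exit status: $status"   # prints: pkill exit status: 1

# systemd's "=-" prefix (ExecStart=-/usr/bin/pkill ...) tells it to
# ignore a nonzero exit status from this command.
```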

System hard-freezes on suspend with hyprlock active on NVIDIA GPUs.
No resume possible, requires hard reboot.

Root cause: hyprlock holds DRM/GBM resources that prevent NVIDIA
driver from properly entering suspend state.

Fix: Create a systemd service that stops hyprlock before suspend
and resumes it after wake.

Fixes: basecamp#5277
@kuro-toji force-pushed the fix/nvidia-hyprlock-suspend-freeze branch from d9319bb to a17e0b7 on April 24, 2026 at 06:18
@kuro-toji
Author

This PR is superseded by #5422 which combines all fixes with proper Copilot review feedback addressed. Please review that PR instead.

@kuro-toji kuro-toji closed this Apr 26, 2026


Development

Successfully merging this pull request may close these issues.

System hard-freeze on suspend with hyprlock active — no resume, requires hard reboot

2 participants