Running LXC containers with NFS storage from a Synology NAS is a common homelab setup - but it comes with some tricky gotchas. Here’s how I diagnosed and fixed persistent container startup failures in my Proxmox environment.
The Problem
My LXC containers started failing to boot after a Proxmox host reboot. The symptoms:
- Containers would hang during startup
- pct start <VMID> would time out
- System logs showed NFS mount failures
- Some containers also had memory allocation errors
Sound familiar? Let’s dig into the root causes.
Root Cause #1: NFS Mount Race Condition
The Issue: LXC containers try to mount NFS shares before the network is fully ready, or before the NAS has finished booting.
When Proxmox boots:
- Network services start
- LXC containers begin initialization
- But - your Synology NAS might still be booting
- Container tries to mount NFS → connection refused
- Container startup fails
The Fix: Add proper mount options and delays
Edit your container’s mount point configuration in /etc/pve/lxc/<VMID>.conf:
mp0: /mnt/pve/synology-nfs/data,mp=/mnt/data,backup=0
Update NFS mount options on the Proxmox host (/etc/fstab or Datacenter → Storage):
192.168.1.100:/volume1/data /mnt/pve/synology-nfs nfs _netdev,soft,timeo=30,retrans=3 0 0
Key options:
- _netdev - Wait for the network before mounting
- soft - Don't hang indefinitely if NFS is unavailable
- timeo=30 - Retransmission timeout in deciseconds, so 3 seconds here (bump to timeo=300 for a full 30 seconds if your NAS is slow to come up)
- retrans=3 - Retry 3 times before failing
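Note that the options only apply after an unmount/remount (or reboot). To confirm they actually took effect, inspect the live mount table - a quick check, assuming the share is mounted under /mnt/pve/synology-nfs as above:
# Show active NFS mounts with their effective options
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS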
Root Cause #2: Incorrect NFS Permissions
The Issue: Even if NFS mounts succeed, containers can’t write to the share due to UID/GID mismatches.
Synology NFS exports have default permissions that might not match your container’s user IDs.
The Fix: Configure proper NFS export permissions
On your Synology NAS:
- Go to Control Panel → Shared Folder
- Select your NFS share → Edit → NFS Permissions
- Configure:
  - Hostname/IP: 192.168.1.0/24 (your Proxmox network)
  - Privilege: Read/Write
  - Squash: Map all users to admin
  - Security: sys
  - Enable asynchronous: ✓
For unprivileged LXC containers, you may need to map UIDs. In /etc/pve/lxc/<VMID>.conf:
# Map container root (UID 0) to host UID 100000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
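With this mapping, files created by the container's root show up as UID/GID 100000 on the host (custom idmap ranges also need matching entries in /etc/subuid and /etc/subgid; the default root:100000:65536 entry already covers the mapping above). A quick sanity check from the Proxmox host - the paths follow the earlier example, and whether the chown sticks depends on your squash setting:
# View the NFS data as the host sees it (numeric UIDs/GIDs)
ls -ln /mnt/pve/synology-nfs/data
# If the container can't write, align ownership with the mapped root UID/GID
chown -R 100000:100000 /mnt/pve/synology-nfs/data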
Root Cause #3: Memory Allocation Issues
The Issue: After updating applications (like Nginx Proxy Manager), containers fail to start due to insufficient memory allocation.
Error in container logs:
FATAL: kernel too old
FATAL: cannot allocate memory
The Fix: Increase memory allocation
Edit /etc/pve/lxc/<VMID>.conf:
# Before
memory: 512
# After
memory: 2048
swap: 512
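If you prefer not to edit the config file by hand, the same change can be made with pct - a minimal example, using VMID 100 as a stand-in:
# Set memory and swap for container 100
pct set 100 --memory 2048 --swap 512
# Confirm the new values
pct config 100 | grep -E 'memory|swap'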
For containers running reverse proxies or databases, I recommend:
- Nginx Proxy Manager: 2GB RAM minimum
- Basic services: 512MB - 1GB
- Docker-in-LXC: 4GB+
Debugging Container Startup Issues
When a container won’t start, here’s my troubleshooting workflow:
1. Check Container Logs
# On Proxmox host
pct start <VMID>
journalctl -u pve-container@<VMID> -f
2. Verify NFS Mounts on Host
# Check if NFS is mounted on Proxmox host
df -h | grep nfs
mount | grep nfs
# Test NFS connectivity
showmount -e 192.168.1.100
# Try a manual mount (create the target directory first)
mkdir -p /mnt/test
mount -t nfs 192.168.1.100:/volume1/data /mnt/test
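If showmount itself hangs, it helps to separate plain network reachability from export problems. A couple of extra probes (192.168.1.100 matches the NAS IP used above):
# Is the NFS TCP port reachable at all?
nc -zv 192.168.1.100 2049
# List RPC services registered on the NAS
rpcinfo -p 192.168.1.100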
3. Check Container Configuration
# View container config
cat /etc/pve/lxc/<VMID>.conf
# Look for:
# - Memory allocation
# - Mount points
# - Network settings
4. Start Container in Debug Mode
# Force the start past a stale Proxmox lock (--skiplock only bypasses the lock, it does not skip mount points)
pct start <VMID> --skiplock
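--skiplock alone rarely tells you why a start fails. For a low-level trace, a common approach is to start the container in the foreground with LXC's own debug logging - a sketch using VMID 101 as a placeholder:
# Foreground start with full LXC debug logging
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
# Review the log once the start fails
less /tmp/lxc-101.log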
Automated Recovery Script
I created a simple script to ensure containers start in the right order after host reboot:
#!/bin/bash
# /usr/local/bin/start-containers.sh
# Wait for network and NFS
sleep 60
# Start containers in order (critical services first)
for VMID in 100 101 102 103; do
echo "Starting container $VMID..."
pct start $VMID
sleep 10 # Give each container time to initialize
done
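The fixed 60-second sleep is a blunt instrument: it either wastes time or still isn't enough if the NAS boots slowly. One refinement, assuming the share lives at /mnt/pve/synology-nfs as in the earlier examples, is to poll for the mount before starting anything:
# Alternative to the fixed sleep: wait for the NFS mount (up to ~5 minutes)
for i in $(seq 1 60); do
    mountpoint -q /mnt/pve/synology-nfs && break
    sleep 5
done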
Make it executable and add to crontab:
chmod +x /usr/local/bin/start-containers.sh
# Add this line to the root crontab (crontab -e):
@reboot /usr/local/bin/start-containers.sh
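Proxmox also has built-in boot ordering that can replace or complement the script: containers flagged to start on boot can be given an order and a post-start delay. For example (VMIDs are illustrative):
# Start 100 first at boot and wait 30s before the next container; 101 follows with a 15s delay
pct set 100 --onboot 1 --startup order=1,up=30
pct set 101 --onboot 1 --startup order=2,up=15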
Best Practices for LXC + NFS
Based on months of running this setup:
- Use dedicated NFS shares - Don’t mix LXC storage with other data
- Monitor NFS performance - Add Prometheus exporters for visibility
- Keep the NAS and Proxmox on the same network segment - Reduces latency
- Use unprivileged containers when possible - Better security, but requires UID mapping
- Document your UID mappings - You’ll forget them, trust me
- Test failover scenarios - Reboot your NAS, test how containers handle it
Alternative: Local Storage + NFS for Data
For critical containers, consider:
- Root filesystem → Local Proxmox storage (fast, reliable)
- Data volumes → NFS mounts (scalable, shared)
Example configuration:
# Container root on local storage
rootfs: local-lvm:vm-100-disk-0,size=8G
# Application data on NFS
mp0: /mnt/pve/synology-nfs/appdata,mp=/var/lib/appdata
This way, containers can start even if NFS is temporarily unavailable.
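If the container already exists, the NFS bind mount can be added with pct set instead of editing the config file - the path and target mirror the example above:
# Add the NFS-backed data directory as a bind mount on container 100 (example VMID)
pct set 100 -mp0 /mnt/pve/synology-nfs/appdata,mp=/var/lib/appdata,backup=0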
Conclusion
Proxmox LXC containers with NFS storage are powerful but require careful configuration. The key takeaways:
- Add _netdev and proper timeout options to NFS mounts
- Configure Synology NFS permissions correctly
- Allocate sufficient memory for your workloads
- Implement startup delays and dependency management
- Always test your setup after making changes
Have you encountered different NFS + LXC issues? Found better solutions? I’d love to hear about them - reach out via GitHub!
Related Resources
- Proxmox LXC Documentation
- NFS Best Practices
- My guide on setting up Immich with Authentik (coming soon)