How to Connect Two Proxmox Servers: Clustering, Migration, and Multi-Node Setups Explained
Connecting two Proxmox servers unlocks a significant leap in capability — live VM migration, high availability, shared storage, and centralized management from a single interface. But "connecting" can mean different things depending on what you're trying to accomplish, and the method you choose shapes everything downstream.
What Does "Connecting" Two Proxmox Servers Actually Mean?
There are a few distinct scenarios people refer to when they talk about connecting Proxmox servers:
- Forming a Proxmox cluster — both nodes join a single management plane via the Proxmox Cluster File System (pmxcfs)
- Setting up shared or replicated storage — so VMs and containers can move between nodes
- Configuring a dedicated cluster or migration network — a separate network link used for replication traffic, corosync heartbeats, or live migration
- Standalone replication — syncing VM disk data between two nodes without full clustering
Each approach has different requirements, trade-offs, and appropriate use cases.
Forming a Two-Node Proxmox Cluster 🖧
The most common method is creating a Proxmox VE cluster using the built-in cluster management tools. Proxmox uses Corosync for cluster communication and pmxcfs as a distributed configuration file system across all nodes.
Step-by-Step Overview
Create the cluster on the first node — In the Proxmox web UI, navigate to Datacenter → Cluster → Create Cluster. Give it a name and select the cluster network interface (ideally a dedicated NIC or VLAN).
Join the second node — On the second Proxmox server, go to Datacenter → Cluster → Join Cluster and paste the join information generated on the first node (Datacenter → Cluster → Join Information), which bundles the cluster's IP address and SSL fingerprint. Note that the joining node must not have any VMs or containers yet.
Verify connectivity — Both nodes must be able to reach each other over the cluster network. Corosync uses UDP ports 5405–5412, which must be open and reachable between them.
Confirm cluster membership — Run `pvecm status` from the shell on either node to verify quorum and node visibility.
Once joined, both nodes appear in the same Proxmox web interface, and you can manage VMs, storage, and networking from either node.
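The same steps can be driven entirely from the shell. A minimal sketch — the cluster name and IP address below are placeholders, not values from any particular setup:

```shell
# On the first node: create the cluster (name is an example)
pvecm create homelab-cluster

# On the second node: join the cluster by pointing at the first
# node's cluster IP (the joining node must not host any guests yet)
pvecm add 192.168.10.1

# On either node: verify quorum and membership
pvecm status
pvecm nodes
```

`pvecm add` prompts for the root password of the existing node and exchanges SSH keys and certificates automatically.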
The Quorum Problem with Two Nodes ⚠️
Two-node clusters have a fundamental quorum challenge. Corosync requires a majority of nodes to agree before making cluster decisions — with only two nodes, losing one immediately breaks quorum, potentially freezing VMs even when the remaining node is healthy.
The standard solution is adding a QDevice (quorum device), which is a lightweight quorum tie-breaker running on a third machine — it doesn't need to be a full Proxmox server. A small VM, a Raspberry Pi, or even a cloud instance running the corosync-qnetd package can serve this role.
Without a QDevice, a two-node cluster works, but surviving a node failure requires manually setting `expected_votes` or disabling quorum enforcement — both of which carry real risks in production environments.
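Setting up a QDevice is a short procedure. A sketch, assuming a Debian-based third machine whose IP (a placeholder here) is reachable from both nodes:

```shell
# On the third machine (the tie-breaker):
apt install corosync-qnetd

# On both Proxmox nodes:
apt install corosync-qdevice

# From one cluster node, register the QDevice by its IP:
pvecm qdevice setup 192.168.10.5

# Verify: the output should now show a "Qdevice" entry
# contributing one vote, for 3 total votes
pvecm status
```

With three total votes, either Proxmox node can fail and the survivor still holds a 2-of-3 majority.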
Networking: Why a Dedicated Cluster Network Matters
Running cluster traffic over the same network as your regular VM traffic introduces latency and potential instability. Best practice is to use a dedicated network interface or at minimum a separate VLAN for:
- Corosync heartbeat traffic — frequent, low-latency communication that confirms each node is alive
- Storage replication — if using Proxmox's built-in ZFS replication or Ceph, this traffic can be substantial
- Live migration — moving RAM contents between nodes in real time
| Traffic Type | Recommended Bandwidth | Latency Sensitivity |
|---|---|---|
| Corosync heartbeat | Low (< 1 Mbps typical) | Very high |
| Live VM migration | High (10 GbE ideal) | Moderate |
| Storage replication | High (depends on change rate) | Moderate |
| VM/container traffic | Varies | Low to moderate |
Even a direct-connect cable between two servers using a second NIC on each gives you a clean, isolated path for cluster internals.
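A direct-connect link like this is plain ifupdown configuration. A sketch, assuming a second NIC named `eno2` on each server and an example subnet:

```shell
# /etc/network/interfaces (excerpt) -- eno2 is an assumed NIC name,
# cabled directly to the other node; addresses are examples
auto eno2
iface eno2 inet static
    address 10.10.10.1/24    # use 10.10.10.2 on the other node
# No gateway: this link carries only cluster-internal traffic
```

When creating the cluster, you can pin Corosync to this link with `pvecm create <name> --link0 10.10.10.1`, and route live migration over it by setting `migration: secure,network=10.10.10.0/24` in `/etc/pve/datacenter.cfg`.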
Shared Storage: The Key to VM Mobility
Connecting two Proxmox nodes in a cluster doesn't automatically let you move VMs between them — the VM disk images must be on shared storage that both nodes can access simultaneously, or you need to replicate them.
Common shared storage options:
- NFS or SMB — simple to set up, performance depends on the file server and network
- iSCSI — block-level storage shared over the network, useful with SAN hardware
- Ceph — distributed, software-defined storage that Proxmox integrates with natively; runs across the nodes themselves
- ZFS replication — not truly shared, but Proxmox can replicate VM disks from one node to another on a schedule; migration requires a brief downtime window
Without shared storage, standard live migration isn't possible — Proxmox can live-migrate a VM with local disks by copying the storage during the move (`qm migrate --online --with-local-disks`), but it's slow and bandwidth-heavy. Scheduled ZFS replication gives you fast offline migration and a fallback, but not zero-downtime movement.
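As a concrete example, attaching an NFS export as cluster-wide shared storage takes one command (the storage ID, server IP, and export path below are placeholders; the GUI equivalent lives under Datacenter → Storage):

```shell
# Register an NFS share as shared storage; in a cluster, storage
# definitions propagate to all nodes automatically via pmxcfs
pvesm add nfs vm-shared \
    --server 192.168.10.20 \
    --export /srv/proxmox \
    --content images,rootdir

# Confirm the storage is active
pvesm status
```

Once a VM's disks live on `vm-shared`, live migration between the nodes needs only the RAM copy, not a disk copy.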
Proxmox Replication Without Full Clustering
If you want two standalone Proxmox servers to stay in sync without forming a cluster, note that the built-in Replication feature (Datacenter → Replication in the UI) only works between cluster members. For truly independent nodes, the `pve-zsync` tool fills the gap: it replicates ZFS-backed VM disks to a remote host over SSH on a cron schedule (every 15 minutes by default). This is common for:
- Disaster recovery setups where a second site acts as a cold standby
- Home labs where quorum complexity isn't worth it
- Environments with a high-latency link between the two nodes, which would destabilize Corosync
The nodes remain independent in this model — no shared management plane, no live migration — but disk state stays synchronized.
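One way to set up this kind of standalone sync is the `pve-zsync` helper that ships with Proxmox VE. A sketch — the VM ID, destination IP, and pool name are placeholders:

```shell
# On the source node: replicate VM 100's ZFS disks to a remote host,
# keeping the last 7 snapshots (runs every 15 minutes via cron by default)
pve-zsync create --source 100 \
    --dest 192.168.20.2:tank/replica \
    --name vm100-dr --maxsnap 7

# List configured sync jobs and their last-run status
pve-zsync list
```

Recovery on the standby node is manual: the replicated dataset holds the disk data, and the VM configuration must be recreated or copied over before starting the guest there.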
Variables That Determine Which Approach Fits
Several factors meaningfully change which connection method makes sense:
- Network infrastructure — do you have 10 GbE links, dedicated NICs, or just a shared 1 GbE switch?
- Storage backend — ZFS, LVM-thin, directory, or Ceph each have different replication and sharing capabilities
- Uptime requirements — production workloads needing HA behave very differently from lab or dev environments
- Physical location — same rack, same building, or geographically separate sites all change latency and replication viability
- Technical familiarity — Ceph is powerful but adds significant operational complexity; NFS is simpler but introduces a single point of failure
A two-node lab cluster with ZFS replication and a QDevice on a spare VM is a very different beast from a two-node production cluster with Ceph and 25 GbE interconnects — and both are valid depending on what you're building.