Web interface for migrating VMs from VMware vCenter to Proxmox VE with Pure Storage vVol integration.
THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED.
This tool performs destructive operations including:
- Shutting down VMware VMs
- Cloning and deleting storage volumes on Pure Storage arrays
- Creating and modifying VMs on Proxmox clusters
- Modifying LVM and udev configurations on Proxmox nodes
Before using this tool:
- Test thoroughly in a non-production environment first
- Ensure you have valid backups of all VMs and data before migrating
- Verify your Pure Storage snapshots and replication are current
- Understand the migration workflow and potential failure points
The authors and contributors are not responsible for any data loss, downtime, or other damages resulting from the use of this software. Use at your own risk.
- List VMware VMs with vVol storage
- One-click migration to Proxmox
- Automatic vVol cloning on Pure Storage
- Live disk migration to Proxmox datastore using QMP blockdev-mirror
- Dynamic LVM and kpartx filtering to prevent device conflicts
- Automatic cleanup of temporary LUNs
- Real-time job status and logs
- MAC address preservation from source VM
- EFI/BIOS firmware detection and preservation
┌─────────────────────────────────────────────────────────────────┐
│ Web Interface (React port 5173) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────────┐ │
│ │ VM Dashboard │ │ Migrate │ │ Job Status/Logs │ │
│ │ (vVol VMs) │ │ Wizard │ │ │ │
│ └──────────────┘ └──────────────┘ └──────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ FastAPI Backend (Port 8000) │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────────┐ │
│ │ vCenter │ │ Pure │ │ Proxmox │ │
│ │ Client │ │ Client │ │ Client │ │
│ └──────────────┘ └──────────────┘ └──────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────────┼───────────────────┐
▼ ▼ ▼
┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
│ VMware vCenter │ │ Pure FlashArray │ │ Proxmox Cluster │
│ │ │ │ │ (via SSH) │
└──────────────────┘ └──────────────────┘ └──────────────────┘
- User selects VM from vCenter list (only VMs with vVol storage shown)
- User configures target Proxmox node/network/storage
- Shutdown VMware VM gracefully via vCenter API
- Clone vVols on Pure Storage array (instant copy)
- Add LVM/kpartx filters on all Proxmox nodes to prevent device conflicts
- Attach cloned LUNs to Proxmox host group
- SCSI rescan on all Proxmox nodes (installs sg3-utils if needed)
- Create Proxmox VM with raw /dev/mapper LUN disks
- Power on Proxmox VM
- Live disk migration using QMP blockdev-mirror to Proxmox storage (LVM)
- Cleanup - remove multipath devices, detach from host group, delete volumes
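The sequence above can be sketched as a simple ordered pipeline. This is a minimal sketch, not the tool's actual orchestration code; the step handler functions are hypothetical placeholders, and only the step order mirrors the workflow described above.

```python
# Minimal sketch of the migration pipeline: each step is a named callable
# run in order; on failure we stop and report which step broke.
# All step functions here are hypothetical placeholders.

from typing import Callable, List, Tuple


def run_pipeline(steps: List[Tuple[str, Callable[[], None]]]) -> Tuple[bool, str]:
    """Run steps in order; return (success, last_step_reached)."""
    last = ""
    for name, fn in steps:
        last = name
        try:
            fn()
        except Exception:
            return False, last
    return True, last


# Step order mirrors the workflow described above.
STEP_NAMES = [
    "shutdown_source_vm",
    "clone_vvols",
    "install_lvm_kpartx_filters",
    "attach_luns_to_host_group",
    "rescan_scsi",
    "create_proxmox_vm",
    "power_on_vm",
    "live_mirror_disks",
    "cleanup_temporary_luns",
]

if __name__ == "__main__":
    ok, last = run_pipeline([(n, lambda: None) for n in STEP_NAMES])
    print(ok, last)
```

A real implementation would additionally record per-step status to the job log so a failed migration can be restarted or cleaned up from the step that broke.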
During migration, the source VM's disks may contain LVM volume groups or partitions. To prevent the Proxmox host from automatically activating these, the migration process:
- Creates an LVM filter at `/etc/lvm/lvm.conf.d/pure-migrator.conf` with a `global_filter` to reject specific WWNs
- Creates a udev rule at `/etc/udev/rules.d/59-migrator-skip-kpartx.rules` that sets `DM_SUBSYSTEM_UDEV_FLAG1=1` for migration LUNs, telling kpartx to skip creating partition mappings
- Removes both filters after successful cleanup
This ensures the multipath device has an open count of 1 (only QEMU) during migration, allowing clean removal after the live mirror completes.
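As a sketch of what those two files might contain, a helper like the following could generate the filter and rule text for a set of WWNs. This is hypothetical, not the tool's actual `lvm_filter.sh`; the exact `global_filter` patterns and udev match keys are assumptions based on the file names and flag described above.

```python
# Sketch: build the contents of the LVM global_filter drop-in and the
# kpartx-skip udev rule for a set of migration LUN WWNs.
# Exact patterns and match keys are assumptions, not the tool's real output.

from typing import List


def lvm_filter_conf(wwns: List[str]) -> str:
    """global_filter that rejects each migration LUN's multipath device."""
    rejects = ", ".join(f'"r|/dev/mapper/{w}.*|"' for w in wwns)
    return f"devices {{\n    global_filter = [ {rejects}, \"a|.*|\" ]\n}}\n"


def kpartx_skip_rule(wwns: List[str]) -> str:
    """udev rule setting DM_SUBSYSTEM_UDEV_FLAG1=1 so kpartx skips the LUN."""
    lines = [
        f'ENV{{DM_NAME}}=="{w}", ENV{{DM_SUBSYSTEM_UDEV_FLAG1}}="1"'
        for w in wwns
    ]
    return "\n".join(lines) + "\n"
```

The trailing `"a|.*|"` accept-all keeps the host's own volumes visible while only the named migration WWNs are rejected.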
# On the application server (not a Proxmox node)
sudo ./install.sh

This installs:
- Backend to `/opt/migrator/backend/`
- Frontend to `/opt/migrator/frontend/`
- Scripts to `/opt/migrator/scripts/`
- Systemd services: `migrator-backend`, `migrator-frontend`
This section is only relevant for manual installation or development. The install.sh script will install the backend and frontend for you.
cd backend
cp .env.example .env
# Edit .env with your credentials
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

cd frontend
npm install
npm run dev -- --host 0.0.0.0 --port 5173

This section is only relevant for manual configuration. The web interface allows you to configure all of these settings.
Copy backend/.env.example to backend/.env and configure:
- `VCENTER_HOST` - vCenter hostname or IP
- `VCENTER_USER` - vCenter username (e.g., [email protected])
- `VCENTER_PASSWORD` - vCenter password

- `PURE_HOST` - FlashArray management IP
- `PURE_API_TOKEN` - API token for authentication
- `PURE_PROXMOX_HOST_GROUP` - Host group name containing Proxmox nodes

- `PROXMOX_HOST` - Proxmox cluster gateway node (e.g., proxmox-01.example.com)
- `PROXMOX_USER` - Proxmox API user (e.g., root@pam)
- `PROXMOX_PASSWORD` - Proxmox password
- `PROXMOX_SSH_USER` - SSH user for remote commands (typically root)
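A filled-in `.env` might look like the following; every value here is a placeholder, not a working credential:

```ini
VCENTER_HOST=vcenter.example.com
VCENTER_USER=admin@example.local
VCENTER_PASSWORD=changeme

PURE_HOST=192.0.2.10
PURE_API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
PURE_PROXMOX_HOST_GROUP=proxmox-cluster

PROXMOX_HOST=proxmox-01.example.com
PROXMOX_USER=root@pam
PROXMOX_PASSWORD=changeme
PROXMOX_SSH_USER=root
```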
/opt/migrator/
├── backend/ # FastAPI backend application
│ ├── app/
│ │ ├── clients/ # vCenter, Pure, Proxmox API clients
│ │ ├── services/ # Migration orchestration
│ │ └── main.py
│ └── requirements.txt
├── frontend/ # React frontend
│ └── dist/ # Production build
├── scripts/ # Utility scripts deployed to Proxmox nodes
│ ├── lvm_filter.sh # Dynamic LVM filter and kpartx management
│ └── live_mirror.py # QMP-based live disk migration
└── migrator.db # SQLite database
- `POST /api/auth/token` - Login
- `POST /api/auth/register` - Initial user setup
- `GET /api/vcenter/vms` - List VMs with vVols
- `GET /api/proxmox/nodes` - List Proxmox nodes
- `GET /api/proxmox/networks` - List available networks/bridges
- `GET /api/proxmox/storage` - List available storage
- `POST /api/migrations/` - Start migration
- `GET /api/migrations/` - List migrations
- `GET /api/migrations/{id}` - Get migration details
- `GET /api/migrations/{id}/logs` - Get migration logs
- `POST /api/migrations/{id}/cancel` - Cancel migration
- `POST /api/migrations/{id}/restart` - Restart failed migration
- `POST /api/migrations/{id}/cleanup` - Manual cleanup
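A client-side sketch of starting a migration through this API: the endpoint paths come from the list above, but the payload field names are assumptions for illustration, so check them against the backend's actual request schema before use.

```python
# Sketch: build a POST /api/migrations/ request for the backend API.
# Endpoint path is from the README; the payload field names are
# assumptions and should be checked against the real request schema.

import json
import urllib.request

BASE = "http://localhost:8000"


def build_migration_request(token: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) the POST /api/migrations/ request."""
    return urllib.request.Request(
        f"{BASE}/api/migrations/",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Hypothetical payload: these field names are illustrative only.
    req = build_migration_request("TOKEN", {
        "vm_name": "app-server-01",
        "target_node": "proxmox-01",
        "target_storage": "local-lvm",
        "target_bridge": "vmbr0",
    })
    # urllib.request.urlopen(req)  # would actually submit the migration
    print(req.full_url)
```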
- Python 3.10+
- Node.js 18+
- VMware vCenter 8.x or 9.x
- Pure Storage FlashArray (with py-pure-client SDK)
- Proxmox VE 8.x
- SSH access from app server to Proxmox nodes
- sg3-utils on Proxmox nodes (auto-installed if missing)
If cleanup fails with "device has open count: X", check:
- kpartx partitions: `ls /dev/mapper/ | grep <wwn>` - should show no `-part1`, `-part2` suffixes
- LVM VGs activated: `vgs` - should not show VGs from the source VM
- Udev rule present: `cat /etc/udev/rules.d/59-migrator-skip-kpartx.rules`
Manual cleanup:
# Remove partition mappings
kpartx -d /dev/mapper/<wwn>
# Deactivate any LVM VGs
vgchange -an <vg-name>
# Remove multipath device
multipath -f <wwn>

If rescan-scsi-bus.sh is not found, the backend will attempt to install sg3-utils automatically. If that fails, manually install:

apt-get install -y sg3-utils

Check that the QMP socket exists:

ls -la /run/qemu-server/<vmid>.qmp

Verify the VM is running on the expected node:

qm status <vmid>
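The live mirror talks QMP over that socket. The following is a minimal sketch of the standard QMP handshake and a block-job status query; it assumes the stock QMP wire protocol and the Proxmox-managed socket path, and is not the tool's actual `live_mirror.py`.

```python
# Sketch: query block-job status over a QEMU QMP unix socket.
# QMP requires a qmp_capabilities handshake before other commands.

import json
import os
import socket


def qmp_cmd(execute: str, **arguments) -> bytes:
    """Serialize a QMP command as a newline-terminated JSON line."""
    msg = {"execute": execute}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg).encode() + b"\n"


def query_block_jobs(socket_path: str) -> list:
    """Connect to the QMP socket and return the active block jobs."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        f = s.makefile("rb")
        f.readline()                      # QMP greeting banner
        s.sendall(qmp_cmd("qmp_capabilities"))
        f.readline()                      # capabilities ack
        s.sendall(qmp_cmd("query-block-jobs"))
        return json.loads(f.readline())["return"]


if __name__ == "__main__":
    path = "/run/qemu-server/100.qmp"  # hypothetical VM ID 100
    if os.path.exists(path):
        for job in query_block_jobs(path):
            print(job["device"], job["offset"], "/", job["len"])
```

If the query returns jobs with `offset` well below `len`, the mirror is still copying; cleanup should not be attempted until the job has converged.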