Pure vVol VMware to Proxmox Migrator

Web interface for migrating VMs from VMware vCenter to Proxmox VE with Pure Storage vVol integration.

⚠️ Use At Your Own Risk

THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED.

This tool performs destructive operations, including:

  • Shutting down VMware VMs
  • Cloning and deleting storage volumes on Pure Storage arrays
  • Creating and modifying VMs on Proxmox clusters
  • Modifying LVM and udev configurations on Proxmox nodes

Before using this tool:

  • Test thoroughly in a non-production environment first
  • Ensure you have valid backups of all VMs and data before migrating
  • Verify your Pure Storage snapshots and replication are current
  • Understand the migration workflow and potential failure points

The authors and contributors are not responsible for any data loss, downtime, or other damages resulting from the use of this software. Use at your own risk.

Features

  • List VMware VMs with vVol storage
  • One-click migration to Proxmox
  • Automatic vVol cloning on Pure Storage
  • Live disk migration to Proxmox datastore using QMP blockdev-mirror
  • Dynamic LVM and kpartx filtering to prevent device conflicts
  • Automatic cleanup of temporary LUNs
  • Real-time job status and logs
  • MAC address preservation from source VM
  • EFI/BIOS firmware detection and preservation

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      Web Interface (React port 5173)            │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────────────┐   │
│  │ VM Dashboard │  │   Migrate    │  │   Job Status/Logs    │   │
│  │  (vVol VMs)  │  │    Wizard    │  │                      │   │
│  └──────────────┘  └──────────────┘  └──────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                   FastAPI Backend (Port 8000)                   │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────────────┐   │
│  │   vCenter    │  │    Pure      │  │      Proxmox         │   │
│  │   Client     │  │   Client     │  │      Client          │   │
│  └──────────────┘  └──────────────┘  └──────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘
                              │
          ┌───────────────────┼───────────────────┐
          ▼                   ▼                   ▼
┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
│   VMware vCenter │ │  Pure FlashArray │ │  Proxmox Cluster │
│                  │ │                  │ │  (via SSH)       │
└──────────────────┘ └──────────────────┘ └──────────────────┘

Migration Workflow

  1. User selects VM from vCenter list (only VMs with vVol storage shown)
  2. User configures target Proxmox node/network/storage
  3. Shut down the VMware VM gracefully via the vCenter API
  4. Clone vVols on Pure Storage array (instant copy)
  5. Add LVM/kpartx filters on all Proxmox nodes to prevent device conflicts
  6. Attach cloned LUNs to Proxmox host group
  7. SCSI rescan on all Proxmox nodes (installs sg3-utils if needed)
  8. Create Proxmox VM with raw /dev/mapper LUN disks
  9. Power on Proxmox VM
  10. Live disk migration to Proxmox storage (LVM) using QMP blockdev-mirror (see the sketch after this list)
  11. Cleanup: remove multipath devices, detach the LUNs from the host group, and delete the temporary volumes
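
As an illustration of step 10, the QMP exchange behind the live mirror might look like the sketch below. The VMID, block node names, and target LV path are all hypothetical; the shipped live_mirror.py drives this programmatically and also handles job completion.

# Speak QMP to the VM's monitor socket (VMID 100 is a placeholder).
# blockdev-add attaches the target LV as a new block node, and
# blockdev-mirror copies the running disk onto it; once the job reports
# "ready", block-job-complete pivots the VM onto the new disk.
socat -t 30 - UNIX-CONNECT:/run/qemu-server/100.qmp <<'EOF'
{"execute": "qmp_capabilities"}
{"execute": "blockdev-add", "arguments": {"driver": "host_device", "node-name": "mirror-target", "filename": "/dev/pve/vm-100-disk-0"}}
{"execute": "blockdev-mirror", "arguments": {"job-id": "mirror0", "device": "drive-scsi0", "target": "mirror-target", "sync": "full"}}
EOF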

LVM Filter and kpartx Blocking

During migration, the source VM's disks may contain LVM volume groups or partitions. To prevent the Proxmox host from automatically activating these, the migration process:

  1. Creates an LVM filter at /etc/lvm/lvm.conf.d/pure-migrator.conf with a global_filter that rejects the specific WWNs
  2. Creates a udev rule at /etc/udev/rules.d/59-migrator-skip-kpartx.rules that sets DM_SUBSYSTEM_UDEV_FLAG1=1 for the migration LUNs, telling kpartx to skip creating partition mappings
  3. Removes both files after a successful cleanup

This ensures the multipath device has an open count of 1 (only QEMU) during migration, allowing clean removal after the live mirror completes.
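
To make the two files concrete, their contents might look like the following sketch; the WWN shown is a placeholder, and the exact patterns the tool writes may differ.

# /etc/lvm/lvm.conf.d/pure-migrator.conf (illustrative)
devices {
    # Reject the migration LUNs by WWN; all other devices remain visible
    global_filter = [ "r|/dev/mapper/3624a9370deadbeef.*|" ]
}

# /etc/udev/rules.d/59-migrator-skip-kpartx.rules (illustrative)
# The low rule number makes this run before the packaged kpartx rules,
# which skip any multipath device with DM_SUBSYSTEM_UDEV_FLAG1 set
ACTION=="add|change", ENV{DM_UUID}=="mpath-3624a9370deadbeef*", ENV{DM_SUBSYSTEM_UDEV_FLAG1}="1"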

Installation

Quick Install (Production)

# On the application server (not a Proxmox node)
sudo ./install.sh

This installs:

  • Backend to /opt/migrator/backend/
  • Frontend to /opt/migrator/frontend/
  • Scripts to /opt/migrator/scripts/
  • Systemd services: migrator-backend, migrator-frontend
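
Once installed, the two services are managed with systemd as usual, for example:

# Check status and follow the backend's logs
systemctl status migrator-backend migrator-frontend
journalctl -u migrator-backend -f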

Development Setup

This section is only relevant for manual installation or development. The install.sh script will install the backend and frontend for you.

Backend

cd backend
cp .env.example .env
# Edit .env with your credentials

pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Frontend

cd frontend
npm install
npm run dev -- --host 0.0.0.0 --port 5173

Configuration

This section is only relevant for manual configuration. The web interface allows you to configure all of these settings.

Copy backend/.env.example to backend/.env and configure:

vCenter

  • VCENTER_HOST - vCenter hostname or IP
  • VCENTER_USER - vCenter username (e.g., administrator@vsphere.local)
  • VCENTER_PASSWORD - vCenter password

Pure Storage

  • PURE_HOST - FlashArray management IP
  • PURE_API_TOKEN - API token for authentication
  • PURE_PROXMOX_HOST_GROUP - Host group name containing Proxmox nodes

Proxmox

  • PROXMOX_HOST - Proxmox cluster gateway node (e.g., proxmox-01.example.com)
  • PROXMOX_USER - Proxmox API user (e.g., root@pam)
  • PROXMOX_PASSWORD - Proxmox password
  • PROXMOX_SSH_USER - SSH user for remote commands (typically root)
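
Taken together, a minimal backend/.env might look like this (all values are placeholders):

VCENTER_HOST=vcenter.example.com
VCENTER_USER=administrator@vsphere.local
VCENTER_PASSWORD=changeme
PURE_HOST=flasharray.example.com
PURE_API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
PURE_PROXMOX_HOST_GROUP=proxmox-hosts
PROXMOX_HOST=proxmox-01.example.com
PROXMOX_USER=root@pam
PROXMOX_PASSWORD=changeme
PROXMOX_SSH_USER=root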

Directory Structure

/opt/migrator/
├── backend/                 # FastAPI backend application
│   ├── app/
│   │   ├── clients/         # vCenter, Pure, Proxmox API clients
│   │   ├── services/        # Migration orchestration
│   │   └── main.py
│   └── requirements.txt
├── frontend/                # React frontend
│   └── dist/               # Production build
├── scripts/                # Utility scripts deployed to Proxmox nodes
│   ├── lvm_filter.sh       # Dynamic LVM filter and kpartx management
│   └── live_mirror.py      # QMP-based live disk migration
└── migrator.db             # SQLite database

API Endpoints

  • POST /api/auth/token - Login
  • POST /api/auth/register - Initial user setup
  • GET /api/vcenter/vms - List VMs with vVols
  • GET /api/proxmox/nodes - List Proxmox nodes
  • GET /api/proxmox/networks - List available networks/bridges
  • GET /api/proxmox/storage - List available storage
  • POST /api/migrations/ - Start migration
  • GET /api/migrations/ - List migrations
  • GET /api/migrations/{id} - Get migration details
  • GET /api/migrations/{id}/logs - Get migration logs
  • POST /api/migrations/{id}/cancel - Cancel migration
  • POST /api/migrations/{id}/restart - Restart failed migration
  • POST /api/migrations/{id}/cleanup - Manual cleanup
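
The endpoints can also be driven from the command line. In the sketch below, the login form fields and the token response shape are assumptions based on common FastAPI conventions, not confirmed by this project:

# Obtain a bearer token (field names assumed; adjust to the real schema)
TOKEN=$(curl -s -X POST http://localhost:8000/api/auth/token \
  -d 'username=admin' -d 'password=changeme' | jq -r .access_token)

# List vVol-backed VMs visible in vCenter
curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/vcenter/vms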

Requirements

  • Python 3.10+
  • Node.js 18+
  • VMware vCenter 8.x or 9.x
  • Pure Storage FlashArray (with py-pure-client SDK)
  • Proxmox VE 8.x
  • SSH access from app server to Proxmox nodes
  • sg3-utils on Proxmox nodes (auto-installed if missing)

Troubleshooting

Device still in use after migration

If cleanup fails with "device has open count: X", check:

  1. kpartx partitions: ls /dev/mapper/ | grep <wwn> should show no -part1 or -part2 suffixes
  2. Activated LVM VGs: vgs should not list VGs from the source VM
  3. Udev rule present: cat /etc/udev/rules.d/59-migrator-skip-kpartx.rules

Manual cleanup:

# Remove partition mappings
kpartx -d /dev/mapper/<wwn>
# Deactivate any LVM VGs
vgchange -an <vg-name>
# Remove multipath device
multipath -f <wwn>

SCSI rescan fails

If rescan-scsi-bus.sh is not found, the backend attempts to install sg3-utils automatically. If that fails, install it manually:

apt-get install -y sg3-utils
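
After installing, the rescan can be rerun by hand on the affected node:

# Scan all SCSI hosts for newly mapped LUNs
rescan-scsi-bus.sh -a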

Live mirror not starting

Check that the QMP socket exists:

ls -la /run/qemu-server/<vmid>.qmp

Verify the VM is running on the expected node:

qm status <vmid>
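
If both checks pass, you can also confirm the socket answers QMP. VMID 100 is a placeholder, and this may fail if another client is currently holding the single-client socket:

# Expect the QMP greeting followed by an empty "return" object
echo '{"execute": "qmp_capabilities"}' | socat -t 2 - UNIX-CONNECT:/run/qemu-server/100.qmp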
