
virt-v2v Windows VM migration prerequisites

This article explains the prerequisites for migrating a Windows Server 2019 VM from VMware cloud to OpenStack. These are additional requirements that need to be set up before performing a Windows VM migration. If you skip them, you may see the following error during a Windows VM migration.

virt-v2v: error: One of rhsrvany.exe or pvvxsvc.exe is missing in /usr/share/virt-tools.
One of them is required in order to install Windows firstboot scripts.
You can get one by building rhsrvany (https://github.com/rwmjones/rhsrvany)

Prerequisites

Environment

Refer to the details below, which are used throughout this documentation. The IP can differ in your environment.

  • virt-v2v Virtual appliance - 192.168.11.11

Steps

Perform all of the steps below on the virtual appliance to configure it for Windows VM migration.

Download virtio iso image

Download the virtio ISO image on the virtual appliance. The required virtio image version can differ per Windows OS flavor, so check the virtio compatibility matrix before downloading. Here I am using version 0.1.248, which is compatible with Windows Server 2019.

wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.248-1/virtio-win-0.1.248.iso

Mount downloaded image to /mnt directory

Mount the ISO image to the /mnt directory.

mount -o loop virtio-win-0.1.248.iso /mnt

Create virtio-win directory

Create a virtio-win directory under /usr/share.

mkdir /usr/share/virtio-win

Copy image data

Switch to the /mnt directory, copy the data to /usr/share/virtio-win, and unmount the image.

cd /mnt
cp -rpv * /usr/share/virtio-win/
cd
umount /mnt
ls -l /usr/share/virtio-win/ 

Check that you can see the files below under /usr/share/virtio-win/.

Example output

total 749040
dr-xr-xr-x 16 root root      4096 Feb 28  2024 Balloon
dr-xr-xr-x 16 root root      4096 Feb 28  2024 NetKVM
dr-xr-xr-x 13 root root      4096 Feb 28  2024 amd64
dr-xr-xr-x  2 root root      4096 Feb 28  2024 cert
dr-xr-xr-x  2 root root      4096 Feb 28  2024 data
dr-xr-xr-x 11 root root      4096 Feb 28  2024 fwcfg
dr-xr-xr-x  2 root root      4096 Feb 28  2024 guest-agent
dr-xr-xr-x  6 root root      4096 Feb 28  2024 i386
dr-xr-xr-x 14 root root      4096 Feb 28  2024 pvpanic
dr-xr-xr-x  7 root root      4096 Feb 28  2024 qemufwcfg
dr-xr-xr-x 14 root root      4096 Feb 28  2024 qemupciserial
dr-xr-xr-x  5 root root      4096 Feb 28  2024 qxl
dr-xr-xr-x  9 root root      4096 Feb 28  2024 qxldod
dr-xr-xr-x  5 root root      4096 Feb 28  2024 smbus
dr-xr-xr-x 11 root root      4096 Feb 28  2024 sriov
dr-xr-xr-x 11 root root      4096 Feb 28  2024 viofs
dr-xr-xr-x 11 root root      4096 Feb 28  2024 viogpudo
dr-xr-xr-x 13 root root      4096 Feb 28  2024 vioinput
dr-xr-xr-x 14 root root      4096 Feb 28  2024 viorng
dr-xr-xr-x 14 root root      4096 Feb 28  2024 vioscsi
dr-xr-xr-x 16 root root      4096 Feb 28  2024 vioserial
dr-xr-xr-x 16 root root      4096 Feb 28  2024 viostor
-r-xr-xr-x  1 root root   4539904 Feb 28  2024 virtio-win-gt-x64.msi
-r-xr-xr-x  1 root root   2573312 Feb 28  2024 virtio-win-gt-x86.msi
-r-xr-xr-x  1 root root  27447293 Feb 28  2024 virtio-win-guest-tools.exe
-r-xr-xr-x  1 root root      1598 Feb 28  2024 virtio-win_license.txt

Install additional packages

Run the commands below to install a few more packages needed for Windows migration.

apt install -y rpm2cpio cpio
wget -nd -O srvany.rpm https://kojipkgs.fedoraproject.org//packages/mingw-srvany/1.1/4.fc38/noarch/mingw32-srvany-1.1-4.fc38.noarch.rpm
rpm2cpio srvany.rpm | cpio -idmv \
  && mkdir /usr/share/virt-tools \
  && mv ./usr/i686-w64-mingw32/sys-root/mingw/bin/*exe /usr/share/virt-tools/
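
To confirm that virt-v2v can now find the firstboot helper binaries, list the new directory. With this srvany build you should see rhsrvany.exe and, depending on the package version, pvvxsvc.exe:

ls -l /usr/share/virt-tools/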

The appliance is now ready to perform Windows VM migration. Switch back to the doc VMware to OpenStack Migration using virt-v2v for further steps to perform the actual disk migration.

Use nbdkit vddk plugins for migration

This document describes how to use the nbdkit vddk plugins for migration from VMware to OpenStack. The vddk plugins make data migration considerably faster.

Prerequisites

  • Ports 902 and 443 must be reachable from the v2v appliance to the VMware vCenter and ESXi hosts (a quick check is sketched after this list).
  • DNS must resolve the VMware FQDN inside the v2v virtual appliance.
  • The nbdkit vddk plugins must be enabled on the virtual appliance.
  • The vddk plugins must be installed and configured on the virtual appliance. Refer to Enable nbdkit vddk plugins to configure them.
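
A quick way to verify the first two prerequisites from the appliance, assuming nc (netcat) is available; the hostname below is the example FQDN from this doc, so substitute your own:

nc -zv demo-vmware-cloud.com 443
nc -zv demo-vmware-cloud.com 902
getent hosts demo-vmware-cloud.com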

Environment

Refer to the details below, which are used throughout this documentation. These IPs and FQDNs can differ in your environment.

  • VMware Cloud - demo-vmware-cloud.com
  • OpenStack Cloud keystone public endpoint - 192.168.10.11
  • virt-v2v Virtual appliance - 192.168.11.11

Steps

Obtain SSL thumbprint

You need to obtain the SSL thumbprint for the VMware cloud URL. Below is the syntax to get the thumbprint.

openssl s_client -connect vcenter.example.com:443 </dev/null | openssl x509 -in /dev/stdin -fingerprint -sha1 -noout

Execute the command below from the CLI to obtain the SHA-1 thumbprint.

On v2v appliance

openssl s_client -connect demo-vmware-cloud.com:443</dev/null | openssl x509 -in /dev/stdin -fingerprint -sha1 -noout

You will see output like the below once you execute the command.

Example output

depth=0 CN = demo-vmware-cloud.com, C = US, ST = California, L = Palo Alto, O = VMware, OU = VMware Engineering
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = demo-vmware-cloud.com, C = US, ST = California, L = Palo Alto, O = VMware, OU = VMware Engineering
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN = demo-vmware-cloud.com, C = US, ST = California, L = Palo Alto, O = VMware, OU = VMware Engineering
verify return:1
DONE
sha1 Fingerprint=34:64:92:85:F5:6G:k3:C4:7E:8Y:2L:E2:F5:65:43:76:2A:A6:4H:3G
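
Only the hex string after "Fingerprint=" is needed for the vddk-thumbprint option later. If you want just that value, one way is to trim the same openssl output, for example:

openssl s_client -connect demo-vmware-cloud.com:443 </dev/null 2>/dev/null | openssl x509 -fingerprint -sha1 -noout | cut -d= -f2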

Migrate virtual machine

Below is the command syntax to migrate the virtual machine using vddk plugins.

virt-v2v -ic 'vpx://user@vCenter.example.com/datacenter/clustername/hypervisor_fqdn?no_verify=1' \
            -it vddk -io vddk-libdir=path_to_vmware_vddk_library \
            -io vddk-thumbprint=xx:xx:xx:xx:xx:xx.... guestname \
            -o openstack -ip password_file_for_user -oo verify-server-certificate=false \
            -oo server-id=virtual_appliance_UUID
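
Two authentication details are worth noting before running this. The -ip option points at a plain-text file containing the vCenter user's password, and -o openstack authenticates the same way the openstack CLI does, so the usual OpenStack credentials must be present in the appliance's environment. A minimal sketch, using the password.txt file name from the command below and placeholder credential values (source your own openrc instead):

# Password file read by -ip; restrict its permissions
echo -n 'vcenter_password' > password.txt
chmod 600 password.txt

# Placeholder OpenStack credentials; 192.168.10.11 is the keystone
# public endpoint from the Environment section above
export OS_AUTH_URL=http://192.168.10.11:5000/v3
export OS_USERNAME=migration-user
export OS_PASSWORD=migration-password
export OS_PROJECT_NAME=migration-project
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default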

Execute the command below to perform the migration.

On v2v appliance

virt-v2v -ic 'vpx://user-id@demo-vmware-cloud.com/DC1/DC1-Cluster-02/demo-hyp1-cloud.com?no_verify=1' \
            -it vddk -io vddk-libdir=/opt/vmware-vix-disklib-distrib/lib64 \
            -io vddk-thumbprint=34:64:92:85:F5:6G:k3:C4:7E:8Y:2L:E2:F5:65:43:76:2A:A6:4H:3G ubuntu20-mig \
            -o openstack -ip password.txt -oo verify-server-certificate=false -oo server-id='45dgftbbfddr6784fhskkei8v8483k'

You will see the following output upon successful execution.

Example output

[   0.0] Setting up the source: -i libvirt -ic vpx://user-id@demo-vmware-cloud.com/DC1/DC1-Cluster-02/demo-hyp1-cloud.com?no_verify=1 -it vddk ubuntu20-mig
[   4.5] Opening the source
[  28.0] Inspecting the source
[  46.7] Checking for sufficient free disk space in the guest
[  46.7] Converting ubuntu release 22 to run on KVM
virt-v2v: The QEMU Guest Agent will be installed for this guest at first 
boot.
virt-v2v: This guest has virtio drivers installed.
[ 210.5] Mapping filesystem data to avoid copying unused and blank areas
[ 212.1] Closing the overlay
[ 212.3] Assigning disks to buses
[ 212.3] Checking if the guest needs BIOS or UEFI to boot
virt-v2v: This guest requires UEFI on the target to boot.
[ 212.3] Setting up the destination: -o openstack -oo server-id=45dgftbbfddr6784fhskkei8v8483k
[ 228.9] Copying disk 1/1
 100% [****************************************]
[ 512.0] Creating output metadata
[ 517.4] Finishing off
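
After the run completes, the converted disk should appear as a new volume in the target OpenStack project; the volume name is typically derived from the guest name. One way to confirm, assuming OpenStack CLI credentials are sourced:

openstack volume list | grep ubuntu20-mig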

Follow the further steps mentioned in the doc VMware to OpenStack Migration using virt-v2v to create an instance on the migrated volume.

VMware to OpenStack Migration using virt-v2v

This document describes how to migrate a virtual machine from VMware to OpenStack using virt-v2v vpx. You should use the vddk plugins to speed up this process; the link is mentioned in the doc.

I used an OpenStack volume on the destination cloud; however, you can choose glance or local output based on your use case.

Enable nbdkit vddk plugins

This document describes how to build and install the vddk plugins for nbdkit, which are required to migrate a virtual machine from VMware to OpenStack using vpx. Keep in mind that this requires a VMware proprietary library that you must download yourself.

Deploying Restic on OpenStack Flex with User Data

Restic is an open-source backup tool that focuses on providing secure, efficient, and easy-to-use backups. This blog post will show you how to automatically install and configure Restic on a fresh OpenStack VM using user data (cloud-init).

Here’s what you’ll accomplish:

  1. Install Restic to /usr/local/bin.
  2. Configure a systemd service and timer that runs every 12 hours to back up the /home directory.
  3. Store backups in Swift using application credentials.

Getting Started with OpenTofu and OpenStack Flex

OpenTofu is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.

We will demonstrate using OpenTofu to build a three-node environment. The environment will consist of a bastion server to connect in via SSH, a webserver to serve web content, and a database server. An OpenStack account with appropriate permissions will be needed to build the environment.

Installing aaPanel on a Flex Instance

This documentation provides a comprehensive guide to installing the free version of aaPanel on an Ubuntu 24 instance running on Rackspace Flex Cloud. aaPanel is a lightweight, user-friendly control panel that simplifies server and website management by offering a graphical interface for handling web services, databases, and security configurations. There is also a paid version of aaPanel which you can purchase through them and install.

Create Octavia Load Balancers dynamically with Kubernetes and the OpenStack Cloud Controller Manager


Load Balancers are essential in Kubernetes for exposing services to users in a cloud native way by distributing network traffic across multiple nodes, ensuring high availability, fault tolerance, and optimal performance for applications.

By integrating with OpenStack’s Load Balancer as a Service (LBaaS) solutions like Octavia, Kubernetes can automate the creation and management of these critical resources with the use of the OpenStack Cloud Controller Manager. The controller will identify services of type LoadBalancer and will automagically create cloud load balancers on OpenStack Flex with the Kubernetes nodes as members.

Using Ansible to create resources on the Flex cloud

Ansible has a wide range of modules available to create and manage resources like OpenStack flavors, images, keypairs, networks, and routers, among others, on the Flex cloud. These modules are available in the openstack.cloud Ansible collection. In this post we will discuss creating resources on the Flex cloud using Ansible modules.

Deploy a Fully Automated Talos Cluster in Under 180 Seconds with Pulumi TypeScript


Talos is a modern operating system designed for Kubernetes, providing a secure and minimal environment for running your clusters. Deploying Talos on OpenStack Flex can be streamlined using Pulumi, an infrastructure as code tool that allows you to define cloud resources using familiar programming languages like TypeScript.

In this guide, we'll walk through setting up the necessary network infrastructure on OpenStack Flex using Pulumi and TypeScript, preparing the groundwork for running Talos.