Provided by: libguestfs-tools_1.36.13-1ubuntu3_amd64


NAME
       virt-v2v - Convert a guest to use KVM


SYNOPSIS
        virt-v2v -ic vpx:// vmware_guest

        virt-v2v -ic vpx:// vmware_guest \
          -o rhv -os rhv.nfs:/export_domain --network ovirtmgmt

        virt-v2v -i libvirtxml guest-domain.xml -o local -os /var/tmp

        virt-v2v -i disk disk.img -o local -os /var/tmp

        virt-v2v -i disk disk.img -o glance

        virt-v2v -ic qemu:///system qemu_guest --in-place


DESCRIPTION
       Virt-v2v converts guests from a foreign hypervisor to run on KVM.  It can read Linux and
       Windows guests running on VMware, Xen, Hyper-V and some other hypervisors, and convert
       them to KVM managed by libvirt, OpenStack, oVirt, Red Hat Virtualization (RHV) or several
       other targets.

       There is also a companion front-end called virt-p2v(1) which comes as an ISO, CD or PXE
       image that can be booted on physical machines to virtualize those machines (physical to
       virtual, or p2v).

       This manual page documents the rewritten virt-v2v included in libguestfs ≥ 1.28.


                                 ┌────────────┐  ┌─────────▶ -o null
        -i disk ────────────┐    │            │ ─┘┌───────▶ -o local
        -i ova  ──────────┐ └──▶ │ virt-v2v   │ ──┘┌───────▶ -o qemu
                          └────▶ │ conversion │ ───┘┌────────────┐
        VMware─▶┌────────────┐   │ server     │ ────▶ -o libvirt │─▶ KVM
        Xen ───▶│ -i libvirt ──▶ │            │     │  (default) │
        ... ───▶│  (default) │   │            │ ──┐ └────────────┘
                └────────────┘   │            │ ─┐└──────▶ -o glance
        -i libvirtxml ─────────▶ │            │ ┐└─────────▶ -o rhv
                                 └────────────┘ └──────────▶ -o vdsm

       Virt-v2v has a number of possible input and output modes, selected using the -i and -o
       options.  Only one input and output mode can be selected for each run of virt-v2v.

       -i disk is used for reading from local disk images (mainly for testing).

       -i libvirt is used for reading from any libvirt source.  Since libvirt can connect to many
       different hypervisors, it is used for reading guests from VMware, RHEL 5 Xen and more.
       The -ic option selects the precise libvirt source.

       -i libvirtxml is used to read from libvirt XML files.  This is the method used by
       virt-p2v(1) behind the scenes.

       -i ova is used for reading from a VMware ova source file.

       -o glance is used for writing to OpenStack Glance.

       -o libvirt is used for writing to any libvirt target.  Libvirt can connect to local or
       remote KVM hypervisors.  The -oc option selects the precise libvirt target.

       -o local is used to write to a local disk image with a local libvirt configuration file
       (mainly for testing).

       -o qemu writes to a local disk image with a shell script for booting the guest directly in
       qemu (mainly for testing).

       -o rhv is used to write to a RHV / oVirt target.  -o vdsm is only used when virt-v2v runs
       under VDSM control.

       --in-place instructs virt-v2v to customize the guest OS in the input virtual machine,
       instead of creating a new VM in the target hypervisor.

   Convert from VMware vCenter server to local libvirt
       You have a VMware vCenter server called "", a datacenter called
       "Datacenter", and an ESXi hypervisor called "esxi".  You want to convert a guest called
       "vmware_guest" to run locally under libvirt.

        virt-v2v -ic vpx:// vmware_guest

       In this case you will most likely have to run virt-v2v as "root", since it needs to talk
       to the system libvirt daemon and copy the guest disks to /var/lib/libvirt/images.

       For more information see "INPUT FROM VMWARE VCENTER SERVER" below.

   Convert from VMware to RHV/oVirt
       This is the same as the previous example, except you want to send the guest to a RHV-M
       Export Storage Domain which is located remotely (over NFS) at "rhv.nfs:/export_domain".
       If you are unclear about the location of the Export Storage Domain you should check the
       settings on your RHV-M management console.  Guest network interface(s) are connected to
       the target network called "ovirtmgmt".

        virt-v2v -ic vpx:// vmware_guest \
          -o rhv -os rhv.nfs:/export_domain --network ovirtmgmt

       In this case the host running virt-v2v acts as a conversion server.

       Note that after conversion, the guest will appear in the RHV-M Export Storage Domain, from
       where you will need to import it using the RHV-M user interface.  (See "OUTPUT TO RHV").

   Convert disk image to OpenStack glance
       Given a disk image from another hypervisor that you want to convert to run on OpenStack
       (only KVM-based OpenStack is supported), you can do:

        virt-v2v -i disk disk.img -o glance

       See "OUTPUT TO GLANCE" below.

   Convert disk image to disk image
       Given a disk image from another hypervisor that you want to convert to run on KVM, you
       have two options.  The simplest way is to try:

        virt-v2v -i disk disk.img -o local -os /var/tmp

       where virt-v2v guesses everything about the input disk.img and (in this case) writes the
       converted result to /var/tmp.

       A more complex method is to write some libvirt XML describing the input guest (if you can
       get the source hypervisor to provide you with libvirt XML, then so much the better).  You
       can then do:

        virt-v2v -i libvirtxml guest-domain.xml -o local -os /var/tmp

       Since guest-domain.xml contains the path(s) to the guest disk image(s) you do not need to
       specify the name of the disk image on the command line.
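       If you need to write that XML by hand, the sketch below shows its general shape.  It is an
       illustration only: the guest name, memory size and disk path are placeholders, and the
       authoritative details are in "MINIMAL XML FOR -i libvirtxml OPTION" below.

```shell
# Illustrative only: create a minimal guest-domain.xml for -i libvirtxml.
# Name, memory (KiB), vCPU count and the disk path are all placeholders.
cat > guest-domain.xml <<'EOF'
<domain type='kvm'>
  <name>guest-domain</name>
  <memory>1048576</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/path/to/disk.img'/>
      <target dev='hda' bus='ide'/>
    </disk>
  </devices>
</domain>
EOF
```

       The <source file=...> element is what virt-v2v uses to locate the input disk.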

       To convert a local disk image and immediately boot it in local qemu, do:

        virt-v2v -i disk disk.img -o qemu -os /var/tmp --qemu-boot


   Hypervisors (Input)
       VMware ESXi
            Must be managed by VMware vCenter ≥ 5.0.  Unmanaged, direct input from ESXi is not
            supported.

       OVA exported from VMware
           OVAs from other hypervisors will not work.

       RHEL 5 Xen
       SUSE Xen
       Citrix Xen
           Citrix Xen has not been recently tested.

        Hyper-V
            Not recently tested.  Requires that you export the disk or use virt-p2v(1) on
            Hyper-V.

       Direct from disk images
           Only disk images exported from supported hypervisors, and using container formats
           supported by qemu.

       Physical machines
           Using the virt-p2v(1) tool.

   Hypervisors (Output)
       QEMU and KVM only.

   Virtualization management systems (Output)
       OpenStack Glance
       Red Hat Virtualization (RHV) 4.1 and up
       Local libvirt
           And hence virsh(1), virt-manager(1), and similar tools.

       Local disk

    Guests
        Red Hat Enterprise Linux 3, 4, 5, 6, 7
       CentOS 3, 4, 5, 6, 7
       Scientific Linux 3, 4, 5, 6, 7
       Oracle Linux
       SLES 10 and up
       OpenSUSE 10 and up
       Debian 6 and up
       Ubuntu 10.04, 12.04, 14.04, 16.04, and up
       Windows XP to Windows 10 / Windows Server 2016
            We use Windows internal version numbers.  Currently NT 5.2 to NT 6.3 are supported.

           See "WINDOWS" below for additional notes on converting Windows guests.

   Guest firmware
       BIOS or UEFI for all guest types (but see "UEFI" below).



OPTIONS
        -b ...
       --bridge ...
           See --network below.

        --colors
        --colours
            Use ANSI colour sequences to colourize messages.  This is the default when the output
            is a tty.  If the output of the program is redirected to a file, ANSI colour sequences
            are disabled unless you use this option.

        --compressed
            Write a compressed output file.  This is only allowed if the output format is qcow2
            (see -of below), and is equivalent to the -c option of qemu-img(1).

       --dcpath Folder/Datacenter
           NB: You don't need to use this parameter if you have libvirt ≥ 1.2.20.

           For VMware vCenter, override the "dcPath=..." parameter used to select the datacenter.
           Virt-v2v can usually calculate this from the "vpx://" URI, but if it gets it wrong,
           then you can override it using this setting.  Go to your vCenter web folder interface,
           eg.  "" (without a trailing slash), and examine the
           "dcPath=" parameter in the URLs that appear on this page.

        --debug-overlays
            Save the overlay file(s) created during conversion.  This option is only used for
            debugging virt-v2v and may be removed in a future version.

       -i disk
           Set the input method to disk.

           In this mode you can read a virtual machine disk image with no metadata.  virt-v2v
           tries to guess the best default metadata.  This is usually adequate but you can get
           finer control (eg. of memory and vCPUs) by using -i libvirtxml instead.  Only guests
           that use a single disk can be imported this way.

       -i libvirt
           Set the input method to libvirt.  This is the default.

           In this mode you have to specify a libvirt guest name or UUID on the command line.
           You may also specify a libvirt connection URI (see -ic).

       -i libvirtxml
           Set the input method to libvirtxml.

           In this mode you have to pass a libvirt XML file on the command line.  This file is
           read in order to get metadata about the source guest (such as its name, amount of
           memory), and also to locate the input disks.  See "MINIMAL XML FOR -i libvirtxml
           OPTION" below.

       -i local
           This is the same as -i disk.

       -i ova
           Set the input method to ova.

            In this mode you can read a VMware ova file.  Virt-v2v reads the ova manifest file,
            checks the vmdk volumes for validity (checksums), analyzes the ovf file, and then
            converts the guest.  See "INPUT FROM VMWARE OVA" below.
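            The checksum test can be pictured with ordinary tools.  All file names and contents
            below are stand-ins: a real ova is a tar archive whose .mf manifest records a
            checksum for each member file.

```shell
# Stand-in for an ova: a tar archive containing a "disk" whose checksum
# would be recorded in the manifest (.mf) file.
printf 'fake disk data' > disk1.vmdk
sha1sum disk1.vmdk            # a real manifest stores this digest
tar cf guest.ova disk1.vmdk
tar tf guest.ova              # list the archive members
```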

       -ic libvirtURI
           Specify a libvirt connection URI to use when reading the guest.  This is only used
           when -i libvirt.

           Only local libvirt connections, VMware vCenter connections, or RHEL 5 Xen remote
           connections can be used.  Other remote libvirt connections will not work in general.


       -if format
           For -i disk only, this specifies the format of the input disk image.  For other input
           methods you should specify the input format in the metadata.

        --in-place
            Do not create an output virtual machine in the target hypervisor.  Instead, adjust
            the guest OS in the source VM to run in the input hypervisor.

           This mode is meant for integration with other toolsets, which take the responsibility
           of converting the VM configuration, providing for rollback in case of errors,
           transforming the storage, etc.

           See "IN PLACE CONVERSION" below.

           Conflicts with all -o * options.

        --machine-readable
            This option is used to make the output more machine friendly, when it is being
            parsed by other programs.  See "MACHINE READABLE OUTPUT" below.

       -n in:out
       -n out
       --network in:out
       --network out
       -b in:out
       -b out
       --bridge in:out
       --bridge out
           Map network (or bridge) called "in" to network (or bridge) called "out".  If no "in:"
           prefix is given, all other networks (or bridges)  are mapped to "out".

           See "NETWORKS AND BRIDGES" below.

        --no-copy
            Don't copy the disks.  Instead, conversion is performed (and thrown away), and
            metadata is written, but no disks are created.  See also the discussion of -o null
            below.

           This is useful in two cases: Either you want to test if conversion is likely to
           succeed, without the long copying process.  Or you are only interested in looking at
           the metadata.

           This option is not compatible with -o libvirt since it would create a faulty guest
           (one with no disks).

           This option is not compatible with -o glance for technical reasons.

       -o disk
           This is the same as -o local.

       -o glance
           Set the output method to OpenStack Glance.  In this mode the converted guest is
           uploaded to Glance.  See "OUTPUT TO GLANCE" below.

       -o libvirt
           Set the output method to libvirt.  This is the default.

           In this mode, the converted guest is created as a libvirt guest.  You may also specify
           a libvirt connection URI (see -oc).

           See "OUTPUT TO LIBVIRT" below.

       -o local
           Set the output method to local.

           In this mode, the converted guest is written to a local directory specified by -os
           /dir (the directory must exist).  The converted guest's disks are written as:


           and a libvirt XML file is created containing guest metadata:


           where "name" is the guest name.

       -o null
           Set the output method to null.

           The guest is converted and copied (unless you also specify --no-copy), but the results
           are thrown away and no metadata is written.

       -o ovirt
           This is the same as -o rhv.

       -o qemu
           Set the output method to qemu.

           This is similar to -o local, except that a shell script is written which you can use
           to boot the guest in qemu.  The converted disks and shell script are written to the
           directory specified by -os.

           When using this output mode, you can also specify the --qemu-boot option which boots
           the guest under qemu immediately.

       -o rhev
           This is the same as -o rhv.

       -o rhv
           Set the output method to rhv.

           The converted guest is written to a RHV Export Storage Domain.  The -os parameter must
           also be used to specify the location of the Export Storage Domain.  Note this does not
           actually import the guest into RHV.  You have to do that manually later using the UI.

           See "OUTPUT TO RHV" below.

       -o vdsm
           Set the output method to vdsm.

           This mode is similar to -o rhv, but the full path to the data domain must be given:
           /rhv/data-center/<data-center-uuid>/<data-domain-uuid>.  This mode is only used when
           virt-v2v runs under VDSM control.

       -oa sparse
       -oa preallocated
           Set the output file allocation mode.  The default is "sparse".
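            "Sparse" means the output file has a large apparent size but only consumes disk
            space for blocks actually written, whereas "preallocated" reserves all the space up
            front.  A quick demonstration of sparseness with a plain file:

```shell
# Demonstrate sparseness: apparent size vs. allocated blocks.
truncate -s 1G sparse.img
ls -l sparse.img    # apparent size: 1073741824 bytes
du -k sparse.img    # allocated kilobytes: near zero on most filesystems
```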

       -oc libvirtURI
           Specify a libvirt connection to use when writing the converted guest.  This is only
           used when -o libvirt.  See "OUTPUT TO LIBVIRT" below.

           Only local libvirt connections can be used.  Remote libvirt connections will not work.

       -of format
           When converting the guest, convert the disks to the given format.

           If not specified, then the input format is used.

       -on name
           Rename the guest when converting it.  If this option is not used then the output name
           is the same as the input name.

       -os storage
           The location of the storage for the converted guest.

           For -o libvirt, this is a libvirt directory pool (see "virsh pool-list") or pool UUID.

           For -o local and -o qemu, this is a directory name.  The directory must exist.

           For -o rhv, this can be an NFS path of the Export Storage Domain of the form
           "<host>:<path>", eg:


           The NFS export must be mountable and writable by the user and host running virt-v2v,
           since the virt-v2v program has to actually mount it when it runs.  So you probably
           have to run virt-v2v as "root".

           Or: You can mount the Export Storage Domain yourself, and point -os to the mountpoint.
           Note that virt-v2v will still need to write to this remote directory, so virt-v2v will
           still need to run as "root".

            You will get an error if virt-v2v is unable to mount/write to the Export Storage
            Domain.

       --password-file file
           Instead of asking for password(s) interactively, pass the password through a file.
           Note the file should contain the whole password, without any trailing newline, and for
           security the file should have mode 0600 so that others cannot read it.
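            For example, a file meeting both requirements can be created like this (the password
            shown is obviously an example):

```shell
# printf '%s' writes the password with no trailing newline, as required.
printf '%s' 'examplepass' > vcenter-password
chmod 0600 vcenter-password    # readable only by the owner
```

            Then pass --password-file vcenter-password on the virt-v2v command line.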

        --print-source
            Print information about the source guest and stop.  This option is useful when you
            are setting up network and bridge maps.  See "NETWORKS AND BRIDGES" below.

        --qemu-boot
            When using -o qemu only, this boots the guest immediately after virt-v2v finishes.

        -q
        --quiet
            This disables progress bars and other unnecessary output.

       --root ask
       --root single
       --root first
       --root /dev/sdX
       --root /dev/VG/LV
           Choose the root filesystem to be converted.

           In the case where the virtual machine is dual-boot or multi-boot, or where the VM has
           other filesystems that look like operating systems, this option can be used to select
           the root filesystem (a.k.a. "C:" drive or /) of the operating system that is to be
            converted.  The Windows Recovery Console, certain attached DVD drives, and bugs in
            libguestfs inspection heuristics, can make a guest look like a multi-boot operating
            system.

           The default in virt-v2v ≤ 0.7.1 was --root single, which causes virt-v2v to die if a
           multi-boot operating system is found.

           Since virt-v2v ≥ 0.7.2 the default is now --root ask: If the VM is found to be multi-
           boot, then virt-v2v will stop and list the possible root filesystems and ask the user
           which to use.  This requires that virt-v2v is run interactively.

           --root first means to choose the first root device in the case of a multi-boot
           operating system.  Since this is a heuristic, it may sometimes choose the wrong one.

           You can also name a specific root device, eg. --root /dev/sda2 would mean to use the
           second partition on the first hard drive.  If the named root device does not exist or
           was not detected as a root device, then virt-v2v will fail.

           Note that there is a bug in grub which prevents it from successfully booting a
           multiboot system if VirtIO is enabled.  Grub is only able to boot an operating system
           from the first VirtIO disk.  Specifically, /boot must be on the first VirtIO disk, and
           it cannot chainload an OS which is not in the first VirtIO disk.

        --vdsm-compat=0.10
        --vdsm-compat=1.1
            If -o vdsm and the output format is qcow2, then we add the qcow2 compat=0.10 option
            to the output file for compatibility with RHEL 6.

            If --vdsm-compat=1.1 is used then modern qcow2 (compat=1.1) files are generated
            instead.

           Currently --vdsm-compat=0.10 is the default, but this will change to --vdsm-compat=1.1
           in a future version of virt-v2v (when we can assume that everyone is using a modern
           version of qemu).

           Note this option only affects -o vdsm output.  All other output modes (including -o
           rhv) generate modern qcow2 compat=1.1 files, always.

           If this option is available, then "vdsm-compat-option" will appear in the
           --machine-readable output.

        --vdsm-image-uuid UUID
        --vdsm-vol-uuid UUID
        --vdsm-vm-uuid UUID
        --vdsm-ovf-output dir
           Normally the RHV output mode chooses random UUIDs for the target guest.  However VDSM
           needs to control the UUIDs and passes these parameters when virt-v2v runs under VDSM
           control.  The parameters control:

           ·   the image directory of each guest disk (--vdsm-image-uuid) (this option is passed
               once for each guest disk)

           ·   UUIDs for each guest disk (--vdsm-vol-uuid) (this option is passed once for each
               guest disk)

           ·   the OVF file name (--vdsm-vm-uuid).

           ·   the OVF output directory (default current directory) (--vdsm-ovf-output).

           The format of UUIDs is: "12345678-1234-1234-1234-123456789abc" (each hex digit can be
           "0-9" or "a-f"), conforming to OSF DCE 1.1.

           These options can only be used with -o vdsm.
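            A UUID in the format described above can be sanity-checked with a regular
            expression, for example:

```shell
# Check the 8-4-4-4-12 lowercase-hex layout described above.
uuid='12345678-1234-1234-1234-123456789abc'
printf '%s\n' "$uuid" |
  grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo valid
```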



        -x  Enable tracing of libguestfs API calls.


XEN PARAVIRTUALIZED GUESTS
       Older versions of virt-v2v could turn a Xen paravirtualized (PV) guest into a KVM guest by
       installing a new kernel.  This version of virt-v2v does not attempt to install any new
       kernels.  Instead it will give you an error if there are only Xen PV kernels available.

       Therefore before conversion you should check that a regular kernel is installed.  For some
       older Linux distributions, this means installing a kernel from the table below:

        RHEL 3         (Does not apply, as there was no Xen PV kernel)

        RHEL 4         i686 with > 10GB of RAM: install 'kernel-hugemem'
                       i686 SMP: install 'kernel-smp'
                       other i686: install 'kernel'
                       x86-64 SMP with > 8 CPUs: install 'kernel-largesmp'
                       x86-64 SMP: install 'kernel-smp'
                       other x86-64: install 'kernel'

        RHEL 5         i686: install 'kernel-PAE'
                       x86-64: install 'kernel'

        SLES 10        i586 with > 10GB of RAM: install 'kernel-bigsmp'
                       i586 SMP: install 'kernel-smp'
                       other i586: install 'kernel-default'
                       x86-64 SMP: install 'kernel-smp'
                       other x86-64: install 'kernel-default'

        SLES 11+       i586: install 'kernel-pae'
                       x86-64: install 'kernel-default'

        Windows        (Does not apply, as there is no Xen PV Windows kernel)


ENABLING VIRTIO
       "Virtio" is the name for a set of drivers which make disk (block device), network and
       other guest operations work much faster on KVM.

       Older versions of virt-v2v could install these drivers for certain Linux guests.  This
       version of virt-v2v does not attempt to install new Linux kernels or drivers, but will
       warn you if they are not installed already.

       In order to enable virtio, and hence improve performance of the guest after conversion,
       you should ensure that the minimum versions of packages are installed before conversion,
       by consulting the table below.

        RHEL 3         No virtio drivers are available

         RHEL 4         kernel >= 2.6.9-89.EL
                       lvm2 >= 2.02.42-5.el4
                       device-mapper >= 1.02.28-2.el4
                       selinux-policy-targeted >= 1.17.30-2.152.el4
                       policycoreutils >= 1.18.1-4.13

        RHEL 5         kernel >= 2.6.18-128.el5
                       lvm2 >= 2.02.40-6.el5
                       selinux-policy-targeted >= 2.4.6-203.el5

        RHEL 6+        All versions support virtio

        Fedora         All versions support virtio

        SLES 11+       All versions support virtio

        SLES 10        kernel >=

        OpenSUSE 11+   All versions support virtio

        OpenSUSE 10    kernel >=

        Debian 6+      All versions support virtio

        Ubuntu 10.04+  All versions support virtio

        Windows        Drivers are installed from the directory pointed to by
                       "VIRTIO_WIN" environment variable
                       (/usr/share/virtio-win by default) if present
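       For Windows guests, the driver location in the last row of the table can be overridden
       from the shell before running virt-v2v (the path shown is the documented default):

```shell
# Point virt-v2v at a virtio-win driver tree via the environment.
VIRTIO_WIN=/usr/share/virtio-win
export VIRTIO_WIN
```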


   SELinux relabel appears to hang forever
       In RHEL ≤ 4.7 there was a bug which caused SELinux relabelling to appear to hang forever
       at the following message:

        *** Warning -- SELinux relabel is required. ***
        *** Disabling security enforcement.         ***
        *** Relabeling could take a very long time, ***
        *** depending on file system size.          ***

       In reality it is waiting for you to press a key (but there is no visual indication of
       this).  You can either hit the "[Return]" key, at which point the guest will finish
       relabelling and reboot, or you can install policycoreutils ≥ 1.18.1-4.13 before starting
       the v2v conversion.  See also


   "warning: could not determine a way to update the configuration of Grub2"
       Currently, virt-v2v has no way to set the default kernel in Debian and Ubuntu guests using
       GRUB 2 as bootloader.  This means that virt-v2v will not change the default kernel used
       for booting, even in case it is not the best kernel available on the guest.  A recommended
       procedure is, before using virt-v2v, to check that the boot kernel is the best kernel
       available in the guest (for example by making sure the guest is up-to-date).


WINDOWS
    Windows ≥ 8 Fast Startup is incompatible with virt-v2v
       Guests which use the Windows ≥ 8 "Fast Startup" feature (or guests which are hibernated)
       cannot be converted with virt-v2v.  You will see an error:

        virt-v2v: error: unable to mount the disk image for writing. This has
        probably happened because Windows Hibernation or Fast Restart is being
        used in this guest. You have to disable this (in the guest) in order
        to use virt-v2v.

       As the message says, you need to boot the guest and disable the "Fast Startup" feature
       (Control Panel → Power Options → Choose what the power buttons do → Change settings that
       are currently unavailable → Turn on fast startup), and shut down the guest, and then you
       will be able to convert it.

       For more information, see: "WINDOWS HIBERNATION AND WINDOWS 8 FAST STARTUP" in guestfs(3).

   Boot failure: 0x0000007B
       This boot failure is caused by Windows being unable to find or load the right disk driver
       (eg. viostor.sys).  If you experience this error, here are some things to check:

       ·   First ensure that the guest boots on the source hypervisor before conversion.

       ·   Check you have the Windows virtio drivers available in /usr/share/virtio-win, and that
           virt-v2v did not print any warning about not being able to install virtio drivers.

           On Red Hat Enterprise Linux 7, you will need to install the signed drivers available
           in the "virtio-win" package.  If you do not have access to the signed drivers, then
           you will probably need to disable driver signing in the boot menus.

       ·   Check that you are presenting a virtio-blk interface (not virtio-scsi and not ide) to
           the guest.  On the qemu/KVM command line you should see something similar to this:

            ... -drive file=windows-sda,if=virtio ...

           In libvirt XML, you should see:

            <target dev='vda' bus='virtio'/>

       ·   Check that Windows Group Policy does not prevent the driver from being installed or
           used.  Try deleting Windows Group Policy before conversion.

       ·   Check there is no anti-virus or other software which implements Group Policy-like
           prohibitions on installing or using new drivers.

       ·   Enable boot debugging and check the viostor.sys driver is being loaded.

   OpenStack and Windows reactivation
       OpenStack does not offer stable device / PCI addresses to guests.  Every time it creates
       or starts a guest, it regenerates the libvirt XML for that guest from scratch.  The
       libvirt XML will have no <address> fields.  Libvirt will then assign addresses to devices,
       in a predictable manner.  Addresses may change if any of the following are true:

       ·   A new disk or network device has been added or removed from the guest.

       ·   The version of OpenStack or (possibly) libvirt has changed.

       Because Windows does not like "hardware" changes of this kind, it may trigger Windows
       reactivation.

       This can also prevent booting with a 7B error [see previous section] if the guest has
       group policy containing "Device Installation Restrictions".


UEFI
       VMware allows you to present UEFI firmware to guests (instead of the ordinary PC BIOS).
       Virt-v2v can convert these guests, but requires that UEFI is supported by the target
       hypervisor.

       Currently KVM supports OVMF, an open source UEFI firmware, and can run these guests.

       Since OVMF support was only recently added to KVM (in 2014/2015), not all target
       environments support UEFI guests yet:

       UEFI on libvirt, qemu
            Supported.  Virt-v2v will generate the correct libvirt XML (metadata) automatically,
            but note that the same version of OVMF must be installed on the conversion host as is
            installed on the target hypervisor, else you will have to adjust paths in the
            generated libvirt XML.

           On RHEL ≥ 7.3, only qemu-kvm-rhev (not qemu-kvm) is supported.

       UEFI on OpenStack
           Not supported.

       UEFI on RHV
           Not supported.


NETWORKS AND BRIDGES
       Guests are usually connected to one or more networks, and when converted to the target
       hypervisor you usually want to reconnect those networks at the destination.  The options
       --network and --bridge allow you to do that.

       If you are unsure of what networks and bridges are in use on the source hypervisor, then
       you can examine the source metadata (libvirt XML, vCenter information, etc.).  Or you can
       run virt-v2v with the --print-source option which causes virt-v2v to print out the
       information it has about the guest on the source and then exit.

       In the --print-source output you will see a section showing the guest's Network Interface
       Cards (NICs):

        $ virt-v2v [-i ...] --print-source name
            Network "default" mac: 52:54:00:d0:cf:0e

       This is typical of a libvirt guest: It has a single network interface connected to a
       network called "default".

       To map a specific network to a target network, for example "default" on the source to
       "ovirtmgmt" on the target, use:

        virt-v2v [...] --network default:ovirtmgmt

       To map every network to a target network, use:

        virt-v2v [...] --network ovirtmgmt

       Bridges are handled in the same way, but you have to use the --bridge option instead.
       For example:

        $ virt-v2v [-i ...] --print-source name
            Bridge "br0"

        $ virt-v2v [...] --bridge br0:targetbr


INPUT FROM VMWARE VCENTER SERVER
       Virt-v2v is able to import guests from VMware vCenter Server.

       vCenter ≥ 5.0 is required.  If you don't have vCenter, using OVA is recommended instead
       (see "INPUT FROM VMWARE OVA" below), or if that is not possible then see "INPUT FROM

       Virt-v2v uses libvirt for access to vCenter, and therefore the input mode should be -i
       libvirt.  As this is the default, you don't need to specify it on the command line.

       For Windows guests, you should remove VMware tools before conversion.  Although this is
       not strictly necessary, and the guest will still be able to run, if you don't do this then
       the converted guest will complain on every boot.  The tools cannot be removed after
       conversion because the uninstaller checks if it is running on VMware and refuses to start
       (which is also the reason that virt-v2v cannot remove them).

       This is not necessary for Linux guests, as virt-v2v is able to remove VMware tools.

       The libvirt URI of a vCenter server looks something like this:

        vpx://user@server/Datacenter/esxi

        user@
            is the (optional, but recommended) user to connect as.

           If the username contains a backslash (eg. "DOMAIN\USER") then you will need to URI-
           escape that character using %5c: "DOMAIN%5cUSER" (5c is the hexadecimal ASCII code for
           backslash.)  Other punctuation may also have to be escaped.
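            As a quick sketch, the escaped form can be produced with sed (the account name is an
            example):

```shell
# Replace each backslash with its URI escape %5c.
printf '%s\n' 'DOMAIN\USER' | sed 's/\\/%5c/g'    # prints DOMAIN%5cUSER
```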

        server
            is the vCenter Server (not hypervisor).

        Datacenter
            is the name of the datacenter.

           If the name contains a space, replace it with the URI-escape code %20.

        esxi
            is the name of the ESXi hypervisor running the guest.

       If the VMware deployment is using folders, then these may need to be added to the URI,
       eg:

        vpx://root@vcenter.example.com/Folder/Datacenter/esxi?no_verify=1

       For full details of libvirt URIs, see the libvirt documentation.

       Typical errors from libvirt / virsh when the URI is wrong include:

       ·   Could not find datacenter specified in [...]

       ·   Could not find compute resource specified in [...]

       ·   Path [...] does not specify a compute resource

       ·   Path [...] does not specify a host system

       ·   Could not find host system specified in [...]

       Use the virsh(1) command to list the guests on the vCenter Server like this:

        $ virsh -c 'vpx://' list --all
        Enter root's password for ***

         Id    Name                           State
         -     Fedora 20                      shut off
         -     Windows 2003                   shut off

       If you get an error "Peer certificate cannot be authenticated with given CA certificates"
       or similar, then you can either import the vCenter host's certificate, or bypass signature
       verification by adding the "?no_verify=1" flag:

        $ virsh -c 'vpx://' list --all

       You should also try dumping the metadata from any guest on your server, like this:

        $ virsh -c 'vpx://' dumpxml "Windows 2003"
        <domain type='vmware'>
          <name>Windows 2003</name>

       If the above commands do not work, then virt-v2v is not going to work either.  Fix your
       libvirt configuration and/or your VMware vCenter Server before continuing.

       To import a particular guest from vCenter Server, do:

        $ virt-v2v -ic 'vpx://' \
          "Windows 2003" \
          -o local -os /var/tmp

       where "Windows 2003" is the name of the guest (which must be shut down).

       Note that you may be asked for the vCenter password twice.  This happens once because
       libvirt needs it, and a second time because virt-v2v itself connects directly to the
       server.  Use --password-file to supply a password via a file.
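One way to prepare a file for --password-file is sketched below (file location and password here are placeholders):

```shell
# Create a password file readable only by the current user, so the
# password is neither prompted for nor visible in `ps` output.
pwfile=$(mktemp)
printf '%s' 'vcenter-password' > "$pwfile"   # placeholder password
chmod 0600 "$pwfile"
# Then pass it to virt-v2v, e.g.:
#   virt-v2v -ic 'vpx://...' "Windows 2003" --password-file "$pwfile" \
#     -o local -os /var/tmp
```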

       In this case the output flags are set to write the converted guest to a temporary
       directory as this is just an example, but you can also write to libvirt or any other
       supported target.

       Instead of using the vCenter Administrator role, you can create a custom non-administrator
       role to perform the conversion.  You will however need to give it a minimum set of
       permissions as follows:

       1.  Create a custom role in vCenter.

       2.  Enable (check) the following objects:

             Datastore:
                 - Browse datastore
                 - Low level file operations

             Sessions:
                 - Validate session

             Virtual Machine:
                 - Allow disk access
                 - Allow read-only disk access
                 - Guest Operating system management by VIX API

    vCenter: Ports

       If there is a firewall between the virt-v2v conversion server and the vCenter server, then
       you will need to open port 443 (https) and port 5480.

       Port 443 is used to copy the guest disk image(s).  Port 5480 is used to query vCenter for
       guest metadata.

       These port numbers are only the defaults.  It is possible to reconfigure vCenter to use
       other port numbers.  In that case you would need to specify those ports in the "vpx://"
       URI.  See "VCENTER: URI" above.

       These ports only apply to virt-v2v conversions.  You may have to open other ports for
       other vCenter functionality, for example the web user interface.  VMware documents the
       required ports for vCenter in their online documentation.

        ┌────────────┐  port 443  ┌────────────┐       ┌────────────┐
        │ virt-v2v   │───────────▶│ vCenter    │──────▶│ ESXi       │
        │ conversion │            │ server     │       │ hypervisor │
        │ server     │───────────▶│            │       │  ┌─────┐   │
        └────────────┘  port 5480 └────────────┘       │  │guest│   │
                                                       │  └─────┘   │
                                                       └────────────┘

       (In the diagram above the arrows show the direction in which the TCP connection is
       initiated, not necessarily the direction of data transfer.)

       Virt-v2v itself does not connect directly to the ESXi hypervisor containing the guest.
       However vCenter connects to the hypervisor and forwards the information, so if you have a
       firewall between vCenter and its hypervisors you may need to open additional ports
       (consult VMware documentation).

       The proxy environment variables ("https_proxy", "all_proxy", "no_proxy", "HTTPS_PROXY",
       "ALL_PROXY" and "NO_PROXY") are ignored when doing vCenter conversions.

       You may see this error:

         CURL: Error opening file: SSL: no alternative certificate subject
         name matches target host name

       (You may need to enable debugging with ‘virt-v2v -v -x’ to see this message).

       This can be caused by using an IP address instead of the fully-qualified DNS domain name
       of the vCenter server in the "vpx://..." URI.

       Another certificate problem can be caused by the vCenter server having a mismatching FQDN
       and IP address, for example if the server acquired a new IP address from DHCP.  To fix
       this you need to change your DHCP server or network configuration so that the vCenter
       server always gets a stable IP address.  After that log in to the vCenter server’s admin
       console at "https://vcenter:5480/".  Under the "Admin" tab, select "Certificate
       regeneration enabled" and then reboot the vCenter server.


       Virt-v2v is able to import guests from VMware's OVA (Open Virtualization Appliance) files.
       Only OVAs exported from VMware vSphere will work.

       For Windows guests, you should remove VMware tools before conversion.  Although this is
       not strictly necessary, and the guest will still be able to run, if you don't do this then
       the converted guest will complain on every boot.  The tools cannot be removed after
       conversion because the uninstaller checks if it is running on VMware and refuses to start
       (which is also the reason that virt-v2v cannot remove them).

       This is not necessary for Linux guests, as virt-v2v is able to remove VMware tools.

       To create an OVA in vSphere, use the "Export OVF Template" option (from the VM context
       menu, or from the File menu).  Either "Folder of files" (OVF) or "Single file" (OVA) will
       work, but OVA is probably easier to deal with.  OVA files are really just uncompressed tar
       files, so you can use commands like "tar tf VM.ova" to view their contents.
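Since an OVA is plain uncompressed tar, standard tar commands work on it.  The sketch below builds a stand-in VM.ova just to demonstrate (a real appliance contains an .ovf descriptor, disk images and a manifest):

```shell
# Build a stand-in OVA (a plain tar file) and list its members.
workdir=$(mktemp -d)
touch "$workdir/VM.ovf" "$workdir/VM-disk1.vmdk"
tar -cf "$workdir/VM.ova" -C "$workdir" VM.ovf VM-disk1.vmdk
tar -tf "$workdir/VM.ova"     # lists VM.ovf and VM-disk1.vmdk
```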

    Create OVA with ovftool

       You can also use VMware's proprietary "ovftool":

        ovftool --noSSLVerify \
          vi:// \

       To connect to vCenter:

        ovftool  --noSSLVerify \
          vi:// \

       For Active Directory-aware authentication, you have to express the "@" character in the
       username using its ASCII hex code (%40).


       To import an OVA file called VM.ova, do:

        $ virt-v2v -i ova VM.ova -o local -os /var/tmp

       If you exported the guest as a "Folder of files", or if you unpacked the OVA tarball
       yourself, then you can point virt-v2v at the directory containing the files:

        $ virt-v2v -i ova /path/to/files -o local -os /var/tmp


       Virt-v2v cannot access an ESXi hypervisor directly.  You should use the OVA method above
       (see "INPUT FROM VMWARE OVA") if possible, as it is much faster and requires much less
       disk space than the method described in this section.

       You can use the virt-v2v-copy-to-local(1) tool to copy the guest off the hypervisor into a
       local file, and then convert it.

       For Windows guests, you should remove VMware tools before conversion.  Although this is
       not strictly necessary, and the guest will still be able to run, if you don't do this then
       the converted guest will complain on every boot.  The tools cannot be removed after
       conversion because the uninstaller checks if it is running on VMware and refuses to start
       (which is also the reason that virt-v2v cannot remove them).

       This is not necessary for Linux guests, as virt-v2v is able to remove VMware tools.

   ESXi: URI
       The libvirt URI for VMware ESXi hypervisors will look something like this:

        esx://root@esxi.example.com?no_verify=1

       The "?no_verify=1" parameter disables TLS certificate checking.

       Use the virsh(1) command to test the URI and list the remote guests available:

        $ virsh -c esx:// list --all
        Enter root's password for ***
         Id    Name                           State
         -     guest                          shut off

       Using the libvirt URI as the -ic option, copy one of the guests to the local machine:

        $ virt-v2v-copy-to-local -ic esx:// guest

       This creates guest.xml, guest-disk1, ...

       Perform the conversion of the guest using virt-v2v:

        $ virt-v2v -i libvirtxml guest.xml -o local -os /var/tmp

       Remove the guest.xml and guest-disk* files.


       Virt-v2v is able to import Xen guests from RHEL 5 Xen or SLES and openSUSE Xen hosts.

       Virt-v2v uses libvirt for access to the remote Xen host, and therefore the input mode
       should be -i libvirt.  As this is the default, you don't need to specify it on the
       command line.

       Currently you must enable passwordless SSH access to the remote Xen host from the virt-v2v
       conversion server.

       You must also use ssh-agent, and add your ssh public key to /root/.ssh/authorized_keys (on
       the Xen host).

       After doing this, you should check that passwordless access works from the virt-v2v server
       to the Xen host.  For example:

        $ ssh
        [ logs straight into the shell, no password is requested ]

       Note that password-interactive and Kerberos access are not supported.  You have to set up
       ssh access using ssh-agent and authorized_keys.

       With some modern ssh implementations, legacy crypto policies required to interoperate with
       RHEL 5 sshd are disabled.  To enable them you may need to run this command on the
       conversion server (ie. ssh client), but read update-crypto-policies(8) first:

        # update-crypto-policies --set LEGACY

       Use the virsh(1) command to list the guests on the remote Xen host:

        $ virsh -c xen+ssh:// list --all
         Id    Name                           State
         0     Domain-0                       running
         -     rhel49-x86_64-pv               shut off

       You should also try dumping the metadata from any guest on your server, like this:

        $ virsh -c xen+ssh:// dumpxml rhel49-x86_64-pv
        <domain type='xen'>

       If the above commands do not work, then virt-v2v is not going to work either.  Fix your
       libvirt configuration or the remote server before continuing.

       If the guest disks are located on a host block device, then the conversion will fail.  See
       "XEN OR SSH CONVERSIONS FROM BLOCK DEVICES" below for a workaround.

       To import a particular guest from a Xen server, do:

        $ LIBGUESTFS_BACKEND=direct \
              virt-v2v -ic 'xen+ssh://' \
                  rhel49-x86_64-pv \
                  -o local -os /var/tmp

       where "rhel49-x86_64-pv" is the name of the guest (which must be shut down).

       In this case the output flags are set to write the converted guest to a temporary
       directory as this is just an example, but you can also write to libvirt or any other
       supported target.

       Setting the backend to "direct" is a temporary workaround until libvirt bug 1140166 is
       fixed.

       Currently virt-v2v cannot directly access a Xen guest (or any guest located remotely over
       ssh) if that guest's disks are located on host block devices.

       To tell if a Xen guest uses host block devices, look at the guest XML.  You will see:

         <disk type='block' device='disk'>
           <source dev='/dev/VG/guest'/>

       where "type='block'", "source dev=" and "/dev/..." are all indications that the disk is
       located on a host block device.
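A quick way to scan dumped guest XML for these markers (the XML below is a simulated fragment; normally it comes from "virsh dumpxml guest"):

```shell
# Flag guests whose disks live on host block devices.  The XML here is a
# simulated fragment standing in for real `virsh dumpxml` output.
xml=$(mktemp)
cat > "$xml" <<'EOF'
<disk type='block' device='disk'>
  <source dev='/dev/VG/guest'/>
</disk>
EOF
if grep -q "disk type='block'" "$xml"; then
    echo "guest uses host block devices"
fi
```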

       This happens because the qemu ssh block driver that we use to access remote disks uses
       the ssh sftp protocol, and this protocol cannot correctly detect the size of host block
       devices.

       The workaround is to copy the guest over to the conversion server, using the separate
       virt-v2v-copy-to-local(1) tool, followed by running virt-v2v.  You will need sufficient
       space on the conversion server to store a full copy of the guest.

        virt-v2v-copy-to-local -ic xen+ssh:// guest
        virt-v2v -i libvirtxml guest.xml -o local -os /var/tmp
        rm guest.xml guest-disk*


       The -o libvirt option lets you upload the converted guest to a libvirt-managed host.
       There are several limitations:

       ·   You can only use a local libvirt connection [see below for how to workaround this].

       ·   The -os pool option must specify a directory pool, not anything more exotic such as
           iSCSI [but see below].

       ·   You can only upload to a KVM hypervisor.

       To output to a remote libvirt instance and/or a non-directory storage pool you have to use
       the following workaround:

       1.  Use virt-v2v in -o local mode to convert the guest disks and metadata into a local
           temporary directory:

            virt-v2v [...] -o local -os /var/tmp

           This creates two (or more) files in /var/tmp called:

            /var/tmp/NAME.xml     # the libvirt XML (metadata)
            /var/tmp/NAME-sda     # the guest's first disk

           (for "NAME" substitute the guest's name).

       2.  Upload the converted disk(s) into the storage pool called "POOL":

            size=$(stat -c%s /var/tmp/NAME-sda)
            virsh vol-create-as POOL NAME-sda $size --format raw
            virsh vol-upload --pool POOL NAME-sda /var/tmp/NAME-sda

       3.  Edit /var/tmp/NAME.xml to change /var/tmp/NAME-sda to the pool name.  In other words,
           locate the following bit of XML:

            <disk type='file' device='disk'>
              <driver name='qemu' type='raw' cache='none' />
              <source file='/var/tmp/NAME-sda' />
              <target dev='hda' bus='ide' />

           and change two things: The "type='file'" attribute must be changed to "type='volume'",
           and the "<source>" element must be changed to include "pool" and "volume" attributes:

            <disk type='volume' device='disk'>
              <source pool='POOL' volume='NAME-sda' />

       4.  Define the final guest in libvirt:

            virsh define /var/tmp/NAME.xml
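If you prefer not to edit the XML by hand, the step-3 change can be scripted.  The sketch below operates on a simulated fragment (NAME and POOL are the same placeholders as above; review the result before running "virsh define"):

```shell
# Rewrite type='file' + <source file=...> into the pool/volume form.
xml=$(mktemp)
cat > "$xml" <<'EOF'
<disk type='file' device='disk'>
  <source file='/var/tmp/NAME-sda'/>
</disk>
EOF
sed -i -e "s/type='file'/type='volume'/" \
       -e "s|<source file='/var/tmp/NAME-sda'/>|<source pool='POOL' volume='NAME-sda'/>|" \
       "$xml"
cat "$xml"
```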


       This section only applies to the -o rhv output mode.  If you use virt-v2v from the RHV-M
       user interface, then behind the scenes the import is managed by VDSM using the -o vdsm
       output mode (which end users should not try to use directly).

       You have to specify -o rhv and an -os option that points to the RHV-M Export Storage
       Domain.  You can either specify the NFS server and mountpoint, eg.
       "-os rhv-storage:/rhv/export", or you can mount that first and point to the directory
       where it is mounted, eg. "-os /tmp/mnt".  Be careful not to point to the Data Storage
       Domain by accident as that will not work.

       On successful completion virt-v2v will have written the new guest to the Export Storage
       Domain, but it will not yet be ready to run.  It must be imported into RHV using the UI
       before it can be used.

       In RHV ≥ 2.2 this is done from the Storage tab.  Select the export domain the guest was
       written to.  A pane will appear underneath the storage domain list displaying several
       tabs, one of which is "VM Import".  The converted guest will be listed here.  Select the
       appropriate guest and click "Import".  See the RHV documentation for additional details.

       If you export several guests, then you can import them all at the same time through the
       UI.

       If you do not have an oVirt or RHV instance to test against, then you can test conversions
       by creating a directory structure which looks enough like a RHV-M Export Storage Domain to
       trick virt-v2v:

        uuid=`uuidgen`
        mkdir /tmp/rhv
        mkdir /tmp/rhv/$uuid
        mkdir /tmp/rhv/$uuid/images
        mkdir /tmp/rhv/$uuid/master
        mkdir /tmp/rhv/$uuid/master/vms
        touch /tmp/rhv/$uuid/dom_md
        virt-v2v [...] -o rhv -os /tmp/rhv


       To output to OpenStack Glance, use the -o glance option.

       This runs the glance(1) CLI program which must be installed on the virt-v2v conversion
       host.  For authentication to work, you will need to set "OS_*" environment variables.  In
       most cases you can do this by sourcing a file called something like keystonerc_admin.

       Virt-v2v adds metadata for the guest to Glance, describing such things as the guest
       operating system and what drivers it requires.  The command "glance image-show" will
       display the metadata as "Property" fields such as "os_type" and "hw_disk_bus".

   Glance and sparseness
       Glance image upload doesn't appear to correctly handle sparseness.  For this reason,
       using qcow2 will be faster and use less space on the Glance server.  Use the virt-v2v
       -of qcow2 option to select that format.

   Glance and multiple disks
       If the guest has a single disk, then the name of the disk in Glance will be the name of
       the guest.  You can control this using the -on option.

       Glance doesn't have a concept of associating multiple disks with a single guest, and Nova
       doesn't allow you to boot a guest from multiple Glance disks either.  If the guest has
       multiple disks, then the first (assumed to be the system disk) will have the name of the
       guest, and the second and subsequent data disks will be called "guestname-disk2",
       "guestname-disk3" etc.  It may be best to leave the system disk in Glance, and import the
       data disks to Cinder (see next section).

   Importing disks into Cinder
       Since most virt-v2v guests are "pets", Glance is perhaps not the best place to store
       them.  There is no way for virt-v2v to upload directly to Cinder.  There are two ways to
       upload to Cinder:

       1.  Import the image to Glance first (ie. -o glance) and then copy it to Cinder:

            cinder create --image-id <GLANCE-IMAGE-UUID> <SIZE>

       2.  Create (through some other means) a new volume / LUN in your Cinder backing store.
           Migrate the guest to this volume (using -o local).  Then ask Cinder to take over
           management of the volume using:

            cinder manage <VOLUMEREF>


       The most important resource for virt-v2v appears to be network bandwidth.  Virt-v2v should
       be able to copy guest data at gigabit ethernet speeds or greater.

       Ensure that the network connections between servers (conversion server, NFS server,
       vCenter, Xen) are as fast and as low latency as possible.

   Disk space
       Virt-v2v places potentially large temporary files in $TMPDIR (which is /var/tmp if you
       don't set it).  Using tmpfs is a bad idea.
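To redirect those temporary files to a filesystem with plenty of space, set TMPDIR before running virt-v2v (the directory name below is illustrative):

```shell
# Send virt-v2v's large temporary overlays to a roomy filesystem.
mkdir -p /var/tmp/v2v-scratch
export TMPDIR=/var/tmp/v2v-scratch
# Anything that respects TMPDIR now lands there; mktemp demonstrates
# that the variable is honoured:
mktemp -u     # prints a path under /var/tmp/v2v-scratch
```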

       For each guest disk, an overlay is stored temporarily.  This stores the changes made
       during conversion, and is used as a cache.  The overlays are not particularly large - tens
       or low hundreds of megabytes per disk is typical.  In addition to the overlay(s), input
       and output methods may use disk space, as outlined in the table below.

       -i ova
           This temporarily places a full copy of the uncompressed source disks in $TMPDIR.

       -o glance
           This temporarily places a full copy of the output disks in $TMPDIR.

       -o local
       -o qemu
            You must ensure there is sufficient space in the output directory for the converted
            guest.

       -o null
           This temporarily places a full copy of the output disks in $TMPDIR.

       See also "Minimum free space check in the host" below.

   VMware vCenter resources
       Copying from VMware vCenter is currently quite slow, but we believe this to be an issue
       with VMware.  Ensuring the VMware ESXi hypervisor and vCenter are running on fast hardware
       with plenty of memory should alleviate this.

   Compute power and RAM
       Virt-v2v is not especially compute or RAM intensive.  If you are running many parallel
       conversions, then you may consider allocating one CPU core and 2 GB of RAM per running
       instance of virt-v2v.

       Virt-v2v can be run in a virtual machine.

       Virt-v2v attempts to optimize the speed of conversion by ignoring guest filesystem data
       which is not used.  This would include unused filesystem blocks, blocks containing zeroes,
       and deleted files.

       To do this, virt-v2v issues a non-destructive fstrim(8) operation.  As this happens to an
       overlay placed over the guest data, it does not affect the source in any way.

       If this fstrim operation fails, you will see a warning, but virt-v2v will continue anyway.
       It may run more slowly (in some cases much more slowly), because it is copying the unused
       parts of the disk.

       Unfortunately support for fstrim is not universal, and it also depends on specific details
       of the filesystem, partition alignment, and backing storage.  As an example, NTFS
       filesystems cannot be fstrimmed if they occupy a partition which is not aligned to the
       underlying storage.  That was the default on Windows before Vista.  As another example,
       VFAT filesystems (used by UEFI guests) cannot be trimmed at all.

       fstrim support in the Linux kernel is improving gradually, so over time some of these
       restrictions will be lifted and virt-v2v will work faster.


   Guest network configuration
       Virt-v2v cannot currently reconfigure a guest's network configuration.  If the converted
       guest is not connected to the same subnet as the source, its network configuration may
       have to be updated.  See also virt-customize(1).

   Converting a Windows guest
       When converting a Windows guest, the conversion process is split into two stages:

       1.  Offline conversion.

       2.  First boot.

       The guest will be bootable after the offline conversion stage, but will not yet have all
       necessary drivers installed to work correctly.  These will be installed automatically the
       first time the guest boots.

       N.B. Take care not to interrupt the automatic driver installation process when logging in
       to the guest for the first time, as this may prevent the guest from subsequently booting
       correctly.


   Free space in the guest
       Virt-v2v checks there is sufficient free space in the guest filesystem to perform the
       conversion.  Currently it checks:

       Linux root filesystem or Windows "C:" drive
           Minimum free space: 20 MB

       Linux /boot
           Minimum free space: 50 MB

            This is because we need to build a new initramfs for some Enterprise Linux
            conversions.

       Any other mountable filesystem
           Minimum free space: 10 MB

   Minimum free space check in the host
       You must have sufficient free space in the host directory used to store temporary overlays
       (except in --in-place mode).  To find out which directory this is, use:

        $ df -h "`guestfish get-cachedir`"
        Filesystem        Size  Used Avail Use% Mounted on
        /dev/mapper/root   50G   40G  6.8G  86% /

       and look under the "Avail" column.  Virt-v2v will refuse to do the conversion at all
       unless at least 1GB is available there.
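The same check can be done ahead of time in a script; this sketch substitutes /var/tmp for the output of "guestfish get-cachedir":

```shell
# Verify at least 1 GB (1048576 KiB) is available in the overlay directory.
cachedir=/var/tmp                       # stand-in for `guestfish get-cachedir`
avail_kb=$(df -Pk "$cachedir" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge 1048576 ]; then
    echo "enough free space for virt-v2v overlays"
else
    echo "warning: less than 1 GB free in $cachedir"
fi
```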

       See also "RESOURCE REQUIREMENTS" above.


       Nothing in virt-v2v inherently needs root access, and it will run just fine as a non-root
       user.  However, certain external features may require either root or a special user:

       Mounting the Export Storage Domain
           When using -o rhv -os server:/esd virt-v2v has to have sufficient privileges to NFS
           mount the Export Storage Domain from "server".

           You can avoid needing root here by mounting it yourself before running virt-v2v, and
           passing -os /mountpoint instead, but first of all read the next section ...

       Writing to the Export Storage Domain as 36:36
           RHV-M cannot read files and directories from the Export Storage Domain unless they
           have UID:GID 36:36.  You will see VM import problems if the UID:GID is not correct.

            When you run virt-v2v -o rhv as root, virt-v2v attempts to create files and
            directories with the correct ownership.  If you run virt-v2v as non-root, it will
            probably still work, but you will need to manually change ownership after virt-v2v
            has finished.

       Writing to libvirt
           When using -o libvirt, you may need to run virt-v2v as root so that it can write to
           the libvirt system instance (ie. "qemu:///system")  and to the default location for
           disk images (usually /var/lib/libvirt/images).

            You can avoid this by setting up libvirt connection authentication (see the libvirt
            documentation).  Alternatively, use -oc qemu:///session, which will write to your
            per-user libvirt instance.

       Writing to Glance
           This does not need root (in fact it probably won't work), but may require either a
           special user and/or for you to source a script that sets authentication environment
           variables.  Consult the Glance documentation.


       When you export to the RHV-M Export Storage Domain, and then import that guest through the
       RHV-M UI, you may encounter an import failure.  Diagnosing these failures is infuriatingly
       difficult as the UI generally hides the true reason for the failure.

       There are several log files of interest:

           In oVirt ≥ 4.1.0, VDSM preserves the virt-v2v log file for 30 days in this directory.

           This directory is found on the host which performed the conversion.  The host can be
           selected in the import dialog, or can be found under the "Events" tab in oVirt

           As above, this file is present on the host which performed the conversion.  It
           contains detailed error messages from low-level operations executed by VDSM, and is
           useful if the error was not caused by virt-v2v, but by VDSM.

           This log file is stored on the RHV-M server.  It contains more detail for any errors
           caused by the oVirt GUI.


       When using the -i libvirtxml option, you have to supply some libvirt XML.  Writing this
       from scratch is hard, so the template below is helpful.

       Note this should only be used for testing and/or where you know what you're doing! If you
       have libvirt metadata for the guest, always use that instead.

         <domain type='kvm'>
           <name> NAME </name>
           <memory>1048576</memory>
           <vcpu>2</vcpu>
           <os>
             <type>hvm</type>
             <boot dev='hd'/>
           </os>
           <devices>
             <disk type='file' device='disk'>
               <driver name='qemu' type='raw'/>
               <source file='/path/to/disk/image'/>
               <target dev='hda' bus='ide'/>
             </disk>
             <interface type='network'>
               <mac address='52:54:00:01:02:03'/>
               <source network='default'/>
               <model type='rtl8139'/>
             </interface>
           </devices>
         </domain>

       It is also possible to use virt-v2v in scenarios where a foreign VM has already been
       imported into a KVM-based hypervisor, but still needs adjustments in the guest to make it
       run in the new virtual hardware.

       In that case it is assumed that a third-party tool has created the target VM in the
       supported KVM-based hypervisor based on the source VM configuration and contents, but
       using virtual devices more appropriate for KVM (e.g. virtio storage and network, etc.).

       Then, to make the guest OS boot and run in the changed environment, one can use:

        virt-v2v -ic qemu:///system converted_vm --in-place

       Virt-v2v will analyze the configuration of "converted_vm" in the "qemu:///system" libvirt
       instance, and apply various fixups to the guest OS configuration to make it match the VM
       configuration.  This may include installing virtio drivers, configuring the bootloader,
       the mountpoints, the network interfaces, and so on.

       Should an error occur during the operation, virt-v2v exits with an error code leaving the
       VM in an undefined state.


       The --machine-readable option can be used to make the output more machine friendly, which
       is useful when calling virt-v2v from other programs, GUIs etc.

       There are two ways to use this option.

       Firstly use the option on its own to query the capabilities of the virt-v2v binary.
       Typical output looks like this:

        $ virt-v2v --machine-readable

       A list of features is printed, one per line, and the program exits with status 0.

       The "input:" and "output:" features refer to -i and -o (input and output mode) options
       supported by this binary.  The "convert:" features refer to guest types that this binary
       knows how to convert.

       Secondly use the option in conjunction with other options to make the regular program
       output more machine friendly.

       At the moment this means:

        1.  Progress bar messages can be parsed from stdout by looking for this regular
            expression:

             ^[0-9]+/[0-9]+$

       2.  The calling program should treat messages sent to stdout (except for progress bar
           messages) as status messages.  They can be logged and/or displayed to the user.

       3.  The calling program should treat messages sent to stderr as error messages.  In
           addition, virt-v2v exits with a non-zero status code if there was a fatal error.
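For example, the progress messages (lines of the form "N/M") can be pulled out of a saved transcript like this (the transcript below is simulated):

```shell
# Extract progress-bar lines from machine-readable virt-v2v output.
log=$(mktemp)
printf '%s\n' 'starting conversion' '10/100' 'copying disk 1/1' '90/100' > "$log"
grep -E '^[0-9]+/[0-9]+$' "$log"     # prints 10/100 and 90/100
```

Note that the anchors matter: the embedded "1/1" in "copying disk 1/1" is not matched, because the pattern must span the whole line.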

       Virt-v2v ≤ 0.9.1 did not support the --machine-readable option at all.  The option was
       added when virt-v2v was rewritten in 2014.



       /usr/share/virtio-win
            If this directory is present, then virtio drivers for Windows guests will be found
            from this directory and installed in the guest during conversion.


       "TMPDIR"
            Location of the temporary directory used for the potentially large temporary
            overlay files.

           See the "Disk space" section above.

       "VIRT_TOOLS_DATA_DIR"
            This can point to the directory containing data files used for Windows conversion.

           Normally you do not need to set this.  If not set, a compiled-in default will be used
           (something like /usr/share/virt-tools).

           This directory may contain the following files:

            rhsrvany.exe
                (Required when doing conversions of Windows guests)

               This is the RHSrvAny Windows binary, used to install a "firstboot" script in the
               guest during conversion of Windows guests.

               This is a Windows binary shipped with SUSE VMDP, used to install a "firstboot"
               script in Windows guests.  It is required if you intend to use the --firstboot or
               --firstboot-command options with Windows guests.


            rhev-apt.exe
                The RHV Application Provisioning Tool (RHEV APT).  If this file is present, then
               RHEV APT will be installed in the Windows guest during conversion.  This tool is a
               guest agent which ensures that the virtio drivers remain up to date when the guest
               is running on Red Hat Virtualization (RHV).

                This file comes from Red Hat Virtualization (RHV), and is not distributed with
                libguestfs.

       "VIRTIO_WIN"
            This is where VirtIO drivers for Windows are searched for (/usr/share/virtio-win if
            unset).  It can be a directory or point to virtio-win.iso (a CD ROM image containing
            the drivers).

           See "ENABLING VIRTIO".

       For other environment variables, see "ENVIRONMENT VARIABLES" in guestfs(3).


           There are some special cases where virt-v2v cannot directly access the remote
           hypervisor.  In that case you have to use virt-v2v-copy-to-local(1) to make a local
            copy of the guest first, followed by running "virt-v2v -i libvirtxml" to perform
            the conversion.

           Variously called "engine-image-uploader", "ovirt-image-uploader" or
           "rhevm-image-uploader", this tool allows you to copy a guest from one oVirt or RHV
           Export Storage Domain to another.  It only permits importing a guest that was
            previously exported from another oVirt/RHV instance.

            This script can be used to import guests that already run on KVM to oVirt or RHV.
            For more information, see the blog posting about it by the author of virt-v2v.



       virt-p2v(1), virt-customize(1), virt-df(1), virt-filesystems(1), virt-sparsify(1),
       virt-sysprep(1), guestfs(3), guestfish(1), qemu-img(1), virt-v2v-copy-to-local(1),
       virt-v2v-test-harness(1), engine-image-uploader(8).


       Richard W.M. Jones

       Matthew Booth

       Mike Latimer

       Pino Toscano

       Shahar Havivi

       Tingting Zheng


       Copyright (C) 2009-2017 Red Hat Inc.



       To get a list of existing bugs against libguestfs, or to report a new bug, use the
       upstream libguestfs bug tracker.

       When reporting a bug, please supply:

       ·   The version of libguestfs.

       ·   Where you got libguestfs (eg. which Linux distro, compiled from source, etc)

       ·   Describe the bug accurately and give a way to reproduce it.

       ·   Run libguestfs-test-tool(1) and paste the complete, unedited output into the bug
           report.