
OLVM - Unable to Move a Virtual Disk for A Running Virtual Machine between Block Based Storage Domains (Doc ID 2725556.1)

Last updated on SEPTEMBER 06, 2024

Applies to:

Linux OS - Version Oracle Linux 7.8 with Unbreakable Enterprise Kernel [4.14.35] and later
Linux x86-64

Symptoms

Migrating a running VM's virtual disk (live storage migration) between two Fibre Channel (FC) block-based storage domains fails with the errors below in engine.log and vdsm.log.

engine host / engine.log
=========================

2020-10-09 12:23:15,786+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] START, VmReplicateDiskStartVDSCommand(HostName = <hostname>, VmReplicateDiskParameters:{hostId='8eb46e18-2726-44f4-bafe-b8188ed7efcf', vmId='21a53a1c-eaa7-43c2-90ee-36439a5aca2f', storagePoolId='05232fba-fc82-11ea-be6f-00163e2b470c', srcStorageDomainId='2d5cb33a-9be2-4137-a647-7c94d8d5e554', targetStorageDomainId='1b0ad66b-81f0-4e6e-b157-74bc717f0611', imageGroupId='e9d13176-7046-4c09-8b63-2ce9536ea35d', imageId='15bce295-9e22-400b-884f-9591cd9dcb05'}), log id: 730753aa
2020-10-09 12:23:16,489+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Failed in 'VmReplicateDiskStartVDS' method
2020-10-09 12:23:16,491+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM <hostname> command VmReplicateDiskStartVDS failed: Drive replication error
2020-10-09 12:23:16,491+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand' return value 'StatusOnlyReturn [status=Status [code=55, message=Drive replication error]]'
2020-10-09 12:23:16,491+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] HostName = <hostname>
2020-10-09 12:23:16,491+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Command 'VmReplicateDiskStartVDSCommand(HostName = <hostname>, VmReplicateDiskParameters:{hostId='8eb46e18-2726-44f4-bafe-b8188ed7efcf', vmId='21a53a1c-eaa7-43c2-90ee-36439a5aca2f', storagePoolId='05232fba-fc82-11ea-be6f-00163e2b470c', srcStorageDomainId='2d5cb33a-9be2-4137-a647-7c94d8d5e554', targetStorageDomainId='1b0ad66b-81f0-4e6e-b157-74bc717f0611', imageGroupId='e9d13176-7046-4c09-8b63-2ce9536ea35d', imageId='15bce295-9e22-400b-884f-9591cd9dcb05'})' execution failed: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskStartVDS, error = Drive replication error, code = 55
2020-10-09 12:23:16,491+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] FINISH, VmReplicateDiskStartVDSCommand, return: , log id: 730753aa
2020-10-09 12:23:16,491+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Failed VmReplicateDiskStart (Disk 'e9d13176-7046-4c09-8b63-2ce9536ea35d' , VM '21a53a1c-eaa7-43c2-90ee-36439a5aca2f')
2020-10-09 12:23:16,491+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Command 'LiveMigrateDisk' id: 'd3274004-be23-4fe5-aec3-c1ffce02a1e5' with children [882eb327-7094-4cb8-9976-9548f6e9765e, 9b0140b0-f72b-41a6-ae5c-a9db9e417350] failed when attempting to perform the next operation, marking as 'ACTIVE'
2020-10-09 12:23:16,491+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] EngineException: Drive replication error (Failed with error replicaErr and code 55): org.ovirt.engine.core.common.errors.EngineException: EngineException: Drive replication error (Failed with error replicaErr and code 55)
   at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.replicateDiskStart(LiveMigrateDiskCommand.java:526) [bll.jar:]
   at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.performNextOperation(LiveMigrateDiskCommand.java:233) [bll.jar:]
   at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
   at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77) [bll.jar:]
   at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
   at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_262]
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_262]
   at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
   at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_262]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_262]
   at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_262]
   at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]

...
2020-10-09 12:23:16,492+02 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Command 'LiveMigrateDisk' id: 'd3274004-be23-4fe5-aec3-c1ffce02a1e5' child commands '[882eb327-7094-4cb8-9976-9548f6e9765e, 9b0140b0-f72b-41a6-ae5c-a9db9e417350]' executions were completed, status 'FAILED'
2020-10-09 12:23:17,512+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Ending command 'org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand' with failure.
2020-10-09 12:23:17,513+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [f95d68ec-96a4-4cd3-9a22-f83ea7373bb7] Failed during live storage migration of disk 'e9d13176-7046-4c09-8b63-2ce9536ea35d' of vm '21a53a1c-eaa7-43c2-90ee-36439a5aca2f', attempting to end replication before deleting the target disk
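
Both logs tag every entry belonging to this operation with the same correlation (flow) id, f95d68ec-96a4-4cd3-9a22-f83ea7373bb7 in this example, which is how the engine-side failure above is matched to the host-side traceback below. As an illustration only (not part of the original diagnostics), a minimal Python sketch that collects the lines carrying that id; the log paths are the default OLVM/oVirt locations and are normally read on two different hosts:

# Collect all engine.log / vdsm.log lines tagged with one correlation (flow) id.
# The id below is the one from this example; adjust it and the paths as needed.
FLOW_ID = 'f95d68ec-96a4-4cd3-9a22-f83ea7373bb7'
LOG_FILES = [
    '/var/log/ovirt-engine/engine.log',   # on the engine host
    '/var/log/vdsm/vdsm.log',             # on the KVM host running the VM
]

for path in LOG_FILES:
    try:
        with open(path, errors='replace') as f:
            for line in f:
                if FLOW_ID in line:
                    print(path + ': ' + line.rstrip())
    except FileNotFoundError:
        # engine.log and vdsm.log live on different hosts, so one of the
        # two paths is usually missing on any single machine.
        pass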

 

source KVM host / vdsm.log
=================================

2020-10-09 12:23:17,686+0200 ERROR (jsonrpc/1) [virt.vm] (vmId='21a53a1c-eaa7-43c2-90ee-36439a5aca2f') Unable to start replication for sda to {u'domainID': u'1b0ad66b-81f0-4e6e-b157-74bc717f0611', 'volumeInfo': {'path': u'/rhev/data-center/mnt/blockSD/1b0ad66b-81f0-4e6e-b157-74bc717f0611/images/e9d13176-7046-4c09-8b63-2ce9536ea35d/15bce295-9e22-400b-884f-9591cd9dcb05', 'type': 'block'}, 'diskType': 'block', 'format': 'cow', 'cache': 'none', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'device': 'disk', 'path': u'/rhev/data-center/mnt/blockSD/1b0ad66b-81f0-4e6e-b157-74bc717f0611/images/e9d13176-7046-4c09-8b63-2ce9536ea35d/15bce295-9e22-400b-884f-9591cd9dcb05', 'propagateErrors': 'off', 'volumeChain': [{'domainID': u'1b0ad66b-81f0-4e6e-b157-74bc717f0611', 'leaseOffset': 108003328, 'path': u'/rhev/data-center/mnt/blockSD/1b0ad66b-81f0-4e6e-b157-74bc717f0611/images/e9d13176-7046-4c09-8b63-2ce9536ea35d/1f2d6e18-a49b-4ffa-802a-fa4cfc90860e', 'volumeID': '1f2d6e18-a49b-4ffa-802a-fa4cfc90860e', 'leasePath': '/dev/1b0ad66b-81f0-4e6e-b157-74bc717f0611/leases', 'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}, {'domainID': u'1b0ad66b-81f0-4e6e-b157-74bc717f0611', 'leaseOffset': 109051904, 'path': u'/rhev/data-center/mnt/blockSD/1b0ad66b-81f0-4e6e-b157-74bc717f0611/images/e9d13176-7046-4c09-8b63-2ce9536ea35d/15bce295-9e22-400b-884f-9591cd9dcb05', 'volumeID': '15bce295-9e22-400b-884f-9591cd9dcb05', 'leasePath': '/dev/1b0ad66b-81f0-4e6e-b157-74bc717f0611/leases', 'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}]} (vm:4600)
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4594, in diskReplicateStart
   self._startDriveReplication(drive)
 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4727, in _startDriveReplication
   self._dom.blockCopy(drive.name, destxml, flags=flags)
 File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f
   ret = attr(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
   ret = f(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
   return func(inst, *args, **kwargs)
 File "/usr/lib64/python2.7/site-packages/libvirt.py", line 729, in blockCopy
   if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
libvirtError: unable to verify existence of block copy target: Function not implemented
2020-10-09 12:23:17,690+0200 INFO  (jsonrpc/1) [api.virt] FINISH diskReplicateStart return={'status': {'message': 'Drive replication error', 'code': 55}} from=::ffff:10.236.16.132,48600, flow_id=f95d68ec-96a4-4cd3-9a22-f83ea7373bb7, vmId=21a53a1c-eaa7-43c2-90ee-36439a5aca2f (api:54)
...

2020-10-09 12:23:18,716+0200 INFO  (jsonrpc/6) [api.virt] START diskReplicateFinish(srcDisk={u'device': u'disk', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'domainID': u'2d5cb33a-9be2-4137-a647-7c94d8d5e554', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}, dstDisk={u'device': u'disk', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'domainID': u'2d5cb33a-9be2-4137-a647-7c94d8d5e554', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}) from=::ffff:10.236.16.132,48600, flow_id=f95d68ec-96a4-4cd3-9a22-f83ea7373bb7, vmId=21a53a1c-eaa7-43c2-90ee-36439a5aca2f (api:48)
2020-10-09 12:23:18,716+0200 ERROR (jsonrpc/6) [api] FINISH diskReplicateFinish error=Replication not in progress.: {'driveName': 'sda', 'vmId': '21a53a1c-eaa7-43c2-90ee-36439a5aca2f', 'srcDisk': {u'device': u'disk', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'domainID': u'2d5cb33a-9be2-4137-a647-7c94d8d5e554', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}} (api:131)
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in method
   ret = func(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 580, in diskReplicateFinish
   return self.vm.diskReplicateFinish(srcDisk, dstDisk)
 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4636, in diskReplicateFinish
   srcDisk=srcDisk)
ReplicationNotInProgress: Replication not in progress.: {'driveName': 'sda', 'vmId': '21a53a1c-eaa7-43c2-90ee-36439a5aca2f', 'srcDisk': {u'device': u'disk', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'domainID': u'2d5cb33a-9be2-4137-a647-7c94d8d5e554', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}}
2020-10-09 12:23:18,717+0200 INFO  (jsonrpc/6) [api.virt] FINISH diskReplicateFinish return={'status': {'message': "Replication not in progress.: {'driveName': 'sda', 'vmId': '21a53a1c-eaa7-43c2-90ee-36439a5aca2f', 'srcDisk': {u'device': u'disk', u'poolID': u'05232fba-fc82-11ea-be6f-00163e2b470c', u'volumeID': u'15bce295-9e22-400b-884f-9591cd9dcb05', u'domainID': u'2d5cb33a-9be2-4137-a647-7c94d8d5e554', u'imageID': u'e9d13176-7046-4c09-8b63-2ce9536ea35d'}}", 'code': 88}} from=::ffff:10.236.16.132,48600, flow_id=f95d68ec-96a4-4cd3-9a22-f83ea7373bb7, vmId=21a53a1c-eaa7-43c2-90ee-36439a5aca2f (api:54)
...
2020-10-09 12:23:18,717+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call VM.diskReplicateFinish failed (error 88) in 0.00 seconds (__init__:312)
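
The "Drive replication error" (code 55) reported by the engine corresponds to the libvirtError raised by virDomainBlockCopy() in the traceback above: "unable to verify existence of block copy target: Function not implemented". For reference, a minimal sketch of the kind of call vdsm's _startDriveReplication() issues through the libvirt Python bindings when replicating to a block-based target; the VM name, drive name, target path and flag combination below are placeholders and assumptions for illustration, not values recovered from vdsm:

import libvirt

conn = libvirt.open('qemu:///system')      # local hypervisor, as used by vdsm
dom = conn.lookupByName('example-vm')      # placeholder VM name

# Destination XML for a block-based copy target, analogous to the
# 'diskType': 'block', 'format': 'cow' (qcow2) entry in the vdsm log above.
# The source path is a placeholder in the /rhev/data-center/mnt/blockSD layout.
dest_xml = """
<disk type='block' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source dev='/rhev/data-center/mnt/blockSD/EXAMPLE_SD/images/EXAMPLE_IMG/EXAMPLE_VOL'/>
</disk>
"""

# Assumed flag combination for a shallow copy onto a pre-created target volume.
flags = (libvirt.VIR_DOMAIN_BLOCK_COPY_SHALLOW |
         libvirt.VIR_DOMAIN_BLOCK_COPY_REUSE_EXT)

try:
    # virDomain.blockCopy(disk, destxml, params=None, flags=0); 'sda' matches
    # the drive name reported in the vdsm log.
    dom.blockCopy('sda', dest_xml, flags=flags)
except libvirt.libvirtError as e:
    # In the failure documented above this raises:
    # "unable to verify existence of block copy target: Function not implemented"
    print('blockCopy failed:', e)

When blockCopy() fails this way, no replication job is ever started, which is why the subsequent diskReplicateFinish call returns "Replication not in progress" (code 88).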

Changes

A running virtual machine's virtual disk was being moved (live storage migration) between two block-based storage domains.
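
The move is typically started from the Administration Portal (Storage > Disks > Move) or through the REST API/SDK while the VM stays running. Purely as an illustration of the operation involved, a minimal sketch using the ovirtsdk4 Python SDK, assuming it is installed; the engine URL, credentials and target storage domain name are placeholders, and the disk id shown is the one from the logs above:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for the OLVM engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

try:
    # Disk id taken from the logs above; the target domain name is a placeholder.
    disk_service = connection.system_service().disks_service() \
        .disk_service('e9d13176-7046-4c09-8b63-2ce9536ea35d')
    # Live storage migration of the attached disk to the target block domain;
    # this is the operation that fails with "Drive replication error" above.
    disk_service.move(storage_domain=types.StorageDomain(name='TARGET_FC_DOMAIN'))
finally:
    connection.close()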

Cause


