OLVM: AdminPortal Events Reports Lots of "Sanlock resource read failure, IO timeout" Errors (Doc ID 2760850.1)

Last updated on MARCH 17, 2021

Applies to:

Linux OS - Version Oracle Linux 7.7 with Unbreakable Enterprise Kernel [4.14.35] and later
Linux x86-64

Symptoms

The Administration Portal 'Events' tab reports many errors like the following:


engine.log:

2021-03-11 05:30:06,053+09 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [] Failed in 'SpmStatusVDS' method
2021-03-11 05:30:06,055+09 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), 
VDSM <Host> command SpmStatusVDS failed: (-202, 'Sanlock resource read failure', 'IO timeout')
2021-03-11 05:30:06,055+09 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [] Command 'SpmStatusVDSCommand(HostName = <Host>, 
SpmStatusVDSCommandParameters:{hostId='9c362c2c-26c3-44b0-828e-d6636ef724a2', storagePoolId='ca6ebe08-21c8-4374-b090-46010964619f'})' execution failed: VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, 
error = (-202, 'Sanlock resource read failure', 'IO timeout'), code = 100
2021-03-11 08:44:07,443+09 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [] Failed in 'SpmStatusVDS' method
2021-03-11 08:44:07,445+09 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), 
VDSM <Host> command SpmStatusVDS failed: (-202, 'Unable to read resource owners', 'IO timeout')
2021-03-11 08:44:07,445+09 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [] Command 'SpmStatusVDSCommand(HostName = <Host>, 
SpmStatusVDSCommandParameters:{hostId='9c362c2c-26c3-44b0-828e-d6636ef724a2', storagePoolId='ca6ebe08-21c8-4374-b090-46010964619f'})' execution failed: VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, 
error = (-202, 'Unable to read resource owners', 'IO timeout'), code = 100

vdsm.log:

2021-03-11 05:30:06,052+0900 INFO (jsonrpc/7) [vdsm.api] FINISH getSpmStatus error=(-202, 'Sanlock resource read failure', 'IO timeout') from=::ffff:10.9.168.59,46524, task_id=1d93e28f-e683-4b4f-89cd-3f1b5cfe568a (api:52)
2021-03-11 05:30:06,053+0900 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='1d93e28f-e683-4b4f-89cd-3f1b5cfe568a') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in getSpmStatus
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 635, in getSpmStatus
    status = self._getSpmStatusInfo(pool)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 629, in _getSpmStatusInfo
    (pool.spmRole,) + pool.getSpmStatus()))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 140, in getSpmStatus
    return self._backend.getSpmStatus()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/spbackends.py", line 436, in getSpmStatus
    lVer, spmId = self.masterDomain.inquireClusterLock()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 984, in inquireClusterLock
    return self._manifest.inquireDomainLock()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 574, in inquireDomainLock
    return self._domainLock.inquire(self.getDomainLease())
  File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 451, in inquire
    sector=self._block_size)
SanlockException: (-202, 'Sanlock resource read failure', 'IO timeout')

/var/log/messages:

Mar 11 05:30:04 <Host> iscsid: iscsid: Kernel reported iSCSI connection 1:0 error (1022 - ISCSI_ERR_NOP_TIMEDOUT: A NOP has timed out) state (3)
Mar 11 05:30:10 <Host> kernel: device-mapper: multipath: Failing path 8:16.
Mar 11 05:30:10 <Host> kernel: device-mapper: multipath: Failing path 8:32.
Mar 11 05:30:10 <Host> kernel: device-mapper: multipath: Failing path 8:64.
Mar 11 05:30:10 <Host> kernel: device-mapper: multipath: Failing path 8:80.
Mar 11 05:30:10 <Host> kernel: device-mapper: multipath: Failing path 8:160.
....
Mar 11 05:30:10 <Host> multipathd: checker failed path 65:48 in map 360060e8010141790058b414900000012
Mar 11 05:30:10 <Host> multipathd: checker failed path 65:80 in map 360060e8010141790058b414900000014
Mar 11 05:30:10 <Host> multipathd: checker failed path 65:112 in map 360060e8010141790058b414900000016
Mar 11 05:30:10 <Host> multipathd: checker failed path 65:144 in map 360060e8010141790058b414900000018
Mar 11 05:30:10 <Host> multipathd: checker failed path 65:176 in map 360060e8010141790058b41490000001a
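The sanlock errors in engine.log are a downstream effect of the failed iSCSI/multipath storage paths shown above, so correlating their timestamps with the path-failure events in /var/log/messages usually confirms the pattern. A minimal sketch of that tally (the inlined sample lines are illustrative; on a real engine host, point LOG at /var/log/ovirt-engine/engine.log instead):

```shell
#!/bin/sh
# Count 'IO timeout' sanlock errors per hour in engine.log so the hours
# can be matched against multipath/iscsid events in /var/log/messages.
# Illustrative stand-in data; on an engine host use:
#   LOG=/var/log/ovirt-engine/engine.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2021-03-11 05:30:06,055+09 ERROR ... 'Sanlock resource read failure', 'IO timeout'
2021-03-11 08:44:07,445+09 ERROR ... 'Unable to read resource owners', 'IO timeout'
2021-03-11 08:44:07,445+09 ERROR ... 'Unable to read resource owners', 'IO timeout'
EOF
# cut -c1-13 keeps 'YYYY-MM-DD HH', so uniq -c groups the errors by hour.
RESULT=$(grep 'IO timeout' "$LOG" | cut -c1-13 | sort | uniq -c)
echo "$RESULT"
rm -f "$LOG"
```

A cluster of errors in a single hour that lines up with "checker failed path" messages points at a storage-connectivity event rather than a sanlock problem itself.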


Cause


