
Oracle VM Server Reboots or Crashes After Starting a VM at drivers/block/xen-blkback/blkback.c:1272 (Doc ID 2500891.1)

Last updated on APRIL 21, 2023

Applies to:

Oracle VM - Version 3.4.5 and later
Information in this document applies to any platform.

Symptoms

The OVS server crashes when trying to start a VM, with the following messages collected in /var/log/messages and from the vmcore (a triage sketch follows the log excerpts below):

/var/log/messages of OVS-DRV
----------------------------
OVS kernel: [ 9380.196246] o2dlm: Node 1 joins domain ovm ( 0 1 ) 2 nodes
OVS kernel: [ 9600.696097] o2net: Connection to node OVS1 (num 1) at 10.11.12.X:7777 has been idle for 60.37 secs.
OVS kernel: [ 9660.855754] o2net: Connection to node OVS1 (num 1) at 10.11.12.X:7777 has been idle for 60.161 secs.
OVS kernel: [ 9721.013399] o2net: Connection to node OVS1 (num 1) at 10.11.12.X:7777 has been idle for 60.159 secs.
OVS kernel: [ 9726.693271] (kworker/15:3,2392,15):o2quo_make_decision:170 not fencing this node, heartbeating: 2, connected: 1, lowest: 0 (reachable)
OVS kernel: [ 9745.284180] INFO: task python:32765 blocked for more than 120 seconds.
OVS kernel: [ 9745.284412] Not tainted 4.1.12-124.23.2.el6uek.x86_64 #2
OVS kernel: [ 9745.284656] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
OVS kernel: [ 9745.285147] python D ffff880187d18480 0 32765 1 0x00000080
OVS kernel: [ 9745.285162] ffff8801487bb668 0000000000000286 ffff8801487bb748 ffff8801819f9c00
OVS kernel: [ 9745.285165] ffff8801487bb688 ffff8801487bc000 ffff8801487bb7a8 ffff8801487bb780
OVS kernel: [ 9745.285167] ffffffffa0682bd8 ffff8801487bb798 ffff8801487bb688 ffffffff816ec5c7
OVS kernel: [ 9745.285170] Call Trace:
OVS kernel: [ 9745.285185] [<ffffffff816ec5c7>] schedule+0x37/0x90
OVS kernel: [ 9745.285208] [<ffffffffa0677cbd>] o2net_send_message_vec+0x64d/0xb20 [ocfs2_nodemanager]
OVS kernel: [ 9745.285215] [<ffffffff810cbe28>] ? __wake_up+0x48/0x60
OVS kernel: [ 9745.285222] [<ffffffff810bfb6a>] ? update_curr+0xea/0x190
OVS kernel: [ 9745.285226] [<ffffffff810cc2a0>] ? wait_woken+0x90/0x90
OVS kernel: [ 9745.285229] [<ffffffff816ebfbe>] ? __schedule+0x23e/0x810
OVS kernel: [ 9745.285231] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [ 9745.285235] [<ffffffffa06781b9>] o2net_send_message+0x29/0x30 [ocfs2_nodemanager]
OVS kernel: [ 9745.285241] [<ffffffffa06c706b>] dlm_do_master_request.isra.17+0x10b/0x780 [ocfs2_dlm]
OVS kernel: [ 9745.285244] [<ffffffff810f375a>] ? hrtimer_try_to_cancel+0x4a/0x110
OVS kernel: [ 9745.285248] [<ffffffffa06cb0a0>] dlm_get_lock_resource+0xb50/0x1230 [ocfs2_dlm]
OVS kernel: [ 9745.285283] [<ffffffffa0751cd2>] ? ocfs2_should_refresh_lock_res+0xe2/0x170 [ocfs2]
OVS kernel: [ 9745.285291] [<ffffffffa06d4253>] ? dlm_new_lock+0x33/0x140 [ocfs2_dlm]
OVS kernel: [ 9745.285311] [<ffffffffa05a1000>] ? 0xffffffffa05a1000
OVS kernel: [ 9745.285315] [<ffffffffa06d48bd>] dlmlock+0x55d/0x1720 [ocfs2_dlm]
OVS kernel: [ 9745.285318] [<ffffffffa05a1020>] ? o2dlm_lock_ast_wrapper+0x20/0x20 [ocfs2_stack_o2cb]
OVS kernel: [ 9745.285327] [<ffffffffa0750e42>] ? ocfs2_lock_res_init_common.isra.20+0x42/0xa0 [ocfs2]
OVS kernel: [ 9745.285336] [<ffffffffa07533e2>] ? ocfs2_file_lock_res_init+0x72/0x90 [ocfs2]
OVS kernel: [ 9745.285339] [<ffffffffa05a1341>] o2cb_dlm_lock+0x61/0x90 [ocfs2_stack_o2cb]
OVS kernel: [ 9745.285341] [<ffffffffa05a1000>] ? 0xffffffffa05a1000
OVS kernel: [ 9745.285343] [<ffffffffa05a1020>] ? o2dlm_lock_ast_wrapper+0x20/0x20 [ocfs2_stack_o2cb]
OVS kernel: [ 9745.285346] [<ffffffffa060b3bb>] ocfs2_dlm_lock+0x2b/0x50 [ocfs2_stackglue]
OVS kernel: [ 9745.285365] [<ffffffffa07547d8>] ocfs2_lock_create+0xa8/0x2f0 [ocfs2]
OVS kernel: [ 9745.285377] [<ffffffffa0751cd2>] ? ocfs2_should_refresh_lock_res+0xe2/0x170 [ocfs2]
OVS kernel: [ 9745.285384] [<ffffffff810209e9>] ? read_tsc+0x9/0x10
OVS kernel: [ 9745.285393] [<ffffffffa0754f04>] ocfs2_file_lock+0x114/0x6a0 [ocfs2]
OVS kernel: [ 9745.285400] [<ffffffffa0752b3e>] ? ocfs2_inode_lock_update+0x7e/0x640 [ocfs2]
OVS kernel: [ 9745.285403] [<ffffffff816f1268>] ? _raw_spin_lock_irqsave+0x28/0x110
OVS kernel: [ 9745.285414] [<ffffffffa0772b6c>] ocfs2_do_flock.isra.4+0xbc/0x1b0 [ocfs2]
OVS kernel: [ 9745.285422] [<ffffffff81124152>] ? from_kgid_munged+0x12/0x20
OVS kernel: [ 9745.285426] [<ffffffff81210db4>] ? cp_new_stat+0x144/0x160
OVS kernel: [ 9745.285430] [<ffffffff811eb602>] ? kmem_cache_alloc+0x222/0x250
OVS kernel: [ 9745.285439] [<ffffffffa0772cc0>] ocfs2_flock+0x60/0xe0 [ocfs2]
OVS kernel: [ 9745.285445] [<ffffffff8125f156>] ? locks_alloc_lock+0x66/0x70
OVS kernel: [ 9745.285447] [<ffffffff812620f0>] SyS_flock+0x110/0x1b0
OVS kernel: [ 9745.285450] [<ffffffff816f1b9f>] ? system_call_after_swapgs+0xe9/0x190
OVS kernel: [ 9745.285454] [<ffffffff816f1b98>] ? system_call_after_swapgs+0xe2/0x190
OVS kernel: [ 9745.285460] [<ffffffff816f1b91>] ? system_call_after_swapgs+0xdb/0x190
OVS kernel: [ 9745.285463] [<ffffffff816f1c5e>] system_call_fastpath+0x18/0xd8
OVS kernel: [ 9781.172046] o2net: Connection to node OVS1 (num 1) at 10.11.12.X:7777 has been idle for 60.160 secs.
OVS kernel: [ 9841.330652] o2net: Connection to node OVS1 (num 1) at 10.11.12.X:7777 has been idle for 60.160 secs.
OVS kernel: [ 9849.912947] o2net: No longer connected to node OVS1 (num 1) at 10.11.12.X:7777
OVS kernel: [ 9849.913019] (python,32765,12):dlm_do_master_request:1375 ERROR: link to 1 went down!
OVS kernel: [ 9849.913060] o2cb: o2dlm has evicted node 1 from domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [ 9849.913287] (python,32765,12):dlm_get_lock_resource:960 ERROR: status = -112
OVS kernel: [ 9849.913320] o2cb: o2dlm has evicted node 1 from domain 0004FB00000500008E8F542DC8E1XXXX
OVS kernel: [ 9849.913563] (python,32765,12):dlm_restart_lock_mastery:1264 ERROR: node down! 1
OVS kernel: [ 9849.913662] o2cb: o2dlm has evicted node 1 from domain ovm
OVS kernel: [ 9849.914004] (python,32765,12):dlm_wait_for_lock_mastery:1081 ERROR: status = -11
OVS kernel: [ 9850.914465] o2dlm: Waiting on the recovery of node 1 in domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [ 9850.914466] o2dlm: Waiting on the recovery of node 1 in domain 0004FB00000500008E8F542DC8E1XXXX
OVS kernel: [ 9850.915504] o2dlm: Waiting on the recovery of node 1 in domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [ 9853.354414] o2dlm: Begin recovery on domain 0004FB0000050000C9C951690F08XXXX for node 1
OVS kernel: [ 9853.354436] o2dlm: Node 0 (me) is the Recovery Master for the dead node 1 in domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [14305.183790] kworker/u40:2 D ffff880187a18480 0 24626 2 0x00000080
OVS kernel: [14305.183836] Workqueue: ocfs2_wq ocfs2_orphan_scan_work [ocfs2]
OVS kernel: [14305.183845] ffff88013b807ae8 0000000000000246 ffff88013b807aa8 ffff88017cf98e00
OVS kernel: [14305.183848] ffff88016366dcc8 ffff88013b808000 ffff88013b807cb0 7fffffffffffffff
OVS kernel: [14305.183850] ffff88017cf98e00 ffff88017cb09898 ffff88013b807b08 ffffffff816ec5c7
OVS kernel: [14305.183852] Call Trace:
OVS kernel: [14305.183860] [<ffffffff816ec5c7>] schedule+0x37/0x90
OVS kernel: [14305.183863] [<ffffffff816efe0c>] schedule_timeout+0x24c/0x2d0
OVS kernel: [14305.183866] [<ffffffff816edd1c>] wait_for_completion+0x11c/0x180
OVS kernel: [14305.183872] [<ffffffff810b4e10>] ? wake_up_state+0x20/0x20
OVS kernel: [14305.183881] [<ffffffffa0756d56>] __ocfs2_cluster_lock.isra.36+0x336/0x9e0 [ocfs2]
OVS kernel: [14305.183886] [<ffffffff81199461>] ? bdi_dirty_limit+0x31/0xc0
OVS kernel: [14305.183891] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183905] [<ffffffffa0757901>] ocfs2_orphan_scan_lock+0x81/0xe0 [ocfs2]
OVS kernel: [14305.183912] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183922] [<ffffffffa076e49c>] ocfs2_queue_orphan_scan+0x5c/0x2a0 [ocfs2]
OVS kernel: [14305.183924] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183926] [<ffffffff816ebfbe>] ? __schedule+0x23e/0x810
OVS kernel: [14305.183928] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183930] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183939] [<ffffffffa076e713>] ocfs2_orphan_scan_work+0x33/0xb0 [ocfs2]
OVS kernel: [14305.183947] [<ffffffff810a02f9>] process_one_work+0x169/0x4a0
OVS kernel: [14305.183953] [<ffffffff810a0b2b>] worker_thread+0x5b/0x560
OVS kernel: [14305.183955] [<ffffffff810a0ad0>] ? flush_delayed_work+0x50/0x50
OVS kernel: [14305.183958] [<ffffffff810a68fb>] kthread+0xcb/0xf0
OVS kernel: [14305.183960] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183962] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14305.183964] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
OVS kernel: [14305.183967] [<ffffffff816f20c1>] ret_from_fork+0x61/0x90
OVS kernel: [14305.183969] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
OVS kernel: [14425.181119] INFO: task kworker/u40:2:24626 blocked for more than 120 seconds.
OVS kernel: [14425.181353] Not tainted 4.1.12-124.23.2.el6uek.x86_64 #2
OVS kernel: [14425.181577] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
OVS kernel: [14425.182067] kworker/u40:2 D ffff880187a18480 0 24626 2 0x00000080
OVS kernel: [14425.182114] Workqueue: ocfs2_wq ocfs2_orphan_scan_work [ocfs2]
OVS kernel: [14425.182121] ffff88013b807ae8 0000000000000246 ffff88013b807aa8 ffff88017cf98e00
OVS kernel: [14425.182124] ffff88016366dcc8 ffff88013b808000 ffff88013b807cb0 7fffffffffffffff
OVS kernel: [14425.182126] ffff88017cf98e00 ffff88017cb09898 ffff88013b807b08 ffffffff816ec5c7
OVS kernel: [14425.182128] Call Trace:
OVS kernel: [14425.182137] [<ffffffff816ec5c7>] schedule+0x37/0x90
OVS kernel: [14425.182139] [<ffffffff816efe0c>] schedule_timeout+0x24c/0x2d0
OVS kernel: [14425.182142] [<ffffffff816edd1c>] wait_for_completion+0x11c/0x180
OVS kernel: [14425.182146] [<ffffffff810b4e10>] ? wake_up_state+0x20/0x20
OVS kernel: [14425.182168] [<ffffffffa0756d56>] __ocfs2_cluster_lock.isra.36+0x336/0x9e0 [ocfs2]
OVS kernel: [14425.182172] [<ffffffff81199461>] ? bdi_dirty_limit+0x31/0xc0
OVS kernel: [14425.182179] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182192] [<ffffffffa0757901>] ocfs2_orphan_scan_lock+0x81/0xe0 [ocfs2]
OVS kernel: [14425.182197] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182207] [<ffffffffa076e49c>] ocfs2_queue_orphan_scan+0x5c/0x2a0 [ocfs2]
OVS kernel: [14425.182209] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182211] [<ffffffff816ebfbe>] ? __schedule+0x23e/0x810
OVS kernel: [14425.182213] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182215] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182228] [<ffffffffa076e713>] ocfs2_orphan_scan_work+0x33/0xb0 [ocfs2]
OVS kernel: [14425.182234] [<ffffffff810a02f9>] process_one_work+0x169/0x4a0
OVS kernel: [14425.182237] [<ffffffff810a0b2b>] worker_thread+0x5b/0x560
OVS kernel: [14425.182239] [<ffffffff810a0ad0>] ? flush_delayed_work+0x50/0x50
OVS kernel: [14425.182242] [<ffffffff810a68fb>] kthread+0xcb/0xf0
OVS kernel: [14425.182244] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182246] [<ffffffff816ebfca>] ? __schedule+0x24a/0x810
OVS kernel: [14425.182248] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
OVS kernel: [14425.182251] [<ffffffff816f20c1>] ret_from_fork+0x61/0x90
OVS kernel: [14425.182253] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
OVS kernel: [14545.179402] INFO: task kworker/u40:2:24626 blocked for more than 120 seconds.
OVS kernel: [14545.179673] Not tainted 4.1.12-124.23.2.el6uek.x86_64 #2
OVS kernel: [14545.179983] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

OVS kernel: [253356.417924] o2cb: o2dlm has evicted node 1 from domain ovm

OVS kernel: [253356.430184] ocfs2: Begin replay journal (node 1, slot 1) on device (249,3)
OVS kernel: [253356.433271] ocfs2: Begin replay journal (node 1, slot 1) on device (249,22)
OVS kernel: [253356.437377] ocfs2: End replay journal (node 1, slot 1) on device (249,22)
OVS kernel: [253356.440010] ocfs2: End replay journal (node 1, slot 1) on device (249,3)
OVS kernel: [253356.463134] ocfs2: Beginning quota recovery on device (249,3) for slot 1
OVS kernel: [253356.465408] ocfs2: Beginning quota recovery on device (249,22) for slot 1
OVS kernel: [253356.472054] ocfs2: Finishing quota recovery on device (249,3) for slot 1
OVS kernel: [253356.480081] ocfs2: Finishing quota recovery on device (249,22) for slot 1
OVS kernel: [253357.239730] (python,32765,16):dlm_restart_lock_mastery:1264 ERROR: node down! 1
OVS kernel: [253357.239982] (python,32765,16):dlm_wait_for_lock_mastery:1081 ERROR: status = -11
OVS kernel: [253358.131715] o2dlm: Begin recovery on domain 0004FB0000050000C9C951690F08XXXX for node 1
OVS kernel: [253358.131734] o2dlm: Node 0 (me) is the Recovery Master for the dead node 1 in domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [253358.131781] o2dlm: End recovery on domain 0004FB0000050000C9C951690F08XXXX
OVS kernel: [253358.283729] o2dlm: Begin recovery on domain 0004FB00000500008E8F542DC8E1XXXX for node 1
OVS kernel: [253358.283746] o2dlm: Node 0 (me) is the Recovery Master for the dead node 1 in domain 0004FB00000500008E8F542DC8E1XXXX
OVS kernel: [253358.283852] o2dlm: End recovery on domain 0004FB00000500008E8F542DC8E1XXXX
OVS kernel: [253359.349697] o2dlm: Begin recovery on domain ovm for node 1
OVS kernel: [253359.349724] o2dlm: Node 0 (me) is the Recovery Master for the dead node 1 in domain ovm
OVS kernel: [253359.349791] o2dlm: End recovery on domain ovm
OVS kernel: [345229.355019] blk_update_request: I/O error, dev loop0, sector 0
OVS kernel: [345229.378320] blk_update_request: I/O error, dev loop0, sector 0
OVS kernel: [345346.050665] blk_update_request: I/O error, dev loop0, sector 0
OVS kernel: [345346.056131] blk_update_request: I/O error, dev loop0, sector 0
OVS kernel: [345381.569701] nr_pdflush_threads exported in /proc is scheduled for removal

dmesg output from the vmcore (crash utility):
---------------------------------------------
[ 1707.661926] Stack:
[ 1707.662188] ffff8801786abd78 ffff8801786abd78 ffff88015d03ca60 ffff88015d03ca70
[ 1707.662898] ffff880187b51880 ffff880187b51880 ffff8801786abdf8 ffffffff816efc49
[ 1707.663626] 00000000ca100104 0000000000000000 0000000000000000 00000700000000a8
[ 1707.664340] Call Trace:
[ 1707.664610] [<ffffffff816efc49>] ? schedule_timeout+0x169/0x2d0
[ 1707.664887] [<ffffffffa065dbd8>] xen_blkif_schedule+0x118/0x860 [xen_blkback]
[ 1707.665371] [<ffffffff816ebeea>] ? __schedule+0x24a/0x810
[ 1707.665649] [<ffffffff810afae1>] ? finish_task_switch+0x81/0x1d0
[ 1707.665926] [<ffffffff816ebeea>] ? __schedule+0x24a/0x810
[ 1707.666206] [<ffffffff810cc2a0>] ? wait_woken+0x90/0x90
[ 1707.666494] [<ffffffffa065dac0>] ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
[ 1707.666788] [<ffffffff810a68fb>] kthread+0xcb/0xf0
[ 1707.667059] [<ffffffff816ebeea>] ? __schedule+0x24a/0x810
[ 1707.667333] [<ffffffff816ebeea>] ? __schedule+0x24a/0x810
[ 1707.667608] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
[ 1707.667886] [<ffffffff816f2081>] ret_from_fork+0x61/0x90
[ 1707.668162] [<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
[ 1707.668439] Code: 47 08 41 3b 47 10 0f 82 b5 fd ff ff 0f 1f 00 31 c0 48 81 c4 b8 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 1f 40 00 3c 06 74 b1 <0f> 0b 66 2e 0f 1f 84 00 00 00 00 00 80 f9 05 41 be 01 00 00 00
[ 1707.672956] RIP [<ffffffffa065d204>] __do_block_io_op+0x2d4/0x8d0 [xen_blkback]
[ 1707.673492] RSP <ffff8801786abd18>
[ 1707.673775] ---[ end trace 5092349ce73915eb ]---
[ 1707.674073] Kernel panic - not syncing: Fatal exception
[ 1707.674438] Kernel Offset: disabled
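
For quick triage on the affected OVS server, the minimal Python sketch below (a hypothetical helper, not part of this Doc ID) counts the message signatures shown in the excerpts above in a saved copy of /var/log/messages or in dmesg output captured from the vmcore. The script name, the default log path, and the signature labels are illustrative assumptions.

triage_scan.py (hypothetical helper)
------------------------------------
#!/usr/bin/env python3
# Hypothetical triage helper -- not part of this Doc ID.  It only counts
# the message signatures shown in the excerpts above; the file name and
# pattern labels are assumptions for illustration.
import re
import sys

SIGNATURES = {
    "o2net idle / disconnect":    re.compile(r"o2net: (Connection to node .* has been idle"
                                              r"|No longer connected to node)"),
    "hung task (> 120 seconds)":  re.compile(r"INFO: task .* blocked for more than 120 seconds"),
    "o2dlm eviction / recovery":  re.compile(r"o2cb: o2dlm has evicted node"
                                              r"|o2dlm: Begin recovery on domain"),
    "loop device I/O error":      re.compile(r"blk_update_request: I/O error, dev loop"),
    "xen-blkback BUG / panic":    re.compile(r"__do_block_io_op"
                                              r"|Kernel panic - not syncing"),
}

def scan(path):
    """Return a {signature name: match count} map for one log file."""
    hits = dict.fromkeys(SIGNATURES, 0)
    with open(path, errors="replace") as log:
        for line in log:
            for name, pattern in SIGNATURES.items():
                if pattern.search(line):
                    hits[name] += 1
    return hits

if __name__ == "__main__":
    # Default to /var/log/messages; a saved dmesg from the vmcore also works.
    logfile = sys.argv[1] if len(sys.argv) > 1 else "/var/log/messages"
    for name, count in scan(logfile).items():
        print("%-28s %d line(s)" % (name, count))

For the log excerpts above, every signature listed would report at least one match.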

Changes

 No Changes.

Cause



