
Oracle VM: Hang With "mlx4_ib_tunnel_comp_worker at ffffffffa0435d50 [mlx4_ib]" (Doc ID 2163699.1)

Last updated on AUGUST 04, 2018

Applies to:

Oracle VM - Version 3.2.9 and later
Exalogic Elastic Cloud X5-2 Hardware - Version X5 to X5 [Release X5]
Information in this document applies to any platform.

Symptoms

This issue occurs on Oracle Server X5-2 systems running OVM 3.2.x. The affected kernel is 2.6.39-400.277.1.el5uek. When the bug is triggered, the dom0 server hangs. In the KDUMP-generated vmcore file, entries similar to the following can be seen:

crash64> bt -t -a
PID: 12039 TASK: ffff880179ad6080 CPU: 0 COMMAND: "kworker/u:2"
START: panic at ffffffff8106f5cb
[ffff8801752b7940] panic at ffffffff8106f5cb
[ffff8801752b7960] printk at ffffffff8107089c
[ffff8801752b79f0] __atomic_notifier_call_chain at ffffffff8150de32
[ffff8801752b7a00] atomic_notifier_call_chain at ffffffff8150de56
[ffff8801752b7a10] notify_die at ffffffff8150de8e
[ffff8801752b7a40] unknown_nmi_error at ffffffff8150b1af
[ffff8801752b7a60] default_do_nmi at ffffffff8150b378
[ffff8801752b7a80] do_nmi at ffffffff8150b41e
[ffff8801752b7ab0] nmi at ffffffff8150a810
[ffff8801752b7b10] xen_hypercall_sched_op at ffffffff810013aa
[ffff8801752b7b38] xen_hypercall_sched_op at ffffffff810013aa
[ffff8801752b7b78] xen_poll_irq_timeout at ffffffff812f9a10
[ffff8801752b7bb8] xen_poll_irq at ffffffff812f9a30
[ffff8801752b7bc8] xen_spin_lock_slow at ffffffff81012a39
[ffff8801752b7c18] xen_spin_lock at ffffffff81012b0a
[ffff8801752b7c48] _raw_spin_lock at ffffffff81509d5e
[ffff8801752b7c58] schedule_delayed at ffffffffa0445def [mlx4_ib]
[ffff8801752b7c98] mlx4_ib_multiplex_cm_handler at ffffffffa04467cd [mlx4_ib]
[ffff8801752b7ce8] mlx4_ib_multiplex_mad at ffffffffa04356e4 [mlx4_ib]
[ffff8801752b7cf8] get_sw_cqe at ffffffffa04316f6 [mlx4_ib]
[ffff8801752b7d28] mlx4_ib_poll_one at ffffffffa043211d [mlx4_ib]
[ffff8801752b7d70] _raw_spin_unlock_irqrestore at ffffffff81509dfe
[ffff8801752b7d88] mlx4_ib_poll_cq at ffffffffa04329be [mlx4_ib]
[ffff8801752b7de8] mlx4_ib_tunnel_comp_worker at ffffffffa0435ddb [mlx4_ib]
[ffff8801752b7e28] wake_up_process at ffffffff81068e77
[ffff8801752b7e58] process_one_work at ffffffff8108c5e9
[ffff8801752b7e68] mlx4_ib_tunnel_comp_worker at ffffffffa0435d50 [mlx4_ib]
[ffff8801752b7ea8] worker_thread at ffffffff8108cf2a
[ffff8801752b7ed0] worker_thread at ffffffff8108ce60
[ffff8801752b7ee8] kthread at ffffffff81091507
[ffff8801752b7f48] kernel_thread_helper at ffffffff81513644
[ffff8801752b7f78] int_ret_from_sys_call at ffffffff81512743
[ffff8801752b7f80] retint_restore_args at ffffffff8150a2e1
[ffff8801752b7fd8] kernel_thread_helper at ffffffff81513640
.
PID: 346515 TASK: ffff88010ff04480 CPU: 1 COMMAND: "kworker/u:4"
START: __schedule at ffffffff81507882
.
[ffff8801a3a1dde8] mlx4_ib_tunnel_comp_worker at ffffffffa0435ddb [mlx4_ib]
[ffff8801a3a1de58] process_one_work at ffffffff8108c5e9
[ffff8801a3a1de68] mlx4_ib_tunnel_comp_worker at ffffffffa0435d50 [mlx4_ib]
[ffff8801a3a1dea8] worker_thread at ffffffff8108cf2a
[ffff8801a3a1ded0] worker_thread at ffffffff8108ce60
[ffff8801a3a1dee8] kthread at ffffffff81091507
[ffff8801a3a1df48] kernel_thread_helper at ffffffff81513644
[ffff8801a3a1df78] int_ret_from_sys_call at ffffffff81512743
[ffff8801a3a1df80] retint_restore_args at ffffffff8150a2e1
[ffff8801a3a1dfd8] kernel_thread_helper at ffffffff81513640
...
The call stack may also appear as follows:

crash64> bt -t -a
PID: 45898 TASK: ffff880208764200 CPU: 0 COMMAND: "devmon"
START: panic at ffffffff8106f5cb
[ffff880207f6da20] panic at ffffffff8106f5cb
[ffff880207f6da40] printk at ffffffff8107089c
[ffff880207f6dad0] __atomic_notifier_call_chain at ffffffff8150de32
[ffff880207f6dae0] atomic_notifier_call_chain at ffffffff8150de56
[ffff880207f6daf0] notify_die at ffffffff8150de8e
[ffff880207f6db20] unknown_nmi_error at ffffffff8150b1af
[ffff880207f6db40] default_do_nmi at ffffffff8150b378
[ffff880207f6db60] do_nmi at ffffffff8150b41e
[ffff880207f6db90] nmi at ffffffff8150a810
[ffff880207f6dc18] ixgbe_update_stats at ffffffffa0131ea1 [ixgbe]
[ffff880207f6dc90] ixgbe_get_stats at ffffffffa01359fd [ixgbe]
[ffff880207f6dcb0] dev_get_stats at ffffffff814395bf
[ffff880207f6dce0] dev_seq_printf_stats at ffffffff81439608
[ffff880207f6de30] dev_seq_show at ffffffff814396f4
[ffff880207f6de40] seq_read at ffffffff8118e1a9
[ffff880207f6de90] seq_read at ffffffff8118df00
[ffff880207f6deb0] proc_reg_read at ffffffff811ca5c6
[ffff880207f6df00] vfs_read at ffffffff8116cc7b
[ffff880207f6df40] sys_read at ffffffff8116d1d5
[ffff880207f6df80] system_call_fastpath at ffffffff81512522
RIP: 00007f22e48aa26b RSP: 00007f22e39c4e90 RFLAGS: 00000206
RAX: 0000000000000000 RBX: ffffffff81512522 RCX: 00007f22e48b820b
RDX: 0000000000000400 RSI: 00007f22e539d000 RDI: 0000000000000007
RBP: 00007f22e539d000 R8: 0000000000000001 R9: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000b314c0
R13: 0000000000da9730 R14: 000000000000001f R15: 0000000000b314c0
ORIG_RAX: 0000000000000000 CS: e033 SS: e02b

PID: 301458 TASK: ffff880029534340 CPU: 1 COMMAND: "kworker/u:2"
START: __schedule at ffffffff81507882
[ffff88006068bd88] mlx4_ib_poll_cq at ffffffffa04329be [mlx4_ib]
[ffff88006068bde8] mlx4_ib_tunnel_comp_worker at ffffffffa0435ddb [mlx4_ib]
[ffff88006068be58] process_one_work at ffffffff8108c5e9
[ffff88006068be68] mlx4_ib_tunnel_comp_worker at ffffffffa0435d50 [mlx4_ib]
[ffff88006068bea8] worker_thread at ffffffff8108cf2a
[ffff88006068bed0] worker_thread at ffffffff8108ce60
[ffff88006068bee8] kthread at ffffffff81091507
[ffff88006068bf48] kernel_thread_helper at ffffffff81513644
[ffff88006068bf78] int_ret_from_sys_call at ffffffff81512743
[ffff88006068bf80] retint_restore_args at ffffffff8150a2e1
[ffff88006068bfd8] kernel_thread_helper at ffffffff81513640
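
Stacks like those above are typically extracted from the KDUMP-generated vmcore with the crash utility. A minimal session sketch, assuming the debuginfo vmlinux matching kernel 2.6.39-400.277.1.el5uek is installed (all paths below are illustrative, not from this incident):

```
# Open the vmcore against the matching debuginfo kernel
crash64 /usr/lib/debug/lib/modules/2.6.39-400.277.1.el5uek/vmlinux \
        /var/crash/<host>-<date>/vmcore

# Inside the crash session:
crash64> bt -t -a      # backtraces for all CPUs, as shown in the output above
crash64> ps | grep UN  # list tasks stuck in uninterruptible sleep
crash64> log           # kernel ring buffer captured at panic time
```

The `bt -t -a` form prints a verbose trace for every active CPU, which is how the kworker threads blocked inside mlx4_ib_tunnel_comp_worker were identified here.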
Changes

None

Cause

To view full details, sign in with your My Oracle Support account.



