On a BDA X6-2 Server a Slave Bond Link Status Repeatedly Fails with "link status definitely down for interface"/"link status definitely up for interface" (Doc ID 2251331.1)

Last updated on APRIL 15, 2017

Applies to:

Big Data Appliance X6-2 Hardware - Version All Versions and later
Linux x86-64

Symptoms

1. One BDA X6-2 server in a cluster shows a bond slave link repeatedly flapping between down and up. The kernel log output looks similar to the following (a verification sketch follows the excerpt):

<timestamp> bdanodex kernel: bonding: bondeth1: link status definitely down for interface eth11, disabling it
<timestamp> bdanodex kernel: bonding: bondeth1: making interface eth10 the new active one.
<timestamp> bdanodex kernel: bonding: bondeth1: link status definitely up for interface eth11.
<timestamp> bdanodex kernel: bonding: bondeth1: making interface eth11 the new active one.
...<repeated over and over>...
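
To confirm the flapping and see which link-monitoring method the bonding driver is actually using, check the bonding driver's procfs entry on the affected node. A minimal sketch; the log path assumes the default syslog configuration on Oracle Linux:

# Count the flap events the kernel has logged (log path assumed)
grep -c "link status definitely" /var/log/messages

# Show the live bond state: mode, monitoring type (MII vs. ARP), and per-slave status
cat /proc/net/bonding/bondeth1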

2. Comparing the bondeth1 configuration on the node with the failing link status against that of a "healthy" node shows that the BONDING_OPTS settings differ (see the diff sketch after the two snippets below).

a) On the node with the failing link status, /etc/sysconfig/network-scripts/ifcfg-bondeth1 shows BONDING_OPTS as:

BONDING_OPTS="mode=active-backup fail_over_mac=active arp_interval=100 arp_ip_target=<ip> primary=eth11"

b) On a "healthy" node, /etc/sysconfig/network-scripts/ifcfg-bondeth1 shows BONDING_OPTS as:

BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 primary=eth11"

 

Cause

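Based on the configuration difference shown in the Symptoms section, the affected node's bond is using ARP monitoring (arp_interval/arp_ip_target with fail_over_mac=active) rather than the standard MII link monitoring (miimon=100 downdelay=5000 updelay=5000). With ARP monitoring, slave link state is judged by replies from the arp_ip_target, so inconsistent replies cause the driver to repeatedly declare the link down and up.

A minimal sketch of restoring the standard options, assuming the MII-monitoring BONDING_OPTS from the healthy node are the intended configuration; back up the file first, and expect a brief network interruption while the bond restarts:

# Back up the current configuration before editing
cp /etc/sysconfig/network-scripts/ifcfg-bondeth1 /root/ifcfg-bondeth1.bak

# Edit BONDING_OPTS in ifcfg-bondeth1 to match the healthy node:
#   BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 primary=eth11"

# Restart the bond so the driver picks up the new options (brief interruption)
ifdown bondeth1 && ifup bondeth1

# Verify that MII monitoring is active and both slaves are up
cat /proc/net/bonding/bondeth1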