vxfen module causes SLES 11 SP1 kernel panic

Jamesb_china
Level 3

Hi all: 

I am using three shared iSCSI disks (the target runs LIO) as I/O fencing disks with VCS 6.0 on SLES 11 SP1. After configuring fencing, the kernel panics when vxfen starts. The log is below:

 

[   83.640266] BUG: unable to handle kernel NULL pointer dereference at 0000000000000060
[   83.641248] IP: [<ffffffff81396113>] down_read+0x3/0x10
[   83.641933] PGD 37b62067 PUD 3c7b2067 PMD 0 
[   83.642512] Oops: 0002 [#1] SMP 
[   83.643016] last sysfs file: /sys/devices/platform/host2/iscsi_host/host2/initiatorname
[   83.644009] CPU 0 
[   83.644054] Modules linked in: vxfen(PN) dmpalua(PN) vxspec(PN) vxio(PN) vxdmp(PN) snd_pcm_oss snd_mixer_oss snd_seq snd_seq_device snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd soundcore snd_page_alloc gab(PN) ipv6 crc32c iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi af_packet llt(PN) amf(PN) microcode fuse loop fdd(PN) exportfs vxportal(PN) vxfs(PN) dm_mod virtio_blk virtio_balloon virtio_net sg rtc_cmos rtc_core rtc_lib tpm_tis button tpm tpm_bios floppy i2c_piix4 virtio_pci virtio_ring pcspkr i2c_core virtio uhci_hcd ehci_hcd sd_mod crc_t10dif usbcore edd ext3 mbcache jbd fan processor ide_pci_generic piix ide_core ata_generic ata_piix libata scsi_mod thermal thermal_sys hwmon
[   83.644054] Supported: Yes, External
[   83.644054] Pid: 4670, comm: vxfen Tainted: P             2.6.32.12-0.7-default #1 Bochs
[   83.644054] RIP: 0010:[<ffffffff81396113>]  [<ffffffff81396113>] down_read+0x3/0x10
[   83.644054] RSP: 0018:ffff88002e831638  EFLAGS: 00010286
[   83.644054] RAX: 0000000000000060 RBX: 0000000000000000 RCX: ffff88003c4e2480
[   83.644054] RDX: 0000000000000001 RSI: 0000000000002000 RDI: 0000000000000060
[   83.644054] RBP: ffff88002c9aa000 R08: 0000000000000000 R09: ffff88003c4e2480
[   83.644054] R10: ffff88003d6da140 R11: 00000000000000d0 R12: 0000000000000060
[   83.644054] R13: ffff88002c9a8000 R14: 0000000000000000 R15: ffff88002c9a8001
[   83.644054] FS:  0000000000000000(0000) GS:ffff880006200000(0000) knlGS:0000000000000000
[   83.644054] CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
[   83.644054] CR2: 0000000000000060 CR3: 000000003c72a000 CR4: 00000000000006f0
[   83.644054] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   83.644054] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   83.644054] Process vxfen (pid: 4670, threadinfo ffff88002e830000, task ffff88002c8e8140)
[   83.644054] Stack:
[   83.644054]  ffffffff810325ab 0000000000000000 0000001000000000 ffff88003ca9d690
[   83.644054] <0> ffff88003c4e2480 000000013d0181c0 0000000000000002 0000000000000002
[   83.644054] <0> 0000000000000002 0000000000001010 0000000000000000 0000000000000000
[   83.644054] Call Trace:
[   83.644054]  [<ffffffff810325ab>] get_user_pages_fast+0x11b/0x1a0
[   83.644054]  [<ffffffff81127c27>] __bio_map_user_iov+0x167/0x2a0
[   83.644054]  [<ffffffff81127d69>] bio_map_user_iov+0x9/0x30
[   83.644054]  [<ffffffff81127dac>] bio_map_user+0x1c/0x30
[   83.644054]  [<ffffffff811bab91>] __blk_rq_map_user+0x111/0x140
[   83.644054]  [<ffffffff811bacc5>] blk_rq_map_user+0x105/0x190
[   83.644054]  [<ffffffff811bebe7>] sg_io+0x3c7/0x3e0
[   83.644054]  [<ffffffff811bf1ec>] scsi_cmd_ioctl+0x2ac/0x470
[   83.644054]  [<ffffffffa01b75b1>] sd_ioctl+0xa1/0x120 [sd_mod]
[   83.644054]  [<ffffffffa0ccaa93>] vxfen_ioctl_by_bdev+0xc3/0xd0 [vxfen]
[   83.644054]  [<ffffffffa0ccb6ac>] vxfen_ioc_kernel_scsi_ioctl+0xec/0x3c0 [vxfen]
[   83.644054]  [<ffffffffa0ccbe1f>] vxfen_lnx_pgr_in+0xff/0x380 [vxfen]
[   83.644054]  [<ffffffffa0ccc11a>] vxfen_plat_pgr_in+0x7a/0x1c0 [vxfen]
[   83.644054]  [<ffffffffa0cd26c3>] vxfen_readkeys+0xa3/0x380 [vxfen]
[   83.644054]  [<ffffffffa0cd3514>] vxfen_membreg+0x84/0xae0 [vxfen]
[   83.644054]  [<ffffffffa0cce6d6>] vxfen_preexist_split_brain_scsi3+0x96/0x2d0 [vxfen]
[   83.644054]  [<ffffffffa0ccf96d>] vxfen_reg_coord_disk+0x7d/0x660 [vxfen]
[   83.644054]  [<ffffffffa0ca5e0b>] vxfen_reg_coord_pt+0xfb/0x250 [vxfen]
[   83.644054]  [<ffffffffa0cb849c>] vxfen_handle_local_config_done+0x14c/0x8d0 [vxfen]
[   83.644054]  [<ffffffffa0cbad57>] vxfen_vrfsm_cback+0xad7/0x17b0 [vxfen]
[   83.644054]  [<ffffffffa0cd5b20>] vrfsm_step+0x1b0/0x3b0 [vxfen]
[   83.644054]  [<ffffffffa0cd7e1c>] vrfsm_recv_thread+0x32c/0x970 [vxfen]
[   83.644054]  [<ffffffffa0cd85b4>] vxplat_lx_thread_base+0xa4/0x100 [vxfen]
[   83.644054]  [<ffffffff81003fba>] child_rip+0xa/0x20
[   83.644054] Code: 48 85 f6 74 0f 48 89 e7 e8 5b 09 cd ff 85 c0 48 63 d0 7e 07 48 c7 c2 fc fd ff ff 48 83 c4 78 48 89 d0 5b 5d c3 00 00 00 48 89 f8 <3e> 48 ff 00 79 05 e8 52 fa e4 ff c3 90 48 89 f8 48 ba 01 00 00 
[   83.644054] RIP  [<ffffffff81396113>] down_read+0x3/0x10
[   83.644054]  RSP <ffff88002e831638>
[   83.644054] CR2: 0000000000000060
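
One plausible reading of the trace: vxfen issues the SCSI ioctl from a kernel thread (vrfsm_recv_thread), and on this path get_user_pages_fast() calls down_read() on current->mm->mmap_sem. A kernel thread has no mm, so the dereference lands at a small offset (0x60) into a NULL mm_struct, which matches the fault address above.

To check whether the LUN itself services PERSISTENT RESERVE IN correctly, the same READ KEYS / READ RESERVATION requests can be issued from user space with sg_persist (a diagnostic sketch, assuming sg3_utils is installed and /dev/sdb is one of the coordinator disks):

vcs1:~ # sg_persist --in --read-keys /dev/sdb          # PR IN, READ KEYS service action
vcs1:~ # sg_persist --in --read-reservation /dev/sdb   # PR IN, READ RESERVATION service action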
 
 
vcs1:~ # vxdisk  list
DEVICE       TYPE            DISK         GROUP        STATUS
aluadisk0_1  auto:cdsdisk    -            -            online
aluadisk0_2  auto:cdsdisk    -            -            online
aluadisk0_3  auto:cdsdisk    -            -            online
sda          auto:none       -            -            online invalid
vda          simple          vda          data_dg      online
vdb          simple          vdb          data_dg      online
 
vcs1:~ # cat /proc/partitions 
major minor  #blocks  name
 
   8        0    8388608 sda
   8        1    7333641 sda1
   8        2    1052257 sda2
 253        0   10485760 vda
 253       16    4194304 vdb
   8       16    1048576 sdb
   8       19    1046528 sdb3
   8       24    1046528 sdb8
   8       32    1048576 sdc
   8       35    1046528 sdc3
   8       40    1046528 sdc8
   8       48    1048576 sdd
   8       51    1046528 sdd3
   8       56    1046528 sdd8
 201        0    8388608 VxDMP1
 201        1    7333641 VxDMP1p1
 201        2    1052257 VxDMP1p2
 201       16    1048576 VxDMP2
 201       19    1046528 VxDMP2p3
 201       24    1046528 VxDMP2p8
 201       32    1048576 VxDMP3
 201       35    1046528 VxDMP3p3
 201       40    1046528 VxDMP3p8
 201       48    1048576 VxDMP4
 201       51    1046528 VxDMP4p3
 201       56    1046528 VxDMP4p8
 199     6000    8388608 VxVM6000
 199     6001     153600 VxVM6001
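
The three 1 GB iSCSI LUNs show up both as raw SCSI disks (sdb, sdc, sdd) and as DMP nodes (VxDMP2-VxDMP4). To confirm which OS path backs each DMP device, vxdmpadm can list the subpaths (the dmpnodename below is taken from the vxdisk output above):

vcs1:~ # vxdmpadm getsubpaths dmpnodename=aluadisk0_1   # shows the sdX path(s) behind this DMP node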
 
 
 
LIO target server messages:
 
 
[60529.780169] br0: port 3(vnet2) entered forwarding state
[60675.372132] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
[60675.372217] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
[60675.373337] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
[60675.373426] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
[60675.374304] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
[60675.374374] TARGET_CORE[iSCSI]: Unsupported SCSI Opcode 0x24, sending CHECK_CONDITION.
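
Opcode 0x24 is not a command that LIO's disk emulation implements, so the target rejects it with CHECK CONDITION; these messages look like a rejected probe rather than the cause of the panic. If needed, the same rejection can be reproduced by hand with sg_raw (sg3_utils; the trailing zero bytes just pad a 6-byte CDB around the 0x24 opcode):

vcs1:~ # sg_raw /dev/sdb 24 00 00 00 00 00   # send opcode 0x24; expect CHECK CONDITION sense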
 
 

All tests have passed when run with:

vcs1:~ # vxfentsthdw -m 
 
Veritas vxfentsthdw version 6.0.000.000-GA Linux
 
 
The utility vxfentsthdw works on the two nodes of the cluster.
The utility verifies that the shared storage one intends to use is
configured to support I/O fencing.  It issues a series of vxfenadm
commands to setup SCSI-3 registrations on the disk, verifies the
registrations on the disk, and removes the registrations from the disk.
 
 
******** WARNING!!!!!!!! ********
 
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!! 
 
Do you still want to continue : [y/n] (default: n) y
The logfile generated for vxfentsthdw is /var/VRTSvcs/log/vxfen/vxfentsthdw.log.9431
 
Enter the first node of the cluster:
vcs1
Enter the second node of the cluster:
vcs2
 
Enter the disk name to be checked for SCSI-3 PGR on node vcs1 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it is the same disk as seen by nodes vcs1 and vcs2
/dev/sdb
 
Enter the disk name to be checked for SCSI-3 PGR on node vcs2 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it is the same disk as seen by nodes vcs1 and vcs2
/dev/sdb
 
***************************************************************************
 
Testing vcs1 /dev/sdb vcs2 /dev/sdb
 
Evaluate the disk before testing  ........................ No Pre-existing keys
RegisterIgnoreKeys on disk /dev/sdb from node vcs1 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs2 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs2 .................... Passed
Unregister keys on disk /dev/sdb from node vcs1 ........................ Passed
Verify registrations for disk /dev/sdb on node vcs2 .................... Passed
Unregister keys on disk /dev/sdb from node vcs2 ........................ Passed
Check to verify there are no keys from node vcs1 ....................... Passed
Check to verify there are no keys from node vcs2 ....................... Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs1 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
Read from disk /dev/sdb on node vcs1 ................................... Passed
Write to disk /dev/sdb from node vcs1 .................................. Passed
Read from disk /dev/sdb on node vcs2 ................................... Passed
Write to disk /dev/sdb from node vcs2 .................................. Passed
Reserve disk /dev/sdb from node vcs1 ................................... Passed
Verify reservation for disk /dev/sdb on node vcs1 ...................... Passed
Read from disk /dev/sdb on node vcs1 ................................... Passed
Read from disk /dev/sdb on node vcs2 ................................... Passed
Write to disk /dev/sdb from node vcs1 .................................. Passed
Expect no writes for disk /dev/sdb on node vcs2 ........................ Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs2 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
Verify registrations for disk /dev/sdb on node vcs2 .................... Passed
Write to disk /dev/sdb from node vcs1 .................................. Passed
Write to disk /dev/sdb from node vcs2 .................................. Passed
Preempt and abort key KeyA using key KeyB on node vcs2 ................. Passed
Test to see if I/O on node vcs1 terminated ............................. Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs1 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
Preempt key KeyC using key KeyB on node vcs2 ........................... Passed
Test to see if I/O on node vcs1 terminated ............................. Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
Verify registrations for disk /dev/sdb on node vcs2 .................... Passed
Verify reservation for disk /dev/sdb on node vcs1 ...................... Passed
Verify reservation for disk /dev/sdb on node vcs2 ...................... Passed
Remove key KeyB on node vcs2 ........................................... Passed
Check to verify there are no keys from node vcs1 ....................... Passed
Check to verify there are no keys from node vcs2 ....................... Passed
Check to verify there are no reservations on disk /dev/sdb from node vcs1  Passed
Check to verify there are no reservations on disk /dev/sdb from node vcs2  Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs1 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
RegisterIgnoreKeys on disk /dev/sdb from node vcs1 ..................... Passed
Verify registrations for disk /dev/sdb on node vcs1 .................... Passed
Clear PGR on node vcs1 ................................................. Passed
Check to verify there are no keys from node vcs1 ....................... Passed
 
ALL tests on the disk /dev/sdb have PASSED.
The disk is now ready to be configured for I/O Fencing on node vcs1.
 
ALL tests on the disk /dev/sdb have PASSED.
The disk is now ready to be configured for I/O Fencing on node vcs2.
 
Removing test keys and temporary files, if any...
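
For reference, the register/verify/unregister cycle that vxfentsthdw automates can be approximated by hand with sg_persist (a manual sketch; the key 123abc is arbitrary and /dev/sdb is the disk from the test run above):

vcs1:~ # sg_persist --out --register --param-sark=123abc /dev/sdb              # register key 0x123abc
vcs1:~ # sg_persist --in --read-keys /dev/sdb                                  # verify the key is present
vcs1:~ # sg_persist --out --register --param-rk=123abc --param-sark=0 /dev/sdb # unregister the key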