Forum Discussion

amity11
Level 3
6 years ago

After creating a new VCS setup, not all disks are shown under Veritas Volume Manager control

Hi,
I have created a new VCS setup for testing on VirtualBox, using three Solaris 10 (147148-26) machines with VCS; one of the machines is used as the iSCSI storage server.
The VCS version is:
Code:
bash-3.2# /opt/VRTS/bin/haclus -value EngineVersion
6.0.10.0
bash-3.2#
bash-3.2# pkginfo -l VRTSvcs
   PKGINST:  VRTSvcs
      NAME:  Veritas Cluster Server by Symantec
  CATEGORY:  system
      ARCH:  i386
   VERSION:  6.0.100.000
   BASEDIR:  /
    VENDOR:  Symantec Corporation
      DESC:  Veritas Cluster Server by Symantec
    PSTAMP:  6.0.100.000-GA-2012-07-20-16.30.01
  INSTDATE:  Nov 06 2019 19:40
    STATUS:  completely installed
     FILES:      278 installed pathnames
                  26 shared pathnames
                  56 directories
                 116 executables
              466645 blocks used (approx)
bash-3.2#
Info for the first VCS node:
Code:
bash-3.2# echo |format
Searching for disks...
Inquiry failed for this logical diskdone

AVAILABLE DISK SELECTIONS:
       0. c0d0 <▒x▒▒▒▒▒▒▒▒▒@▒▒▒ cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# uname -a
SunOS node1 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2823A00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
Info for the second node:
Code:
bash-3.2# echo|format
Searching for disks...
Inquiry failed for this logical diskdone

AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN    -SOLARIS        -1   cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
Only the OS disk and a single LUN are shown by the vxdisk command, and the LUN IDs shown in the vxdisk output differ between the two nodes. I have run "vxdisk enable", "vxdisk scandisks", and devfsadm, and have even taken reconfiguration reboots multiple times, but still not all of the disks appear in the "vxdisk list" output. How can I make all the disks visible in "vxdisk list"?
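The exact commands I ran on both nodes were:
Code:
devfsadm -v -C -c disk
vxdisk enable
vxdisk scandisks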
 
  • Further, I have relabeled all the disks with an SMI label (a rough sketch of the relabel step follows the outputs below), enabled "set all mode pages to default", and taken another reconfiguration reboot, but the issue is still the same.
    On Node1
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -      
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#
    bash-3.2#
    On Node2
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#
     
    Note: This time the LUN ID is the same on both nodes when we run the
    Code:
    vxdisk -e list
    command, and this LUN ID is also different from the two LUN IDs in my previous output.
    I have just noticed that after each reconfiguration reboot, the LUN ID shown in the vxdisk output changes.
    On node one
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825E00080027E84B7300d0s2 -      
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#

    On node two
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#

    I took another reboot; on one node the disk shown is the same as before, while on the other node a new disk is shown
    On node one
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -      
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#
    On the other node
    Code:
    bash-3.2# vxdisk -e list
    DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
    aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281F100080027E84B7300d0s2 -
    c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
    bash-3.2#
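    A rough sketch of how each LUN was relabeled (the device name is one example taken from the format output above; in format's expert mode, the label command prompts for the label type):
    Code:
    # Select one LUN in expert mode and give it an SMI (VTOC) label
    format -e -d c2t600144F05DC281F100080027E84B7300d0
    # format> label
    #   [0] SMI Label
    #   [1] EFI Label
    # Specify Label type[0]: 0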
    • amity11
      Level 3
      I have run the mentioned commands, but the issue still persists:

      devfsadm -v -C -c disk
      vxdisk enable
      vxdisk scandisks

      Output on my first node after running the above commands:
      Code:
      bash-3.2# vxdisk -e list
      DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
      aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2821500080027E84B7300d0s2 -         
      c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
      bash-3.2#
      Output on my second node after running the above commands:
      Code:
      bash-3.2# vxdisk -e list
      DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
      aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
      c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
      bash-3.2#
       
      • Apurv_Barve
        Level 3

        Hi,

        What make of devices are you using?

        Post the output of the command "vxddladm list devices".

        Start vxconfigd in debug mode and provide the log file /tmp/logfile. Run the following commands:

        vxdctl debug 9 /tmp/logfile

        vxdisk enable

        vxdctl debug 0

    • amity11
      Level 3
       
      Please find the output below and the attached logs.
       
      bash-3.2#
      bash-3.2# vxdisk list
      DEVICE       TYPE            DISK         GROUP        STATUS
      aluadisk0_0  auto:none       -            -            online invalid
      c0d0s2       auto:ZFS        -            -            ZFS
      bash-3.2#
      bash-3.2#
      bash-3.2# vxddladm list devices
      DEVICE               TARGET-ID    STATE   DDL-STATUS (ASL)
      ===============================================================
      c0d0s2               -            Online  -
      c2t600144F05DC281F100080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2820900080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2821500080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2822B00080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2825400080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2823A00080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2825E00080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC281FF00080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      c2t600144F05DC2827000080027E84B7300d0s2 -            Online  CLAIMED (ALUA)
      bash-3.2#
      bash-3.2#
      bash-3.2# vxdctl debug 9 /tmp/logfile
      bash-3.2#
      bash-3.2# vxdisk enable
      VxVM vxdisk INFO V-5-1-9632
      vxdisk - define and manage Veritas Volume Manager disks

          Usage: vxdisk [-f] keyword arg ...
      Recognized keywords:
          [-g diskgroup] addregion region-type disk offset length
          [-g diskgroup] check disk ...
          checklvmdisk
          [-fd] classify [ctlr=...] [disk=...] [udid=...]
          [-g diskgroup] [-o clearkey=key] clearhost ...
          clearimport accessname ...
          define accessname [attribute ...]
          destroy accessname ...
          flush accessname ...
          getctlr accessname
          [-r] init accessname [attribute ...]
          [-g diskgroup] [-o alldgs] [-o listreserve] [-o thin | -o ssd] [-o fssize] [-x attr ...] [-bpeqsv] [-o tag=name|~name[=value|~value] [-u h|unit] list [disk ...]
          [-g diskgroup] [-o tag=name|~name[=value|~value] listtag [disk ...]
          [-l filename] offline [accessname ...]
          [-a] [-l filename] online [accessname ...]
          path
          [-o full] [-o ssd | -o thin] reclaim <diskgroup> | <disk> | <enclosure> [[<diskgroup> | <disk> | <enclosure>] ...]
          [-g diskgroup] resize {accessname|medianame} [length=value]
          rm accessname ...
          [-g diskgroup] rmregion region-type disk offset [length]
          [-g diskgroup] rmtag <tagname>[=<tagvalue>] <disk>|encl:<enclosure> ...
          [-f] scandisks [ [!]device=...| [!]ctlr=...| [!]pctlr=...|new|fabric]
          [-g diskgroup] set disk [attribute ...]
          [-g diskgroup] settag <tagname>[=<tagvalue>] <disk>|encl:<enclosure> ...
          [-g diskgroup] updateudid disk ...
      bash-3.2#
      bash-3.2#
      bash-3.2# vxdctl enable
      bash-3.2#
      bash-3.2#
      bash-3.2# vxdctl debug 0
      bash-3.2#
      bash-3.2#
      bash-3.2#
      bash-3.2#
      bash-3.2# vxdiskadm
      Volume Manager Support Operations
      Menu: VolumeManager/Disk
       1      Add or initialize one or more disks
       2      Encapsulate one or more disks
       3      Remove a disk
       4      Remove a disk for replacement
       5      Replace a failed or removed disk
       6      Mirror volumes on a disk
       7      Move volumes from a disk
       8      Enable access to (import) a disk group
       9      Remove access to (deport) a disk group
       10     Enable (online) a disk device
       11     Disable (offline) a disk device
       12     Mark a disk as a spare for a disk group
       13     Turn off the spare flag on a disk
       14     Unrelocate subdisks back to a disk
       15     Exclude a disk from hot-relocation use
       16     Make a disk available for hot-relocation use
       17     Prevent multipathing/Suppress devices from VxVM's view
       18     Allow multipathing/Unsuppress devices from VxVM's view
       19     List currently suppressed/non-multipathed devices
       20     Change the disk naming scheme
       21     Get the newly connected/zoned disks in VxVM view
       22     Change/Display the default disk layouts
       23     Dynamic Reconfiguration Operations
       list   List disk information

       ?      Display help about menu
       ??     Display help about the menuing system
       q      Exit from menus
      Select an operation to perform: q
      Goodbye.
      bash-3.2#
      bash-3.2#
      bash-3.2#
      bash-3.2#
      bash-3.2# echo|format
      Searching for disks...
      Inquiry failed for this logical diskdone

      AVAILABLE DISK SELECTIONS:
             0. c0d0 <¦x¦¦¦¦¦¦¦¦¦@¦¦¦ cyl 5242 alt 2 hd 255 sec 63>
                /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
             1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
             2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
             3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
             4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
             5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1302 alt 2 hd 255 sec 63>
                /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
             6. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
             7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 2607 alt 2 hd 255 sec 63>
                /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
             8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1020 alt 2 hd 64 sec 32>
                /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
             9. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1 cyl 1302 alt 2 hd 255 sec 63>
                /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
      Specify disk (enter its number): Specify disk (enter its number):
      bash-3.2#
      bash-3.2#
      bash-3.2# vxdiskadm
      Volume Manager Support Operations
      Menu: VolumeManager/Disk
      Select an operation to perform: 1
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
        Use this operation to add one or more disks to a disk group.  You can
        add the selected disks to an existing disk group or to a new disk group
        that will be created as a part of the operation. The selected disks may
        also be added to a disk group as spares. Or they may be added as
        nohotuses to be excluded from hot-relocation use. The selected
        disks may also be initialized without adding them to a disk group
        leaving the disks available for use as replacement disks.
        More than one disk or pattern may be entered at the prompt.  Here are
        some disk selection examples:
        all:          all disks
        c3 c4t2:      all disks on both controller 3 and controller 4, target 2
        c3t4d2:       a single disk (in the c#t#d# naming scheme)
        xyz_0 :       a single disk (in the enclosure based naming scheme)
        xyz_ :        all disks on the enclosure whose name is xyz
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC281F100080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC281FF00080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC2822B00080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC2823A00080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC2825E00080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.c2t600144F05DC2821500080027E84B7300d0
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC2827000080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      Add or initialize disks
      Menu: VolumeManager/Disk/AddDisks
      Select disk devices to add: [<pattern-list>,all,list,q,?] c2t600144F05DC2820900080027E84B7300d0
        No matching disks found.
      Hit RETURN to continue.
      • Apurv_Barve
        Level 3

        Hi,

        From the logs, it seems all the disks are claimed under a single DMP node, and hence you are seeing just one device.

        6012 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC281F100080027E84B7300d0s2, DMP node is aluadisk0_0
        6014 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2825400080027E84B7300d0s2, DMP node is aluadisk0_0
        6016 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2823A00080027E84B7300d0s2, DMP node is aluadisk0_0
        6018 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2821500080027E84B7300d0s2, DMP node is aluadisk0_0
        6020 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC281FF00080027E84B7300d0s2, DMP node is aluadisk0_0
        6022 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2820900080027E84B7300d0s2, DMP node is aluadisk0_0
        6024 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2822B00080027E84B7300d0s2, DMP node is aluadisk0_0
        6026 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2827000080027E84B7300d0s2, DMP node is aluadisk0_0
        6028 11/13 10:41:59: VxVM vxconfigd DEBUG V-5-1-14467 Disk is /dev/rdsk/c2t600144F05DC2825E00080027E84B7300d0s2, DMP node is aluadisk0_0

        DMP expects each device to generate a unique identifier in order to be treated as a separate DMP node. Since these are devices exposed through iSCSI, I suspect they are not generating unique identifiers.
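
        One way to check (just a sketch on my part; "aluadisk0_0" is the DMP node name from your vxdisk output) is to look at the UDID that DDL computed for the node:

        Code:
        # The "udid" field is the identifier DDL derived for the device.
        # If all the LUNs resolve to the same UDID, DMP collapses them
        # into a single node, which matches what the logs show.
        vxdisk list aluadisk0_0 | grep -i udid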

        Either use devices from the InfoScale-qualified HCL, or, if you know that each device can generate a unique identifier, use the "vxddladm addjbod" command to add the disks as JBODs.
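
        A rough sketch of the JBOD approach (the vid/pid values are my assumption, read from the inquiry strings in your format output, which show vendor "SUN" and product "SOLARIS"; verify them with "vxddladm listjbod" and adjust as needed):

        Code:
        # Put the iSCSI LUNs in the JBOD category so DDL stops claiming
        # them through the ALUA ASL (vid/pid assumed from the format output)
        vxddladm addjbod vid=SUN pid=SOLARIS

        # Rescan and rebuild the VxVM device list
        vxdisk scandisks
        vxdctl enable

        # Each LUN should now appear as a separate disk
        vxdisk -e list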