
Puredisk 6.6 HA Cluster Install: LUN recognized as External instead of Shared

B_LAURENT
Level 2

Hi all,

I'm trying to re-install "PureDisk in High Availability" (clustered mode) from scratch.

I'm taking advantage of this maintenance window to add new storage capacity (new LUNs).

Currently, PDOS (6.6.0.308) is correctly installed on all my nodes.

But I've hit an issue: one of my LUNs is recognized as "External Storage" while the others are recognized as "Shared Storage".

Because of this, I can't create any disk group, and so can't go ahead with the "VCS Cluster configuration".

I wonder if there's any limitation on the LUN side.

I also wonder if there's an issue with this particular LUN, which is one of the previous LUNs (already presented to the nodes).

FYI, hardware details:

  • Nodes: IBM Blade Center HS22
  • SAN Array: IBM DS3400

 

Does anyone have an idea?

Any help would be useful, Thanks in advance.

3 REPLIES

AAlmroth
Level 6
Partner Accredited

The other LUNs, the ones marked as shared: are they the old LUNs? If so, they probably still have the old VxVM configuration in their private regions.
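For example, a quick way to check for a stale configuration in the private region (sdg is just a placeholder device name; substitute your own):

```shell
# List all disks and any disk-group names VxVM finds in their private regions
vxdisk -o alldgs list

# Inspect one disk in detail; a stale disk-group name or old disk ID in the
# output suggests leftover VxVM config from the previous install
vxdisk list sdg
```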

The new LUN: can all the nodes see it? If not, it may be marked as external or DAS instead of shared.
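One way to check, assuming again a placeholder device name of sdg, is to run the same queries on every node and compare what each one sees:

```shell
# Run on EACH node; the problem LUN should appear with the same size and
# status everywhere
vxdisk -o alldgs list

# DMP's view of the paths to the array, per node
vxdmpadm getsubpaths
```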

The new LUN: have you tried adding the disk manually from the CLI with vxdiskadm? Sometimes the PD installer needs a "gentle" push to correctly read the LUN configuration.
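As a non-interactive alternative to vxdiskadm's menus, a sketch along these lines may help (sdg is a placeholder; on some installs vxdisksetup lives in /etc/vx/bin rather than on the PATH):

```shell
# Write a fresh VxVM private/public region on the new LUN
/etc/vx/bin/vxdisksetup -i sdg

# The disk should now show up as online and free (no disk group)
vxdisk list sdg
```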

I usually end up doing the disk group/volumes/mounts manually when the PD installer can't figure it out. The installer will then automatically detect everything correctly...

/A

B_LAURENT
Level 2

Hi AAlmroth,

There's one LUN, correctly identified, that can't be configured as "shared" storage.
It works fine for the others.
We performed a write test (using the "dd" command to fill the logical device) on that device, but no errors occurred.

There may be something in PureDisk, like a flag or similar, that puts the logical device in this status.

So we could continue the install and configure VCS through the "Web Installer", without the possibility of including this LUN.
Or, as you suggested (I had already thought about it), I can proceed manually by creating each disk group and adding the disks I want.

All LUNs are seen correctly by all of the nodes when checking:
 - /proc/partitions (/dev/sd<x> and VxVMP...)
 - fdisk
 - vxdisk list, vxdisk path, etc.

AAlmroth, could you please give me the syntax you use to create a disk group?
Is there a way to do it with PureDisk commands, or do you use "vxdg"?

Thanks in advance for your reply.

/Bruno
 

AAlmroth
Level 6
Partner Accredited

Hi,

The easiest method is to use vxdiskadm on one node and either add the new disk to an existing disk group, or, if you have more nodes (a new CR/MB role, for instance), create a new disk group for that virtual role.

E.g., say you have three nodes in an N+1 setup: you would need two disk groups, one for SPA/MBS/CR/MB and one for CR/MB. The third node will be the standby node. You have, say, two disks, one for each role, and now want to add a new disk to one of the disk groups. In your case, the web GUI installer can't handle this for some reason.

You can log on to one node (SSH) and run vxdiskadm to add the new disk to one of the disk groups. In vxprint you should then see it, but not yet in use.
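A rough CLI equivalent of those vxdiskadm steps, with placeholder names (nbu_dg for the existing disk group, newdg for a new one, newdisk01 for the disk media name, sdg for the device):

```shell
# Add the new disk to an existing disk group...
vxdg -g nbu_dg adddisk newdisk01=sdg

# ...or initialize a new disk group with it instead
vxdg init newdg newdisk01=sdg

# Confirm: the disk should be listed in the group, with no subdisks on it yet
vxprint -ht -g nbu_dg
```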

To verify proper cluster functionality of this disk, use vxdg to deport/import the disk group on each node, one at a time. If VxVM does not complain, it should work fine.
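For instance, with a placeholder group name of nbu_dg:

```shell
# On the node that currently has the group imported:
vxdg deport nbu_dg

# Then, on the next node:
vxdg import nbu_dg
vxdisk -o alldgs list   # the group should now show as imported on this node

# Repeat for every node, then import it back on the original owner
```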

Another possible problem with the new LUN is that one of the nodes has enumerated it differently; say, on two nodes the disk is called sdg, but on one node it is called sde. The web installer may get confused here. Once the LUN is in a disk group, it should be able to work out that the disk is shared.
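To correlate device names across nodes despite differing enumeration, you can compare the disk's stable identifiers on each node (sdg again a placeholder; adjust per node):

```shell
# Map the OS device name to its DMP node name on this node
vxdmpadm getdmpnode nodename=sdg

# The udid field is the same for a given LUN on every node,
# regardless of whether the OS calls it sdg or sde
vxdisk list sdg | grep -i udid
```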

Also, if the LUN comes from another array, it may not be identified the same way as the previous LUNs.

 

Please provide the output from vxdisk list, vxprint and vxdmpadm; then we will be able to see the state.

 

/A