DMP, MPIO, MSDSM, SCSI-3 and ALUA configuration settings
Ok, I'm somewhat confused, and the more I read the more confused I think I'm getting. I'm going to be setting up a 4-node active/active cluster for SQL. All of the nodes will have 2 separate Fibre Channel HBAs connecting through 2 separate switches to our NetApp. The NetApp supports ALUA, so the storage guy wants to use it. It is my understanding that I need to use SCSI-3 to get this to work. Sounds good to me so far. My question is: do I need to use any of Microsoft's MPIO or MSDSM, or does Veritas take care of all of that? This is on Win 2008 R2. Also, I read that in a new cluster setup you should connect only one path first, do the install, then connect the 2nd path and let Veritas detect and configure it. Is that accurate? Any info or directions you can point me to will be greatly appreciated. Thanks!
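For reference, one way to see which multipathing stack is actually claiming the LUNs on the Windows side is the built-in mpclaim tool (available once the Windows MPIO feature is enabled). This is only a minimal sketch run from an elevated PowerShell prompt, not Veritas guidance; the feature name is the usual 2008 R2 one and the disk number is a placeholder:

# enable the native Windows MPIO feature (skip if your Veritas DSM setup already handles this)
dism /online /enable-feature /featurename:MultipathIo
# list MPIO-managed disks and the DSM (Microsoft MSDSM vs. a vendor/Veritas DSM) that owns each one
mpclaim -s -d
# show the load-balance policy and paths for a single disk (disk number 0 is just an example)
mpclaim -s -d 0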
SFHACFS vs Hitachi-VSP & Host mode option 22
Hi! I'm installing several new SFHACFS clusters, and during failover testing I ran into an annoying problem: when I fence one node of the cluster, DMP logs a high number of path down/path up events, which in the end causes the disk to disconnect even on the other active nodes. We found out that our disks were exported without host mode option 22, so we fixed this on the storage. Even after this, the clusters behaved the same. Later I read somewhere on the internet that it's a good idea to relabel the disks, so I requested new disks from storage and did a vxevac to the new disks. This fixed two of our clusters, but the other two still behave the same. Has anybody experienced anything similar? Do you know anything I can test / check on the servers to determine the difference? The only difference between the environments is that the non-working clusters have their disks mirrored from two storage systems, while the working ones have data disks from only one storage system.
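If it helps with comparing the working and non-working clusters, these read-only DMP queries are a reasonable starting point for a node-by-node diff (the dmpnode name below is a placeholder):

# compare how each node sees the enclosures / arrays
vxdmpadm listenclosure all
# list every path behind one dmpnode, with its state and primary/secondary role
vxdmpadm getsubpaths dmpnodename=<dmpnode_name>
# dump the DMP tunables (failover and path-restore timers) so they can be diffed across clusters
vxdmpadm gettune all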
SF4.1 VxDMP disables dmpnode on single path failure
This is more of an informational question, since I don't assume anyone has a solution, but just in case, I would be thankful for some enlightenment: I am forced to use an old version, SF 4.1 MP4, in this case on Linux SLES9. For whatever reason DMP does not work with the JBOD I have added. The JBOD (Promise VTrak 610fD) is ALUA, so half of all the available paths are always standby, and that is OK. But when DMP in 4.1 sees one of the 4 paths not working, it disables the whole DMP node, rendering the disk unusable:

Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-148 i/o error occured on path 8/0x70 belonging to dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-148 i/o error analysis done on path 8/0x70 belonging to dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-0 SCSI error opcode=0x28 returned status=0x1 key=0x2 asc=0x4 ascq=0xb on path 8/0x70
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x50 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x70 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x10 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x30 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-111 disabled dmpnode 201/0x10
Jun 12 15:35:01 kernel: Buffer I/O error on device VxDMP2, logical block 0

Currently my only solution seems to be to stick with Linux DM-Multipathing and add the disks as foreign devices.
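For anyone landing here with the same problem, the foreign-device workaround mentioned above looks roughly like this; the directories are placeholders for wherever dm-multipath creates its device nodes, and I'm not certain the attribute names are identical on 4.1, so treat it as a sketch:

# hand the dm-multipath devices to VxVM as foreign devices, bypassing DMP
vxddladm addforeign blockdir=/dev/mapper chardir=/dev/mapper
# rescan so VxVM discovers the foreign devices
vxdctl enable
# confirm what device discovery now reports
vxdisk list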
VxDMP on top of RDAC/MPP
Hiya, I have a setup that I need to 'fix'... Currently the servers appear to have RDAC/MPP as the primary multipathing layer; however, for whatever reason, they also have Veritas Volume Manager sitting on top of it! MPP is handling the pathing and presenting VxDMP with one 'virtual' path. Below is a vxdisk list example.

Device:    disk_0
devicetag: disk_0
type:      auto
clusterid: x_1
disk:      name=x1-1 id=x2
group:     name=xdg id=x1
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/disk_0s3 char=/dev/vx/rdmp/disk_0s3
guid:      -
udid:      IBM%5FVirtualDisk%5FDISKS%5F600A0B80006E0140000035F24C090009
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=1140661648 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1299520130 seqno=0.29
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   1
sdb          state=enabled

I'm going to be removing the MPP layer to allow VxDMP to take over, but my question is: how will this affect my VxVM config? I'm hoping that upon reboot VxVM will pick up the new paths etc., but will this in any way affect the DGs that sit on top of it? Note these are shared LUNs across a cluster. Do I need to worry about the dmppolicy.info / disk.info files etc.? I hope the question makes sense! Thanks
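For completeness, a rough way to confirm DMP really has taken over all paths once MPP is removed and the host rebooted (assuming the device keeps the disk_0 name from the output above; it may be renamed once the array is claimed natively):

# rescan devices after removing RDAC/MPP
vxdctl enable
# check that the array is now claimed properly instead of as a single-path pseudo-device
vxdmpadm listenclosure all
# the dmpnode should now show all underlying paths rather than the one MPP virtual path
vxdmpadm getsubpaths dmpnodename=disk_0
vxdisk list disk_0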