Announcing Storage Foundation High Availability Customer Forum 2012
We are excited to announce the Storage Foundation High Availability Customer Forum 2012, a free learning event by engineers, for engineers. Registration is open; register now to get on the priority list. The forum will take place at our Mountain View, CA offices on March 14th and 15th, 2012. Join your peers and our engineers for two days of learning and knowledge sharing. The event features highly technical sessions to help you get more out of your days.
- Learn and share best practices with your peers in the industry and build a long-lasting support network in the process.
- Become a power user by significantly increasing your troubleshooting and diagnostic skills as well as your product knowledge.
- Engage with the engineers who architected and wrote the code for the products.
Please see here for the event agenda and session details. More questions? See our events page.
I do not understand the situation that arose
Hi friends,

I do not understand a situation that arose. All I/O channels were in the active state. During a SAN switch firmware upgrade, the switch rebooted unexpectedly and the I/O channels through that switch were disabled, while the other channels stayed in their normal state. Even so, the service stopped for about 30 seconds, and I wonder why. My guess is that the service paused for roughly 30 seconds while DMP was reconfiguring its paths. Please tell me what other possibilities there are. Thanks.

Operating system: HP-UX
Operating system version: 11.31
Architecture: ia64
Server model: ia64 HP Superdome server SD32B
Veritas version: CFS 5.0 (VxVM 5.0 RP6, VxFS 5.0 RP6 HF1)
Switch: Brocade SilkWorm 48000
Storage: EMC DMX series

syslog messages:
Jul 11 14:41:28 eaiap1p vmunix: 0/0/14/1/0: Fibre Channel Driver received Link Dead Notification.
Jul 11 14:41:28 eaiap1p vmunix:
Jul 11 14:41:28 eaiap1p vmunix: 2/0/14/1/0: Fibre Channel Driver received Link Dead Notification.
Jul 11 14:41:28 eaiap1p vmunix: class : tgtpath, instance 5
Jul 11 14:41:28 eaiap1p vmunix: Target path (class=tgtpath, instance=5) has gone offline. The target path h/w path is 0/0/14/1/0.0x5006048449af3675
Jul 11 14:41:28 eaiap1p vmunix: class : tgtpath, instance 4
Jul 11 14:41:28 eaiap1p vmunix: Target path (class=tgtpath, instance=4) has gone offline. The target path h/w path is 2/0/14/1/0.0x5006048449af3676
....skip
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x1c0 belonging to the dmpnode 5/0xc0
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x1d0 belonging to the dmpnode 5/0x90
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x190 belonging to the dmpnode 5/0x200
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x290 belonging to the dmpnode 5/0x80
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x260 belonging to the dmpnode 5/0x1a0
....skip
Jul 11 14:44:57 eaiap1p vmunix: Target path (class=tgtpath, instance=4) has gone online. The target path h/w path is 2/0/14/1/0.0x5006048449af3676
Jul 11 14:45:17 eaiap1p vmunix: class : tgtpath, instance 5
Jul 11 14:44:57 eaiap1p vmunix:
Jul 11 14:45:17 eaiap1p above message repeats 11 times
Jul 11 14:45:17 eaiap1p vmunix: Target path (class=tgtpath, instance=5) has gone online. The target path h/w path is 0/0/14/1/0.0x5006048449af3675
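A minimal sketch of how you might investigate the roughly 30-second pause from the host side, assuming a Storage Foundation 5.0 level where these vxdmpadm options are available; the enclosure name emc_dmx0 below is a placeholder, so substitute the name reported on your own system:

# list every path per DMP node and whether DMP currently sees it as enabled or disabled
vxdmpadm getsubpaths

# list the enclosures VxVM discovered (use the real name in the next command)
vxdmpadm listenclosure all

# show the paths for a single enclosure (emc_dmx0 is a placeholder name)
vxdmpadm getsubpaths enclosure=emc_dmx0

# show per-path I/O statistics, useful to see whether I/O really stalled
# on the surviving paths during the switch reboot
vxdmpadm iostat show all

# if your VxVM level supports it, dump the DMP tunables; the error-retry and
# restore-daemon settings influence how long path failover takes
vxdmpadm gettune all

Note also that on HP-UX part of the delay typically happens below DMP: the Fibre Channel driver waits for its own link-down/link-dead timers to expire (the "Link Dead Notification" in your syslog) before the paths are reported dead, and only then does DMP disable them and retry the outstanding I/O on the surviving paths.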
Move Disks from one EMC array to another array having VxVM DG's
Hi All,

There is a data migration activity to our new EMC Symmetrix. The task is as follows: we have EMC disks used with VxVM within the existing cluster, and we are now migrating these disks to the new EMC array, so I need some fine-tuning of my work plan:

1. Take a complete backup of the VxVM configuration.
2. Deport all the disk groups using vxdg deport.
3. EMC will move/replicate the disks and present them to the host.
4. Scan for the disks and import the disk groups using vxdg import.
5. Run fsck and mount.

Please let me know if there is anything else that needs to be taken care of. Just to provide more info: the cluster is an Oracle 10g OPS parallel cluster.

Thanks & regards,
Govinda.
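A minimal command-level sketch of the plan above, assuming Solaris/HP-UX style syntax and a single disk group; the disk group name datadg, volume vol01, and mount point /data are placeholders, and on a CFS / Oracle RAC (OPS) cluster the import would normally be a shared import driven through the cluster stack (VCS/cfsmount) rather than run by hand:

# 1. back up the VxVM configuration before touching anything
vxprint -ht > /var/tmp/vxprint_ht.before
vxdisk -o alldgs list > /var/tmp/vxdisk_list.before

# 2. stop the application, unmount the file systems, then deport each disk group
vxdg deport datadg

# 3. the storage team migrates/replicates the LUNs on the new array
#    and presents them to the host

# 4. rescan for the devices and re-import the disk group
vxdctl enable
vxdisk scandisks new
vxdg import datadg        # use "vxdg -s import datadg" for a shared (CFS) disk group
vxvol -g datadg startall

# 5. check and mount the file system (device and mount point are placeholders)
fsck -F vxfs -y /dev/vx/rdsk/datadg/vol01
mount -F vxfs /dev/vx/dsk/datadg/vol01 /data

If the new LUNs carry over the VxVM private regions unchanged (straight array-level replication), the disk groups should import with their original configuration; if the disk access names change, confirm what the host sees with vxdisk -o alldgs list before importing.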