Doubts on VxVM, VCS Upgrade & root disk encapsulation
Hi All, I have the queries below, please.

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries should be commented out? Is it only

    rootdev:/pseudo/vxio@0:0
    set vxio:vol_rootdev_is_volume=1

or should the entries below also be commented out?

    forceload: drv/vxdmp
    forceload: drv/vxio
    forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab and llt modules are unloaded to upgrade to 4.1MP2, should I unload these modules again to further upgrade to 5.1SP1, and again for 5.1SP1RP4 (or 6.0)? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules once enough to carry on to the other versions? My plan is to upgrade from 4.1 ---> 4.1MP2 ---> 5.1SP1 ---> 5.1SP1RP4 (or 6.0). A rough sketch of the stop/unload sequence I have in mind follows at the end of this post.

3) Before upgrading, should I also stop and unload the modules listed below?

    24  12800a8   26920  268  1  vxdmp (VxVM 4.1z: DMP Driver)
    25  7be00000  2115c8 269  1  vxio (VxVM 4.1z I/O driver)
    27  12a4698   13f0   270  1  vxspec (VxVM 4.1z control/status driver)
    213 7b2d7528  c40    272  1  vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
    214 7ae00000  1706a8 20   1  vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)

If yes, should I stop and unload them after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), we need to encapsulate the root disk using vxdiskadm to bring it under VxVM control. My doubt is: will rootdg, rootvol, the plexes and the subdisks be created automatically? I need a little clarification regarding this, please.

A response is highly appreciated, as always. Thank you very much.

Regards,
Danish.
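PS, regarding question 2: a rough sketch of the per-step stop/unload sequence I have in mind, assuming the stock init scripts are in place; the module IDs come from the modinfo output and differ on every host, so the ID below is only a placeholder:

    # stop the cluster stack on this node, top down
    hastop -local
    /etc/init.d/vxfen stop
    /etc/init.d/gab stop
    /etc/init.d/llt stop

    # list the loaded Veritas kernel modules, then unload them one by one
    modinfo | grep -i vx
    modunload -i <module_id>    # repeat for each module ID reported above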
Storage Foundation Linux - increase LUN

Hi All, I am new to the Storage Foundation solution, and also new to the company. I have an Oracle database running on Storage Foundation and I just need to increase a database LUN. Do I need to do anything in the Storage Foundation configuration, or can I simply increase the size on the storage side and reboot my Linux host? The LUN is one of the resources that Storage Foundation manages. Thank you.
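For context, the kind of sequence I would expect to use for an online grow (no reboot), based on the standard VxVM commands; the disk group, disk and volume names below are placeholders, not taken from the real setup, and the exact options may vary by Storage Foundation version:

    # after the LUN has been grown on the array
    vxdisk scandisks                            # rescan so VxVM sees the new device size
    vxdisk -g oradg resize ora_disk01           # update the VxVM disk to the new LUN size
    /etc/vx/bin/vxresize -g oradg oravol +10g   # grow the volume and its VxFS file system together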
Solaris OS compatibility for SFHA Version 4.1

Hi All,

    Arch    = SPARC
    OS      = Solaris 10, Release 3/05 (Initial Release)
    SFHA    = VCS, VxVM, VxFS
    Version = 4.1

I'm planning to upgrade the Solaris host from the Initial Release to the latest release (Solaris 10 1/13, Update 11). The host is running VxVM, VxFS and VCS, all at version 4.1. After the Solaris upgrade to the latest release, will SFHA 4.1 still work fine? Will the VCS service groups come online and the VxFS file systems get mounted while running 4.1? Kindly note that I want to know this only for testing purposes. Once Solaris is upgraded, I will also upgrade SFHA to version 5.1SP1RP4 or to version 6. A reply is highly appreciated. Danish.
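For what it's worth, these are the basic checks I was planning to run after the OS upgrade to see whether 4.1 still behaves; nothing here is specific to my configuration:

    modinfo | grep -i vx    # confirm the VxVM/VxFS kernel modules loaded
    hastatus -sum           # check that the VCS service groups came online
    vxdg list               # confirm the disk groups imported
    df -F vxfs              # confirm the VxFS file systems mounted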
HP 3PAR StorServ 7000/10000 and VxDMP 3.5 / 4

I was wondering if anyone has experience using HP 3PAR StorServ 7000 / 10000 (V Class) with VxDMP 3.5 or 4. It seems that these arrays are supported out of the box with 5 and 6. In previous versions, the libvx3par ASL is available or embedded, but it only supports E/F/S/T class 3PAR arrays. Although I chose Solaris for the OS, this is not an OS-specific question. Does anyone know of updated support information, or have experience with these integrations? Thank you!
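In case it helps, these are the checks I have been using to see whether the installed ASL claims an array; they are standard VxVM commands on the versions I have access to, and the library file name is only my guess based on the ASL name above:

    vxddladm listsupport all                      # list installed ASLs and the arrays they claim
    vxddladm listsupport libname=libvx3par.so     # details for the 3PAR ASL specifically
    vxdmpadm listenclosure all                    # how DMP has categorised the attached enclosures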
Apache monitor in Linux RHEL 6.2 with SF 5.1SP1

Upgrading Linux RHEL 6.1, using sfha-rhel6_x86_64-5.1SP1PR2RP2 on a VCS cluster, I have a problem with the Apache monitor script, and I checked the bundled agents guide for a debug mode. The Apache mount point is a shared resource and is started by VCS after CreateMntPt has verified that the mount point does not exist. The default value of CreateMntPt is 0, but it seems that under VCS the file system should not exist, therefore it should be set to 2. The engine_A.log on both nodes reports that the file system does not exist. How do I change the monitor script so that it stops checking for the file system on both nodes at the same time?
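For reference, the change I was considering is on the Mount resource rather than on the Apache monitor script itself; the resource name below is a placeholder for the actual mount resource in the Apache service group, and the debug line is optional:

    haconf -makerw
    hares -modify apache_mnt CreateMntPt 2
    haconf -dump -makero

    # optionally raise the Apache agent's debug logging while investigating
    hatype -modify Apache LogDbg DBG_1 DBG_2 DBG_3 DBG_4 DBG_5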
mirroring plexes across arrays

Are there any cons to mirroring the plexes of a single volume across two different arrays for redundancy? I'm familiar with the process of mirroring plexes and use it often to migrate a volume from one LUN to another, but I was unsure about the long-term effect of mirroring across arrays.
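For what it's worth, the layout I have in mind would be set up roughly like this, assuming DMP presents the two arrays as separate enclosures; the disk group, volume and enclosure names are placeholders:

    # create a mirrored volume with each plex allocated from a different enclosure
    vxassist -g datadg make datavol 100g layout=mirror mirror=enclr

    # or add a second plex to an existing volume, restricted to the second array
    vxassist -g datadg mirror datavol enclr:array2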
Disk Failure in Shared Disk Group configuration!!!

Hi Experts, if we create a shared disk group with the detach policy set to 'local' and the fail policy set to 'leave', what will be the consequences if a disk on which a mirrored plex sits fails or is corrupted completely? That is, the disk is now wiped out. I have seen that, if the policies are 'global' and 'dgdisable' respectively, the disk is replaced; I think vxrelocd played a role in this. Is there also a role played by vxrelocd in the previous scenario? Thanks.
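For reference, the policies in question are set per disk group; a minimal sketch using a placeholder disk group name:

    # set the disk detach policy and the disk group fail policy on a shared disk group
    vxdg -g shareddg set diskdetpolicy=local
    vxdg -g shareddg set dgfailpolicy=leave

    # confirm the current settings
    vxdg list shareddg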