Doubts on VxVM, VCS upgrade & root disk encapsulation
Hi All, I have the queries below, please:

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries have to be commented out? Is it only
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
or do the entries below also have to be commented out?
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab and llt modules are unloaded to upgrade to 4.1MP2, should I unload these modules again to further upgrade to 5.1SP1, and again for 5.1SP1RP4 or 6.0? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules once enough for the subsequent upgrades? My plan is to upgrade 4.1 ---> 4.1MP2 ---> 5.1SP1 ---> 5.1SP1RP4 (or 6.0).

3) Before upgrading, should I also stop and unload the modules listed below?
24 12800a8 26920 268 1 vxdmp (VxVM 4.1z: DMP Driver)
25 7be00000 2115c8 269 1 vxio (VxVM 4.1z I/O driver)
27 12a4698 13f0 270 1 vxspec (VxVM 4.1z control/status driver)
213 7b2d7528 c40 272 1 vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
214 7ae00000 1706a8 20 1 vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
If yes, should I stop and unload them after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), we need to encapsulate the root disk using vxdiskadm to bring it under VxVM control. My doubt is: will rootdg, rootvol, the plexes and the subdisks be created automatically? I need a little clarification on this, please.

A response is highly appreciated, as always. Thank you very much.
Regards, Danish.
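A minimal sketch of the /etc/system edits that question 1 is asking about, assuming an encapsulated root disk; the entries can differ slightly between releases, so compare against your own /etc/system (and keep a backup copy) before commenting anything out. In /etc/system a leading asterisk marks a comment:

* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
* forceload: drv/vxdmp
* forceload: drv/vxio
* forceload: drv/vxspec

For question 3, the module IDs in the first column of the modinfo listing above are what modunload expects. A hedged example of checking and unloading one module (the ID 25 is taken from that listing and will differ on another boot):

modinfo | grep -i vx        # list loaded VxVM/VxFS kernel modules and their IDs
modunload -i 25             # unload vxio, using the module ID shown by modinfo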
clone disk group

Greetings, I need to migrate disk groups between hosts. The current aging server runs VxVM 5.x on Solaris 10. The proposed workloads are to be taken on by a combination of Solaris 11 and Solaris 10 logical domains, split between application and database. I'm not using VCS, only VxVM. The VxVM version on the new platform is 7.1. Due to limitations on the storage arrays, I cannot create clones on the array and map them to the new host. Does VxVM have a cloning mechanism? Is there a better approach to migrate the data across different VxVM versions while maintaining a point of failback? I would like to keep the disk groups separate until the cutover.

The DG configs:

# app-dg
# Lun                                Veritas Disk      Veritas DiskGroup
6000144000000010A00CB5581BC5169F     app-disk0         app-dg
6000144000000010A00CB5581BC51699     app-disk1         app-dg
6000144000000010A00CB5581BC516A6     app-disk2         app-dg

# applocal-dg
# Lun                                Veritas Disk      Veritas DiskGroup
6000144000000010A00CB5581BC5161A     applocal-disk0    applocal-dg
6000144000000010A00CB5581BC51626     applocal-disk1    applocal-dg
6000144000000010A00CB5581BC51627     applocal-disk2    applocal-dg
6000144000000010A00CB5581BC51619     applocal-disk3    applocal-dg

# db_ora-dg
# Lun                                Veritas Disk      Veritas DiskGroup
6000144000000010A00CB5581BC5161D     db_ora-disk0      db_ora-dg
6000144000000010A00CB5581BC5161C     db_ora-disk1      db_ora-dg
6000144000000010A00CB5581BC5161B     db_ora-disk2      db_ora-dg
6000144000000010A00CB5581BC515F7     db_ora-disk3      db_ora-dg

# db_ora02-dg
# Lun                                Veritas Disk      Veritas DiskGroup
6000144000000010A00CB5581BC51714     db_ora02-disk1    db_ora02-dg
6000144000000010A00CB5581BC51719     db_ora02-disk2    db_ora02-dg
6000144000000010A00CB5581BC5170D     db_ora02-disk3    db_ora02-dg

cheers
MB
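VxVM does not clone LUNs the way an array does, but one commonly used alternative when array clones are not possible is host-based mirroring onto LUNs from the new array, keeping the old plexes as the failback copy. A rough sketch, assuming the new LUNs can be presented to the old Solaris 10 host, using hypothetical names (newarray_0, app-newdisk0, appvol01):

vxdisksetup -i newarray_0                        # initialize a new-array LUN for VxVM use
vxdg -g app-dg adddisk app-newdisk0=newarray_0   # add it to the existing disk group
vxassist -g app-dg mirror appvol01 app-newdisk0  # mirror a volume onto the new-array disk
# at cutover, move the group to the new host
vxdg deport app-dg                               # on the old host
vxdg import app-dg                               # on the new 7.1 host, once the LUNs are zoned to it

One point worth noting for failback: VxVM 7.1 should import the 5.x-version disk group as-is, but running vxdg upgrade on it afterwards raises the disk group version and would prevent re-importing it on the old 5.x host, so holding off on that upgrade until the cutover is confirmed keeps the failback path open.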
Solaris 11.1 VxVM 6.0.1: 'df' causes a panic

Environment:
System Configuration: HP ProLiant BL480c G1
Oracle Solaris 11.1 x86
panic string: BAD TRAP: type=e (#pf Page fault) rp=fffffffc816fdb90 addr=0 occurred in module "unix" due to a NULL pointer dereference

Veritas info:
PKGINST: VRTSvxvm
NAME: Binaries for VERITAS Volume Manager by Symantec
CATEGORY: system
ARCH: i386
VERSION: 6.0.100.000,REV=08.01.2012.08.52

Stack:
genunix: [ID 655072 kern.notice] fffffffc816fdab0 unix:die+105 ()
genunix: [ID 655072 kern.notice] fffffffc816fdb80 unix:trap+153e ()
genunix: [ID 655072 kern.notice] fffffffc816fdb90 unix:cmntrap+e6 ()
genunix: [ID 655072 kern.notice] fffffffc816fdca0 unix:strncpy+1c ()
genunix: [ID 655072 kern.notice] fffffffc816fdcd0 odm:odmstatvfs+90 ()
genunix: [ID 655072 kern.notice] fffffffc816fdcf0 genunix:fsop_statfs+1a ()
genunix: [ID 655072 kern.notice] fffffffc816fde70 genunix:cstatvfs64_32+42 ()
genunix: [ID 655072 kern.notice] fffffffc816fdec0 genunix:statvfs64_32+69 ()
genunix: [ID 655072 kern.notice] fffffffc816fdf10 unix:brand_sys_sysenter+1dc ()

Messages:
unix: [ID 839527 kern.notice] df:
unix: [ID 753105 kern.notice] #pf Page fault
unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x0
unix: [ID 243837 kern.notice] pid=3965, pc=0xfffffffffb893ff8, sp=0xfffffffc816fdc88, eflags=0x10206
unix: [ID 211416 kern.notice] cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
unix: [ID 624947 kern.notice] cr2: 0
unix: [ID 625075 kern.notice] cr3: 59f0a2000
unix: [ID 625715 kern.notice] cr8: c
unix: [ID 100000 kern.notice]
unix: [ID 592667 kern.notice] rdi: fffffffc816fdd48 rsi: 0 rdx: f
unix: [ID 592667 kern.notice] rcx: 1 r8: e80 r9: 0
unix: [ID 592667 kern.notice] rax: fffffffc816fdd48 rbx: fefa3430 rbp: fffffffc816fdca0
unix: [ID 592667 kern.notice] r10: fffffffffb856d00 r11: 0 r12: fffffffc816fdd00
unix: [ID 592667 kern.notice] r13: ffffc10012176880 r14: 0 r15: ffffc1002bb09480
unix: [ID 592667 kern.notice] fsb: 0 gsb: ffffc1000eac8000 ds: 4b
unix: [ID 592667 kern.notice] es: 4b fs: 0 gs: 1c3
unix: [ID 592667 kern.notice] trp: e err: 0 rip: fffffffffb893ff8
unix: [ID 592667 kern.notice] cs: 30 rfl: 10206 rsp: fffffffc816fdc88
unix: [ID 266532 kern.notice] ss: 38

In the preceding panic log I see "odm:odmstatvfs+90". I think this is the root of the panic, but due to my lack of SCAT and mdb knowledge, I cannot investigate this module. When I remove VxVM, there is no panic when I issue 'df'. If I can provide more information about this case, please let me know; for now I don't know what additional info to provide. The core dump is about 400 MB, which is more than I can attach to this message.
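Not a fix, but for the mdb part, a minimal sketch of pulling the basics out of the saved crash dump; the paths assume the default savecore directory (commonly /var/crash or /var/crash/<hostname>) and dump number 0:

savecore -vf vmdump.0        # expand the compressed dump into unix.0/vmcore.0 if needed
mdb unix.0 vmcore.0          # open the dump; at the '>' prompt, ::status shows the panic
                             # summary, ::stack the panicking thread's stack, ::msgbuf the
                             # last console messages; $q quits

Since the stack points at odmstatvfs in the odm module, which ships with the Veritas ODM package (VRTSodm) rather than VRTSvxvm itself, it may also be worth checking whether the installed VRTSodm level is supported on Solaris 11.1.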
How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility?

Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume/file system from the Solaris SPARC platform to VxVM/VxFS 6.0.3 on Linux 6.5. The file system size is 2 TB, with about 1 TB actually in use. How much space is required for the fscdsconv utility during the conversion? Is there a formula for this?
Thanks a lot
Cecilia
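I can't vouch for the exact sizing formula, so check the VxFS Cross-platform Data Sharing guide and fscdsconv(1M) for that; as a sketch of where the space is actually consumed, the recovery file named with -f has to live on a file system other than the one being converted (which is unmounted during the conversion), so the free-space check belongs on whatever file system you point -f at. Paths and names below are placeholders, and the exact options should be verified against the man page for your release:

df -h /recovery                                                # file system chosen to hold the recovery file
umount /appdata                                                # the 2 TB file system must be unmounted first
fscdsconv -f /recovery/appvol.recov /dev/vx/dsk/appdg/appvol   # hedged: -f names the recovery file; verify syntax in fscdsconv(1M)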
VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed: No valid disk found containing disk group

Hi, I have a 2-node NetBackup cluster (VCS). Earlier today I migrated a volume from an old storage array to a new storage array. How I did it:
1. Presented the new disk to the hosts
2. Scanned for the new disk at the OS level
3. Scanned for the new disk in Veritas
4. Used the vxdiskadm utility to initialize the new disk
5. Added the new disk to the disk group
6. Mirrored the volume to the new disk
7. After synchronization had completed, removed the old plex from the disk group

All of the above steps were done on the active node (NODE1). Now when I try to fail over the cluster resources to the inactive node (NODE2), I get the error below:

VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed: No valid disk found containing disk group

The cluster then fails back to the original node (NODE1).

bash-3.2# vxdisk -o alldgs list   (active node)
DEVICE            TYPE           DISK        GROUP     STATUS
disk_0            auto:ZFS       -           -         ZFS
emc_clariion0_0   auto:cdsdisk   -           -         online    (old disk)
emc_clariion0_1   auto:cdsdisk   nbu_dg02    nbu_dg    online    (new disk)

=========================================================

bash-3.2# vxdisk -o alldgs list   (inactive node)
DEVICE            TYPE           DISK        GROUP     STATUS
disk_0            auto:ZFS       -           -         ZFS
emc_clariion0_0   auto:cdsdisk   -           -         online    (old disk)

From the above output, I can't see the new disk, which is supposed to show up on the inactive node with the disk group in a deported state. Please assist.
Regards,
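A minimal troubleshooting sketch, assuming the root cause is simply that NODE2 has never discovered the new LUN (device names are the ones from the outputs above); if the LUN was only zoned/masked to NODE1 on the array side, that has to be corrected there first:

devfsadm -Cv                 # rescan and clean up device nodes at the Solaris level
cfgadm -al                   # confirm the new LUN is visible to the OS
vxdctl enable                # have vxconfigd rebuild its device list
vxdisk scandisks             # rescan devices in VxVM
vxdisk -o alldgs list        # emc_clariion0_1 should now appear on NODE2

Once the disk shows up on NODE2 as online with (nbu_dg) in the GROUP column, the deported group should be importable there and the service group failover should succeed.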
Encapsulation issue on VxVM 6.1

Hello, we are running Solaris x86 and added a second virtual disk; we are trying to mirror the root disk, but are getting this error message:

Continue with encapsulation? [y,n,q,?] (default: y)
A new disk group rootdg will be created and the disk device c0t1d0 will be encapsulated and added to the disk group with the disk name rootdg01.
VxVM ERROR V-5-2-5711 The encapsulation operation failed with the following error:
VxVM vxencap ERROR V-5-2-310 The c0t1d0 disk does not appear to be prepared for this system.
Hit RETURN to continue.
Encapsulate other disks? [y,n,q,?] (default: n)
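A sketch of the usual first checks when vxencap reports a disk as "not prepared", assuming c0t1d0 is the disk intended for the root mirror; this is diagnostic only, since the message can have several causes (no valid SMI/VTOC label, an EFI label, or no free slices/space for the private region):

prtvtoc /dev/rdsk/c0t1d0s2   # confirm the disk has a readable VTOC and inspect the slice layout
format                       # relabel with an SMI label if the disk currently carries an EFI label
vxdisk -e list               # how VxVM currently sees the device
vxdisk list c0t1d0           # detailed state of this disk before retrying vxdiskadm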
Volume set expand process

I have a volume set and I want to expand it. There are two volumes in the volume set. I expanded the first volume with vxassist -g <dg> growby <vol> 2048m, but when I ran df -kh I found that the file system mounted from the volume set had not grown. Please suggest how to expand a volume set in VxVM.
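Growing the volume alone is not enough: the VxFS file system built on the volume set has to be grown as well. A minimal sketch with placeholder names (disk group dg01, volume vol01, mount point /data); note that fsadm -b takes the new total file system size, not the increment:

vxassist -g dg01 growby vol01 2048m                      # grow a volume in the volume set (already done)
/opt/VRTS/bin/fsadm -F vxfs -b <new_total_size> /data    # then grow the file system to match
df -h /data                                              # confirm the mount now shows the larger size

See fsadm_vxfs(1M) for the units that -b accepts on your release.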
Post-6.1 upgrade: Solaris 10 servers periodically drop and re-add physical disks

Hi, I'm seeing an interesting issue. I have a handful of Solaris 10 servers (out of about 100) that periodically drop a disk and add it back into the Volume Manager configuration after being upgraded to Storage Foundation/InfoScale 6.1. The following sequence happens: VxVM reports a disk fault, reports an issue with a failed subdisk and plex, reports that it is attempting to relocate the subdisk and plex, can't find a place, and so drops the subdisk and plex; then, 4 or 5 minutes later, it finds the disk again and relocates the failed subdisk and plex back onto the original disk. This seems to happen every month or so for no apparent reason, and the hard disks aren't having hard faults. Has anyone else experienced this behavior, or does anyone know what can be done to make 6.1 a bit more fault-tolerant of hard disk errors?
Thanks,
Bob
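No definitive answer, but a sketch of what is usually reviewed when hot-relocation fires like this after a 6.1 upgrade; the DMP tunables named below exist in 6.x, but suitable values depend on the array, so treat them as things to inspect rather than values to set:

vxdmpadm gettune all | egrep 'dmp_health_time|dmp_path_age|dmp_lun_retry_timeout'   # how quickly DMP declares paths/disks bad
vxdisk list <failing_disk>                     # current state of the disk VxVM keeps dropping
egrep -i 'fatal|i/o error' /var/adm/messages   # correlate with OS-level disk errors at the same timestamps
ps -ef | grep vxrelocd                         # hot-relocation itself is driven by the vxrelocd daemon

If the relocation behaviour (rather than the detection) is the unwanted part, the Storage Foundation administrator's guide describes how to exclude disks from hot-relocation or prevent vxrelocd from running.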
How to remove a Quick I/O file on Solaris?

Hi all, we have a running Solaris server with Veritas Quick I/O files for an Oracle 9i database. The files no longer exist in the database but somehow still exist in the file system. How do we remove the Veritas files from the file system?

lrwxrwxrwx 1 oracle oinstall 36 Sep 16 2011 b2bprddb_undotbs2.dbf03 -> .b2bprddb_undotbs2.dbf03::cdev:vxfs:
-rw-r--r-- 1 oracle oinstall 2097160192 Oct 21 11:13 .b2bprddb_undotbs2.dbf03
lrwxrwxrwx 1 oracle oinstall 34 Sep 11 2011 javazone_data01.dbf09 -> .javazone_data01.dbf09::cdev:vxfs:
-rw-r--r-- 1 oracle oinstall 2097160192 Sep 11 2011 .javazone_data01.dbf09
lrwxrwxrwx 1 oracle oinstall 35 Dec 7 2010 javazone_index01.dbf06 -> .javazone_index01.dbf06::cdev:vxfs:
-rw-r--r-- 1 oracle oinstall 1572872192 Dec 7 2010 .javazone_index01.dbf06
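A minimal sketch based on the listing above: each Quick I/O name is just a symbolic link pointing at a hidden regular file with the ::cdev:vxfs: extension, so once the database genuinely no longer uses them, removing both the link and the hidden file frees the space. Check for open file handles first:

fuser .b2bprddb_undotbs2.dbf03     # make sure no process still has the underlying file open
rm b2bprddb_undotbs2.dbf03         # the symlink (name -> .name::cdev:vxfs:)
rm .b2bprddb_undotbs2.dbf03        # the hidden file that actually holds the ~2 GB of data

Repeat for the javazone_* pairs, and confirm with the DBA that the datafiles really have been dropped from the database before deleting anything.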
Solaris 10 VxVM: one disk failed and the second is removed

Hello all, is it possible to save my data and boot the server from at least one good disk in this case?

[root@host1:/]$ vxdisk list
DEVICE       TYPE           DISK        GROUP     STATUS
c1t0d0s2     auto:sliced    rootdg01    rootdg    online failing
c1t1d0s2     auto:sliced    rootdg03    rootdg    online
-            -              rootdg02    rootdg    removed was:c1t1d0s2
[root@host1:/]$

I somehow booted the server from a ramdisk. I'm in read-only mode, but I can manually mount file systems if needed. The problem is that c1t0d0s2 failed:

WARNING: /pci@1c,600000/scsi@2 (glm0): Resetting scsi bus, got incorrect phase from (0,0)
WARNING: /pci@1c,600000/scsi@2 (glm0): got SCSI bus reset
WARNING: /pci@1c,600000/scsi@2 (glm0): Resetting scsi bus, got incorrect phase from (0,0)
WARNING: /pci@1c,600000/scsi@2 (glm0): got SCSI bus reset
WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0): auto request sense failed (reason=reset)

The second disk, c1t1d0s2, is in a strange state that I don't understand: it shows up twice in the output above, as the online disk media record rootdg03 and as the removed record rootdg02 (was: c1t1d0s2). I also can't mount c1t1d0s0, s1, s3, s4, and so on:

[root@host1:/]$ mount /dev/dsk/c1t1d0s1 /mnt
mount: I/O error
mount: Cannot mount /dev/dsk/c1t1d0s1
[root@host1:/]$

Thank you for your comments.
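A sketch of read-only inspection steps to run before deciding what is recoverable; none of these write to the disks, and imaging both disks (for example with dd to another device) before any repair attempt is the safest first move:

vxdisk list c1t0d0s2                    # detailed VxVM state of the failing disk
vxdisk list c1t1d0s2                    # and of the disk whose slices will not mount
vxprint -g rootdg -htq                  # which plexes/subdisks live on which disk, and their states
dd if=/dev/rdsk/c1t1d0s1 of=/dev/null bs=512 count=1024   # test whether the raw slice is readable at all

If the raw reads from c1t1d0 succeed, the mount failure is more likely a volume/metadata-state problem than dead hardware, and steps such as fsck on the underlying slice or vxreattach become worth considering; if the raw reads also fail, both disks have hardware problems and restoring from backup is probably the realistic path.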