Deleting an rlink that has the "secondary_config_err" flag
Hello, in my VCS global cluster my ORAGrp resource group is partially online because my rvgres resource is offline. I suspect the issue is in the rlink below. I am trying to dissociate this rlink (below: rlk_sys1-DB-rep_DB_r) and detach it in order to delete it, but I am not able to succeed. Below is some output from the system.

root@sys2# vxprint -P
Disk group: DBhrDG

TY NAME                 ASSOC  KSTATE  LENGTH PLOFFS STATE  TUTIL0 PUTIL0
rl rlk_sys1-DB-rep_DB_r DB_rvg CONNECT -      -      ACTIVE -      -
rl rlk_sys1-rep_DB-rvg  DB-rvg ENABLED -      -      PAUSE  -      -

root@sys2# vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-3520 Rlink rlk_sys1-rep_DB-rvg can not be dissociated if it is attached

root@sys2# vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-10128 Operation not allowed with attached rlinks

root@sys2# vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3540 Rlink rlk_sys1-rep_DB-rvg is not disabled, use -f flag

root@sys2# vxedit -g DBhrDG -f rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3541 Rlink rlk_sys1-rep_DB-rvg is not dissociated

root@sys2# vxprint -Vl
Disk group: DBhrDG

Rvg: DB-rvg
info:    rid=0.1317 version=0 rvg_version=41 last_tag=11
state:   state=CLEAN kernel=DISABLED
assoc:   datavols=(none) srl=(none) rlinks=rlk_sys1-rep_DB-rvg exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-rep_DB-rvg
flags:   closed secondary disabled detached passthru logging
device:  minor=26012 bdev=343/26012 cdev=343/26012 path=/dev/vx/dsk/DBhrDG/DB-rvg
perms:   user=root group=root mode=0600

Rvg: DB_rvg
info:    rid=0.1386 version=13 rvg_version=41 last_tag=12
state:   state=ACTIVE kernel=ENABLED
assoc:   datavols=sys1_DB_Process,sys1_DB_Script,... srl=sys1_DB_SRL rlinks=rlk_sys1-DB-rep_DB_r exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-DB-rep_DB_r
flags:   closed secondary enabled attached logging
device:  minor=26014 bdev=343/26014 cdev=343/26014 path=/dev/vx/dsk/DBhrDG/DB_rvg
perms:   user=root group=root mode=0600

Please advise. Regards.
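The error sequence above reflects the usual ordering constraint: an rlink must be detached from its RVG before it can be dissociated, and dissociated before vxedit will remove it. A minimal sketch of that order, using the names from the output above (verify state with vxprint -P between steps; this is a sketch, not a definitive procedure):

```
# Order matters: pause, detach, dissociate, then remove.
vxrlink -g DBhrDG pause rlk_sys1-DB-rep_DB_r   # quiesce an active/connected rlink
vxrlink -g DBhrDG det   rlk_sys1-DB-rep_DB_r   # detach it from RVG DB_rvg
vxrlink -g DBhrDG dis   rlk_sys1-DB-rep_DB_r   # dissociate it from the RVG
vxedit  -g DBhrDG rm    rlk_sys1-DB-rep_DB_r   # remove the rlink record
```

If the detach is still refused, the RVG itself may need to be stopped first (vxrvg -g DBhrDG stop DB_rvg), since a started RVG can hold its rlinks attached.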
How to extend a disk in VxVM

Hi, I have one LUN of size 120 GB; on that LUN we have one disk group with 4 DB volumes. Now I want to extend the size of the "disk", not the DG nor the volumes. Can I extend the disk size from 120 GB to 200 GB? If yes, then please share the plan of action as well... Thanks in advance...
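Growing the disk is possible once the LUN itself has been expanded on the storage array: vxdisk resize re-reads the new geometry and extends the VxVM public region. A hedged sketch, assuming a disk group named mydg and a disk media name mydg01 (substitute your own names; the OS must already see the LUN at 200 GB):

```
vxdisk -g mydg list                       # note the disk media name, e.g. mydg01
vxdisk -g mydg resize mydg01 length=200g  # grow the VxVM disk to the new LUN size
vxassist -g mydg maxsize                  # the extra space is now free for volume growth
```

Note that vxdisk resize typically refuses to operate on the only disk in a disk group, since the configuration copy cannot be kept safe during the resize.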
CFS umount abnormally

Hi everyone! I have ten SFCFS nodes on the same version of the AIX OS platform with SFRAC 5.1SP1RP3. This SFCFS cluster has two CFS resources running on all nodes. Recently I encountered a very strange problem with service group sg-cfs-mount01 on a specific node (DB2): the resource cfsmount1 belonging to service group sg-cfs-mount01 went offline abnormally. I checked engine_A.log, which shows an I/O test failure reported by the CVMVolDg agent. I also checked the system logs, dmpevent.log, etc., but there are no related errors. This problem occurs only on node DB2 with SG sg-cfs-mount01; SG sg-cfs-mount01 on the other nodes, and SG sg-cfs-mount02 on all nodes (including DB2), work normally. (This is what I find strange.) I have attached engine_A.log, CFSMount_A.log, CVMVolDg_A.log, etc. in logs.tar. Thanks in advance.
Dependence between groups of services automatically

Hello, we have two Linux nodes and I am using Veritas 5.1 with several service groups, and I am trying to set up a dependency between these groups. I have already created the link between the groups and would just like to confirm the behavior. Does VCS have an option to start the dependent child service groups along with the parent group? For example, I have these service groups: Filesystem, Mount_filesystem, Mount_samba. When I bring Filesystem online on a given node, will VCS automatically bring Mount_filesystem and Mount_samba online without my manual intervention?
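A group dependency by itself only enforces ordering and placement; onlining the child (Filesystem) does not automatically pull its parents online. A sketch of the link setup, using the group names from the question with the mounts as parents of Filesystem; the -propagate flag shown at the end, which onlines the required child groups together with a parent, is an assumption that depends on your VCS version:

```
haconf -makerw
hagrp -link Mount_filesystem Filesystem online local firm
hagrp -link Mount_samba      Filesystem online local firm
haconf -dump -makero

# Onlining a parent with -propagate also brings its offline child
# groups online first (check availability in your VCS release):
hagrp -online -propagate Mount_filesystem -sys node1
```

So the closest built-in to what you describe is onlining the parent groups and letting VCS start Filesystem first, rather than the other way around.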
Is volume relayout command possible?

Hello. I need your help. I wonder whether the relayout command will work for this change (stripe layout, ncol=14 to stripe layout, ncol=7). I will use the following command (volume relayout ncol=14 -> ncol=7):

# vxassist -g ccrmapvg01 relayout lvol01 layout=stripe ncol=7

Is it possible? If you need anything, let me know.

Environment information:
OS: HP-UX 11.31
SFCFS version: SFCFS 5.0RP6

============== vxdg list ============
NAME         STATE                 ID
ccrmapvg11   enabled,shared,cds    1139668239.157.ccrmap1p

============== vxprint ============
dg ccrmapvg11   default   default   49000   1139668239.157.ccrmap1p

dm c35t0d4   c38t0d4   auto   1024   47589888   -
dm c35t0d5   c38t0d5   auto   1024   47589888   -
dm c35t0d6   c38t0d6   auto   1024   47589888   -
dm c35t0d7   c38t0d7   auto   1024   47589888   -
dm c35t1d0   c38t1d0   auto   1024   47589888   -
dm c35t1d1   c38t1d1   auto   1024   47589888   -
dm c35t1d2   c38t1d2   auto   1024   47589888   -
dm c35t1d3   c38t1d3   auto   1024   47589888   -
dm c35t1d4   c38t1d4   auto   1024   47589888   -
dm c35t1d5   c38t1d5   auto   1024   47589888   -
dm c35t1d6   c38t1d6   auto   1024   47589888   -
dm c35t1d7   c38t1d7   auto   1024   47589888   -
dm c35t2d0   c38t2d0   auto   1024   47589888   -
dm c35t2d1   c38t2d1   auto   1024   47589888   -
dm c35t2d2   c38t2d2   auto   1024   47589888   -
dm c35t2d3   c38t2d3   auto   1024   47589888   -
dm c35t2d4   c38t2d4   auto   1024   47589888   -
dm c35t2d5   c38t2d5   auto   1024   47589888   -
dm c35t2d6   c38t2d6   auto   1024   47589888   -
dm c35t2d7   c38t2d7   auto   1024   47589888   -
dm c35t3d0   c38t3d0   auto   1024   47589888   -

v  lvol01       -           ENABLED   ACTIVE   666257408   SELECT   lvol01-01   fsgen
pl lvol01-01    lvol01      ENABLED   ACTIVE   666257536   STRIPE   14/64       RW
sd c35t0d4-01   lvol01-01   c35t0d4   0   47589824    0/0   c38t0d4   ENA
sd c35t0d5-01   lvol01-01   c35t0d5   0   47589824    1/0   c38t0d5   ENA
sd c35t0d6-01   lvol01-01   c35t0d6   0   47589824    2/0   c38t0d6   ENA
sd c35t0d7-01   lvol01-01   c35t0d7   0   47589824    3/0   c38t0d7   ENA
sd c35t1d0-01   lvol01-01   c35t1d0   0   47589824    4/0   c38t1d0   ENA
sd c35t1d1-01   lvol01-01   c35t1d1   0   47589824    5/0   c38t1d1   ENA
sd c35t1d2-01   lvol01-01   c35t1d2   0   47589824    6/0   c38t1d2   ENA
sd c35t1d3-01   lvol01-01   c35t1d3   0   47589824    7/0   c38t1d3   ENA
sd c35t1d4-01   lvol01-01   c35t1d4   0   47589824    8/0   c38t1d4   ENA
sd c35t1d5-01   lvol01-01   c35t1d5   0   47589824    9/0   c38t1d5   ENA
sd c35t1d6-01   lvol01-01   c35t1d6   0   47589824   10/0   c38t1d6   ENA
sd c35t1d7-01   lvol01-01   c35t1d7   0   47589824   11/0   c38t1d7   ENA
sd c35t2d0-01   lvol01-01   c35t2d0   0   47589824   12/0   c38t2d0   ENA
sd c35t2d1-01   lvol01-01   c35t2d1   0   47589824   13/0   c38t2d1   ENA

============== vxdg free ============
DISK      DEVICE    TAG       OFFSET     LENGTH     FLAGS
c35t0d4   c38t0d4   c38t0d4   47589824   64         -
c35t0d5   c38t0d5   c38t0d5   47589824   64         -
c35t0d6   c38t0d6   c38t0d6   47589824   64         -
c35t0d7   c38t0d7   c38t0d7   47589824   64         -
c35t1d0   c38t1d0   c38t1d0   47589824   64         -
c35t1d1   c38t1d1   c38t1d1   47589824   64         -
c35t1d2   c38t1d2   c38t1d2   47589824   64         -
c35t1d3   c38t1d3   c38t1d3   47589824   64         -
c35t1d4   c38t1d4   c38t1d4   47589824   64         -
c35t1d5   c38t1d5   c38t1d5   47589824   64         -
c35t1d6   c38t1d6   c38t1d6   47589824   64         -
c35t1d7   c38t1d7   c38t1d7   47589824   64         -
c35t2d0   c38t2d0   c38t2d0   47589824   64         -
c35t2d1   c38t2d1   c38t2d1   47589824   64         -
c35t2d2   c38t2d2   c38t2d2   0          47589888   -
c35t2d3   c38t2d3   c38t2d3   0          47589888   -
c35t2d4   c38t2d4   c38t2d4   0          47589888   -
c35t2d5   c38t2d5   c38t2d5   0          47589888   -
c35t2d6   c38t2d6   c38t2d6   0          47589888   -
c35t2d7   c38t2d7   c38t2d7   0          47589888   -
c35t3d0   c38t3d0   c38t3d0   0          47589888   -

============== bdf ============
/dev/vx/dsk/ccrmapvg11/lvol01 666257408 554988262 104314825 84% /nbsftp4
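A stripe-column relayout like this is supported as an online operation, but note two things: the disk group in the output above is ccrmapvg11 (the command in the question says ccrmapvg01), and the relayout needs scratch space in the disk group. A sketch, assuming the group and volume names from the output:

```
vxassist -g ccrmapvg11 relayout lvol01 layout=stripe ncol=7  # 14 -> 7 columns
vxrelayout -g ccrmapvg11 status lvol01                       # monitor progress
# If the relayout is interrupted (crash, reboot), it can be resumed or undone:
# vxrelayout -g ccrmapvg11 start lvol01
# vxrelayout -g ccrmapvg11 reverse lvol01
```

Expect the conversion to take a long time on a ~318 GB volume, since the data is copied region by region while the volume stays online.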
CFSumount hangs on RHEL6.1 with Veritas 5.1SP1PR2

Hello, I'm using two RHEL6.1 servers with the Veritas 5.1SP1PR2 cluster file system. The cluster starts OK and I can switch services to the other node, etc., but when I execute hastop -local or hastop -all, the cluster hangs. It even makes one of the nodes hang completely, so that no commands can be executed as root. What I can see is that it hangs when it does the cfsumount for the shared mount points. Some of them (it changes from time to time, so not always the same mount points) hang and appear again in the mount list, not as real mounts but as copies of root (/). Example below:

-bash-4.1# df -k
Filesystem                        1K-blocks      Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00     8256952   5710140    2127384  73% /
tmpfs                              18503412       424   18502988   1% /dev/shm
/dev/sda1                            253871     37102     203662  16% /boot
/dev/mapper/VolGroup00-LogVol04    12385456    863508   10892804   8% /opt
/dev/mapper/VolGroup00-LogVol02     8256952    428260    7409264   6% /tmp
/dev/mapper/VolGroup00-LogVol03    12385456   5429820    6326492  47% /var
tmpfs                                     4         0          4   0% /dev/vx
/dev/vx/dsk/mountdg/mymount         8256952   5710140    2127384  73% /mymount

From here you can see that the values are the same for my cfsmount (/mymount) and root (/). I can work around this by running:

umount /mymount

It throws an error, "umount: /mymount: not mounted", but after this the cluster shutdown continues. This is just a workaround and I do not want to leave it like this. Any ideas how to fix this? Is there a patch for this, or should I change something in RHEL or in Veritas?

br, JP
SF CFS for RAC without GCO option

Hi everyone, I have a cluster setup with a two-node primary SF CFS for RAC cluster (on RHEL) and a single node as DR for the same. I am using VVR to replicate the volumes across to DR. I do not have the GCO option license. My question is: what are the recommended VCS resources to add to the configuration if I am not using GCO? I have attached my main.cf file, which shows my current config. Any suggestions would be appreciated. Thank you. WBR
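Without GCO, a common pattern is to put an RVGPrimary resource under the application service group on each site, so that onlining the group at DR takes over the VVR primary role. A hedged sketch using hares commands; the resource, group, and RVG resource names here (ora_rvgprimary, ORAGrp, rvgres) are placeholders for whatever is in the attached main.cf:

```
haconf -makerw
hares -add    ora_rvgprimary RVGPrimary ORAGrp       # placeholder names
hares -modify ora_rvgprimary RvgResourceName rvgres  # the existing RVG resource
hares -modify ora_rvgprimary AutoTakeover 1          # take over primary role on failover
hares -modify ora_rvgprimary Enabled 1
haconf -dump -makero
```

Site migration then becomes a manual hagrp -offline at the primary followed by hagrp -online at DR, which is exactly the part GCO would otherwise automate.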
Veritas Storage Foundation Cluster File System

I have 2 Linux nodes on which I need to install Veritas Storage Foundation Cluster File System, and after that create a shared file system: /data1 should be on shared storage and be available on the two cluster nodes simultaneously as a shared file system. Is it possible? If so, please send me the configuration doc or any useful link so I can configure the same on my servers.
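Yes, a simultaneously mounted file system on both nodes is exactly what SFCFS provides. A sketch of the steps after installation, assuming the CVM/VCS cluster is already running; the disk group, disk, and volume names below are placeholders:

```
# Assumes the SFCFS stack is installed and the cluster is up on both nodes.
vxdg -s init datadg datadg01=sdb           # create a shared (CVM) disk group
vxassist -g datadg make datavol 10g        # carve a volume for the file system
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol   # make the cluster-mountable VxFS
cfsmntadm add datadg datavol /data1 all=rw # register the mount on all nodes
cfsmount /data1                            # mount it cluster-wide
```

cfsmntadm also creates the corresponding CFSMount/CVMVolDg resources in VCS for you, so the mount comes back automatically when the cluster restarts.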