DRL not working on mirrored volumes in VVR - RVG (Mirrored volume doing full resync of plexes)
I think I am hitting a major issue with a mirrored volume in an RVG. The SRL is supposed to provide the DRL functionality, so DRL logging is explicitly disabled when a volume is added to an RVG. However, my testing shows that DRL is not working: when a mirror plex goes out of sync (due to a server crash, etc.), a full resync of the mirror plexes takes place, not just a resync of the dirty regions.

Here is a quick and easy way to recreate the issue.

My configuration: InfoScale 8 on Red Hat 8.7

I have a mirrored volume sourcevol2 (2 plexes) which I created as below:
# vxassist -g dg1 make sourcevol2 1g logtype=dco drl=on dcoversion=20 ndcomirror=1 regionsz=256 init=active
# vxassist -b -g dg1 mirror sourcevol2

I wait for the synchronization to complete, then create and mount the file system:
# /opt/VRTS/bin/mkfs -t vxfs -o nomaxlink /dev/vx/rdsk/dg1/sourcevol2
# mount /dev/vx/dsk/dg1/sourcevol2 /sourcevol2

I create the SRL as below:
# vxassist -g dg1 make dg1_srl 1g layout=concat init=active

I create the primary RVG as below:
# vradmin -g dg1 createpri dg1_rvg sourcevol2 dg1_srl

Verified that the dcm_in_dco flag is on:
# vxprint -g dg1 -VPl dg1_rvg | grep flag
flags: closed primary enabled attached bulktransfer dcm_in_dco

Added the secondary:
# vradmin -g dg1 addsec dg1_rvg primarynode1 primarynode2

Started initial replication:
# vradmin -g dg1 -a startrep dg1_rvg primarynode2

Verified replication is up to date:
# vxrlink -g dg1 -T status rlk_primarynode2_dg1_rvg
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_primarynode2_dg1_rvg is up to date

Here is the actual scenario to simulate mirror plexes going out of sync.

On the primary, run a dd command to put some I/O on sourcevol2:
# dd if=/dev/zero of=/sourcevol2/8krandomreads.0.0 bs=512 count=1000 oflag=direct

In another terminal, force-stop sourcevol2 while the dd is running:
# vxvol -g dg1 -f stop sourcevol2
# umount /sourcevol2

Start sourcevol2 again:
# vxvol -g dg1 start sourcevol2
# vxtask -g dg1 list -l
Task: 160 RUNNING
Type: RDWRBACK
Operation: VOLSTART Vol sourcevol2 Dg dg1

Even though I changed only a few regions on sourcevol2 (sequential writes of 512 bytes), the volume goes through a full plex resync (as indicated by the time it takes to start the volume).

Summary: DRL on a volume added to an RVG is not working, so mirrored volumes go through a full plex resync instead of a resync of only the dirty regions.
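For anyone who wants to reproduce this quickly, the same steps wrapped into a rough script are below. It only uses the commands already shown above, with the dg1/sourcevol2//sourcevol2 names from my setup, so adjust them as needed; timing the vxvol start is just a crude way to distinguish a dirty-region recovery (near-instant) from a full plex resync.

#!/bin/bash
# Rough reproduction sketch -- assumes dg1/sourcevol2 is mounted on /sourcevol2
# as in the steps above.
DG=dg1
VOL=sourcevol2
MNT=/sourcevol2

# Generate a small amount of direct I/O in the background.
dd if=/dev/zero of=$MNT/ddtest.0 bs=512 count=1000 oflag=direct &
DDPID=$!

# Force-stop the volume while the writes are in flight, then unmount.
sleep 1
vxvol -g $DG -f stop $VOL
kill $DDPID 2>/dev/null
umount $MNT

# Time the restart: recovering only a few dirty regions should be near-instant,
# while a full resync of the 1g mirror takes noticeably longer.
time vxvol -g $DG start $VOL

# Check for any recovery task still running, and look at the resulting plex states.
vxtask -g $DG list -l
vxprint -g $DG -ht $VOL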
Does Infoscale Storage (VVR) support cascaded space-optimized snapshots?
Configuration: InfoScale Storage 8.0 on Linux.
InfoScale Storage Foundation supports cascaded snapshots using the vxsnap infrontof= attribute. The InfoScale Storage (with Volume Replicator) documentation does not describe cascaded snapshots, and the vxrvg manpage has no infrontof attribute. Does that mean cascaded space-optimized snapshots are not supported/permitted on an RVG?
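For clarity, this is the volume-level cascade I can do on a plain Storage Foundation volume today. The attribute names follow the vxsnap usage I see in the SF documentation, and mydg, datavol, snap1, snap2 and snapcache are illustrative names only:

# Space-optimized snapshot backed by a pre-created cache object.
vxsnap -g mydg make source=datavol/newvol=snap1/cache=snapcache

# Cascaded snapshot placed "in front of" the first one -- this infrontof=
# attribute is what I cannot find in the vxrvg manpage.
vxsnap -g mydg make source=datavol/newvol=snap2/infrontof=snap1/cache=snapcache

As far as I can tell from the manpage, the RVG-level interface (vxrvg -g dg -S -P prefix snapshot rvg_name) has nothing comparable, hence my question.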
Maxuproc not getting updated even after reboot
Hi,

We got a request to increase maxuproc for wt82369 by 1.5 times. To implement it, we made the necessary modification on the global zone (wt81958). Normally there is a relation between max_nprocs and maxuprc:

maxuprc = max_nprocs - reserved_procs (reserved_procs defaults to 5)

In this case we changed max_nprocs from 30000 to 50000:

[root@wt81958 GLOBAL] /etc # cat /etc/system | grep max_nprocs
set max_nprocs=50000

After rebooting the global zone, the value is still not updated when we check with sysdef:

[root@wt81958 GLOBAL] /root # sysdef | grep processes
30000 maximum number of processes (v.v_proc)
29995 maximum processes per user id (v.v_maxup)

Can anyone tell us what we missed to make the change take effect? Awaiting your valuable suggestions.

Thanks, senthilsam
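One thing I am going to double-check myself (this is an assumption on my part, not something from the change request): on Solaris, max_nprocs is documented as being capped by pidmax, whose default is 30000, which would explain why sysdef still reports 30000. A sketch of the /etc/system entries that would then need to go together:

* /etc/system sketch -- assumes the pidmax cap is the culprit (unverified here).
* max_nprocs is silently reduced to pidmax if it exceeds it, and pidmax
* defaults to 30000.
set pidmax=50000
set max_nprocs=50000
* maxuprc then defaults to max_nprocs - reserved_procs (5), i.e. 49995,
* unless it is set explicitly:
* set maxuprc=49995

After the next reboot of the global zone, sysdef | grep processes should show 50000 and 49995 if this cap was indeed the problem.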
Adding a new Veritas Cluster Server node with a different hardware specification
Dear experts, I need your suggestions on the following. We currently have a two-node Veritas Cluster 6.2 running Windows 2008 R2, hosted on HPE DL380 G7 servers. We are planning a hardware refresh and want to move all workloads to new HPE DL380 G9/G10 servers, with Veritas Cluster 6.2 again deployed on Windows 2008 R2. It is a hardware refresh only, with no application or OS upgrade. Oracle 10gR2 is currently configured in failover cluster mode, and the application binaries are installed on the C:\ drive of every cluster node.

I would like to know whether I can deploy a new VCS 6.2 node on a new HPE DL380 G9/G10 server and add it to the existing cluster. If that is possible, what is the procedure, and if not, why not? I tried to search for articles but had no luck. Since the hardware will be different, what are the consequences when we fail over manually, or when we shut down the resource group and start it on the newly deployed server? I appreciate your feedback, answers, and any ideas on a better approach.

Thanks, Rane
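For context, the cluster-level steps I am picturing for adding the node are roughly the ones below, based on the generic VCS administration commands. NEWNODE and APP_SG are placeholder names, and I am assuming the LLT/GAB side on Windows is handled through the cluster configuration wizard rather than by hand, so please correct me if the Windows procedure is different.

REM Rough sketch only; NEWNODE and APP_SG are placeholder names.
REM LLT/GAB membership for the new node is assumed to be set up by the
REM VCS configuration wizard before these commands are run.
hastatus -sum
haconf -makerw
hasys -add NEWNODE
hagrp -modify APP_SG SystemList -add NEWNODE 2
haconf -dump -makero

My main worry is whether the G7-to-G9/G10 hardware difference matters at all once the new system is in the service group's SystemList.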
vxlicrep ERROR V-21-3-1015 Failed to prepare report for key
Dear all, we got an INFOSCALE FOUNDATION LNX 1 CORE ONPREMISE STANDARD PERPETUAL LICENSE CORPORATE key. I installed the key using vxlicinst -k <key>, but when I check it with vxlicrep I get this error for that key:

vxlicrep ERROR V-21-3-1015 Failed to prepare report for key = <key>

We have Veritas Volume Manager 5.1 (VRTSvxvm-5.1.100.000-SP1_RHEL5 and VRTSvlic-3.02.51.010-0) running on 64-bit RHEL 5.7. I have read that the next step is to run vxkeyless set NONE, but I am afraid to run it until I can see the license reported correctly by vxlicrep. What can I do to fix this? Thank you in advance. Kind regards, Laszlo
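Before I run vxkeyless set NONE, this is the state check I was planning to do. It uses only the licensing utilities already mentioned plus vxkeyless display, which as far as I know lists the keyless product levels currently enabled (please correct me if that option differs in VRTSvlic 3.02). I also wonder, purely as an assumption on my side, whether such an old 5.1-era VRTSvlic simply does not understand the newer InfoScale key format, which would explain the failed report.

# Report every key the licensing layer knows about; the new key should appear here.
vxlicrep

# Show which keyless product levels (if any) are currently enabled.
vxkeyless display

# Re-install the permanent key and re-check before considering 'vxkeyless set NONE'.
vxlicinst -k <key>
vxlicrep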
Unable to initialize disk using vxdisksetup
I'm in the middle of setting up Storage Foundation for Oracle RAC 5.0 on RHEL 4 Update 3, with a Sun StorEdge 6920 as the storage array. Here's what happened:
- The SF Oracle RAC documentation asked me to create the minimum possible LUN on the array for the coordinator disks.
- I created the smallest LUN (16 MB) on the array.
- When trying to initialize it (using vxdisksetup -i Disk_0 format=cdsdisk), I got an error about the disk being too small.
- I extended the LUN on the array to 50 MB.
- The servers still saw the coordinator disks (/dev/sda to /dev/sdc) as 16 MB. Since I'm not well versed in Linux, I rebooted both servers so they would see the new LUN size.

fdisk can see the new LUN size and manipulate it, but vxdisksetup still will not initialize the disks. I have tried the following:
# fdisk /dev/sda      (chose o and then w)
# vxdisksetup -i Disk_0 format=cdsdisk
VxVM vxdisk ERROR V-5-1-535 Device Disk_0: Invalid attributes
# vxdisksetup -i Disk_0 format=cdsdisk
VxVM vxdisksetup ERROR V-5-2-0 Disk is too small for supplied parameters

Then I zero-filled the LUN:
# dd if=/dev/zero of=/dev/sda bs=1M

and repeated the steps above, only to get the same errors. I have also tried format=simple, but it doesn't work either. Have I missed something, or does VxVM just not like LUNs expanded on the array? Please help.
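For completeness, this is the check sequence I intend to run before retrying, to confirm that both the OS and VxVM are really seeing the 50 MB size. It is only a sketch, and the /dev/sda and Disk_0 names are the ones from my setup.

# Size of the block device as the OS currently sees it, in bytes.
blockdev --getsize64 /dev/sda

# Ask VxVM to rediscover devices and refresh its view of the LUNs.
vxdctl enable
vxdisk scandisks

# The device should show up as uninitialized ("online invalid") rather than in
# an error state before retrying the initialization.
vxdisk list Disk_0
vxdisksetup -i Disk_0 format=cdsdisk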
Doubts on VxVM/VCS upgrade and root disk encapsulation
Hi all, I have the queries below, please:

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries need to be commented out? Is it only
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
or do the entries below also need to be commented out?
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab and llt modules are unloaded to upgrade to 4.1MP2, should I unload them again to further upgrade to 5.1SP1 and again to 5.1SP1RP4 (or 6.0)? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules once enough to carry on to the other versions? My plan is to upgrade 4.1 ---> 4.1MP2 ---> 5.1SP1 ---> 5.1SP1RP4 (or 6.0).

3) Before upgrading, should I also stop and unload the modules listed below (modinfo output)?
 24 12800a8   26920 268  1  vxdmp (VxVM 4.1z: DMP Driver)
 25 7be00000 2115c8 269  1  vxio (VxVM 4.1z I/O driver)
 27 12a4698    13f0 270  1  vxspec (VxVM 4.1z control/status driver)
213 7b2d7528    c40 272  1  vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
214 7ae00000 1706a8  20  1  vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
If yes, should I stop and unload them after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), to bring the root disk under VxVM control we need to encapsulate it using vxdiskadm. My doubt is: will rootdg, rootvol, the plexes and the subdisks be created automatically? I need a little clarification on this, please.

A response is highly appreciated, as always. Thank you very much. Regards, Danish.
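For question 3, the generic check-and-unload sequence I have in mind is sketched below. It only uses modinfo and modunload with the module names from my listing, and the module IDs have to be taken from whatever modinfo prints at that moment, not from the fixed numbers above.

# List the Veritas-related kernel modules that are still loaded.
modinfo | egrep -i 'vx|gab|llt'

# Unload a module by the ID shown in the first column of modinfo.
# (Repeat for vxfen, gab, llt, vxportal, vxfs, vxspec, vxio, vxdmp, in an order
#  where nothing still depends on the module being removed.)
modunload -i <module_id>

# Confirm nothing Veritas-related is left loaded before the next upgrade step.
modinfo | egrep -i 'vx|gab|llt'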
Cannot remove last disk group configuration copy
Hi, I have a disk group with 6 EMC SAN disks in it. I got 6 new SAN disks, added them to the same disk group, and mirrored the volume onto them. The host is running CentOS 4.8. After the mirroring finished, I removed the old plex. When I try to remove the last disk from the old SAN using "vxdg -g dg rmdisk <disk_name>", it throws the error below:

# vxdg -g dg01 rmdisk disk06
VxVM vxdg ERROR V-5-1-10127 disassociating disk-media disk06: Cannot remove last disk group configuration copy

I would like to remove this last disk from the disk group, because the volume is now running on the new disks. How can I remove it? Thanks in advance for the help.
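From the error, it sounds as if disk06 is the only disk in dg01 still carrying an active configuration copy. This is what I am planning to check next; the disk group and disk names are the ones from my output, and I would still appreciate confirmation on how to proceed if the new disks really hold no config copies.

# Per-disk view of the active configuration and log copies in the disk group.
# If disk06 is the only disk shown with a "config disk ... state=clean online"
# line, the new disks are not carrying configuration copies yet.
vxdg list dg01

# Number and state of the config copies kept in this disk's private region.
vxdisk list disk06 | grep -i config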