VCS new volume created. Need to add up under existing Resource Group under 2 node clustered Solaris 10 Server...Important..please assist
Hi, I am a newbie to VCS. The OS is Solaris 10, in a two-node cluster environment. A new filesystem is already created and mounted with a dedicated Virtual IP, and it is already under VCS. We have added two more volumes (Vol03 & Vol04) to the existing Resource Group, say 'node1_rg'. Please help me with the sequential, step-by-step procedure (with commands) to bring the newly created volumes under the existing Resource Group, and how to carry out the failover test. This is a test server on which we need to do this task. An Oracle database is also there, and we can coordinate with the App & DBA teams for this activity. Please help; this is the first time I have come to this forum seeking help. Thanks, Rinku, Pune - India
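A minimal sketch of the usual approach, assuming the group already contains a DiskGroup resource (called node1_dg_res here, for a disk group node1_dg) and that the volume, resource, and mount point names (vol03, vol03_res, mnt03_res, /data03) are hypothetical placeholders; repeat the same steps for Vol04 and verify attribute names against your VCS version's bundled agents guide:

# Open the cluster configuration for writing
haconf -makerw

# Volume resource for the new volume
hares -add vol03_res Volume node1_rg
hares -modify vol03_res DiskGroup node1_dg
hares -modify vol03_res Volume vol03
hares -modify vol03_res Enabled 1

# Mount resource for the filesystem on that volume
hares -add mnt03_res Mount node1_rg
hares -modify mnt03_res MountPoint "/data03"
hares -modify mnt03_res BlockDevice "/dev/vx/dsk/node1_dg/vol03"
hares -modify mnt03_res FSType vxfs
hares -modify mnt03_res FsckOpt "%-y"
hares -modify mnt03_res Enabled 1

# Dependencies: Mount depends on Volume, Volume depends on the DiskGroup resource
hares -link mnt03_res vol03_res
hares -link vol03_res node1_dg_res

# Save and close the configuration
haconf -dump -makero

# Failover test: switch the group to the other node and back, checking status each time
hagrp -switch node1_rg -to node2
hastatus -sum

Whether the new Mount resources should also sit under the Oracle resources depends on the existing dependency tree, which hagrp -resources node1_rg and hares -dep will show.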
detaching vmdk files on vmware vm

When an application failover happens in VMware guest environments, VCS is responsible for failing over the application to the other VM/VCS node on a different ESX host. In a scenario where the ESX/ESXi host itself faults, the VCS agents begin to fail over the application to the failover target system that resides on another host. The VMwareDisks agent communicates with the new ESX/ESXi host and initiates a disk detach operation on the faulted virtual machine. The agent then attaches the disk to the new failover target virtual machine. In this scenario, how is stale I/O from the failing guest/ESX host avoided? Are we at the mercy of VMware to take care of it? With SCSI-3 PR this was the main problem that was solved. Moreover, in such scenarios even a graceful online detach wouldn't have gone through. I didn't find any references on the VMware discussion forums either. My customer wants to know about this before he can deploy the application. Thanks, Raf
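Not an answer to the stale-I/O question itself, but a hedged sketch of how such a failover can be observed from the VCS side during a test; the service group app_sg, the VMwareDisks resource vmdk_res, and the node names node1/node2 are all hypothetical:

# Overall cluster and group state before the test
hastatus -sum
hagrp -state app_sg

# Controlled failover of the group to the other node
hagrp -switch app_sg -to node2

# State of the VMwareDisks resource on the target node after the switch
hares -state vmdk_res -sys node2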
bulk transfer?? block size??

About bulk transfer with secondary logging: to effectively use network bandwidth for replication, data is replicated to a disaster recovery (DR) site in bulk, in 256 KB units. This bulk data transfer reduces VVR CPU overhead and increases the overall replication throughput. With compression enabled, bulk data transfer improves the compression ratio and reduces the primary-side CPU usage.
Questions:
1. Before this feature, was data sent in units smaller than 256 KB?
2. With compression enabled, is it the 256 KB bulk that is compressed and sent?
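For reference, a hedged sketch of how the current replication and compression settings could be inspected on the primary; the disk group hrdg and RVG hr_rvg are placeholder names, and which fields your VVR version actually reports should be confirmed in its administrator's guide:

# Replication status for the RVG (link state, replication mode, and related settings)
vradmin -g hrdg repstatus hr_rvg

# Long listing of the RLINK records in the disk group (per-link attributes)
vxprint -g hrdg -Pl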
Storage foundation linux - increase lun

Hi All, I am new to the Storage Foundation solution, and also new to the company. I have an Oracle database running on Storage Foundation, and I just need to increase a database LUN. Do I need to do something in the Storage Foundation configuration, or can I just increase the size on the storage side and reboot my Linux server? The LUN is one of the resources that Storage Foundation manages. Thank you
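A minimal sketch of the usual online flow after the LUN has been grown on the array side, assuming the disk group, disk access name, and volume names (oradg, oradg01, oravol) are hypothetical, the filesystem is VxFS, and "+10g" is only an example growth; no reboot is normally involved, but check against your version's documentation:

# Rescan devices so VxVM picks up the new LUN size
vxdctl enable

# Update the disk's recorded size inside the disk group
vxdisk -g oradg resize oradg01

# Grow the volume and its VxFS filesystem together
vxresize -g oradg oravol +10g

# Confirm the new layout
vxprint -g oradg -ht oravol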
Veritas InfoScale Operations Manager 7.0: Viewing the impact of enabling SmartIO

Veritas InfoScale Operations Manager 7.0 enables you to view the impact of enabling SmartIO in an interactive graphical format. Using this graph you can see the performance difference in a single view.

You can view the following information for a host in a chart:
- Volumes - Total Bytes Read: Number of bytes read from the SmartIO-enabled volumes for the specified duration.
- Disks - Total Bytes Read: Number of bytes read from the disks of the SmartIO-enabled volumes for the specified duration.

You can view the following information for an application in a chart:
- Volumes - Total Bytes Read: Number of bytes read from the SmartIO-enabled volumes for the specified duration.
- Disks - Total Bytes Read: Number of bytes read from the disks of the SmartIO-enabled volumes for the specified duration.
- Volumes - Average Read Latency: Average read latency for the SmartIO-enabled volumes.

For more information on the SmartIO impact, see the following topic: Viewing the SmartIO Impact analysis chart. Storage Foundation and High Availability and Veritas Operations Manager documentation for other releases and platforms can be found on the SORT website.
The behavior of CVM/VxDMP

Hi, in the Release Notes for SF 5.1 there is a statement: "When all of the Primary paths fail or are disabled in a non-Active/Active array in a CVM cluster, the cluster-wide failover is triggered. All hosts in the cluster start using the Secondary path to the array." What do you think "all of the Primary paths" means? Does it mean "all of the Primary paths of the master node" or "all of the Primary paths of all nodes"? The Release Notes are here: http://sfdoccentral.symantec.com/sf/5.1/solaris/html/sfcfs_notes/ch01s06s02s01.htm Thanks, Yu jun
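Not a definitive answer, but a hedged sketch of commands that can be run on each cluster node to see which node is the CVM master and what DMP reports for the primary and secondary paths of the array in question; the enclosure name emc_clariion0 is only a placeholder:

# Show the cluster mode and which node is the CVM master
vxdctl -c mode

# Enclosures and their array types (A/A vs. A/P)
vxdmpadm listenclosure all

# Per-path state (primary/secondary, enabled/disabled) for one enclosure
vxdmpadm getsubpaths enclosure=emc_clariion0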