How to verify VCS installation on a system
Symantec recommends that you verify your installation of Symantec Cluster Server (VCS) on a system before you install or upgrade VCS. This helps you confirm the product prerequisites, the installed product version, and the configuration. You can verify the installation of VCS on a system using the following techniques:

- Operating system (OS) commands
- Script-based installer
- Symantec Operations Readiness Tools (SORT) checks
- VCS command validation

OS commands

You can run native OS commands on the system to verify whether VCS is installed. The following commands verify the VCS installation and report the VCS version and patches installed on the system.

Verifying VCS installation:
  AIX:      lslpp -l VRTSvcs
  HP-UX:    swlist VRTSvcs
  Linux:    rpm -qi VRTSvcs
  Solaris:  pkginfo -l VRTSvcs (Solaris 10) or pkg info VRTSvcs (Solaris 11)

Verifying the VCS version and patches:
  AIX:      lslpp -l VRTSvcs
  HP-UX:    swlist VRTSvcs
  Linux:    rpm -qi VRTSvcs
  Solaris:  showrev -p | grep VRTSvcs

You can use these commands to verify which product packages are installed on the system. For a complete list of required and optional packages for VCS, see the product release notes on the SORT website.

Note: On Linux, there is no sparse patch or patch ID. Therefore, the package version itself indicates the patch version of the installed VCS.

Advantage of using the OS command technique
By default, native commands are available on a system and can be used with ease.

Limitations of using the OS command technique
You must run OS commands as root on the cluster nodes. OS commands are useful for package and patch validation. However, these commands do not provide complete information about the VCS product installation, and you need to run multiple commands to validate whether the required packages are installed on the system.

Script-based installer

Symantec recommends that you use the script-based installer to install Symantec products.
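The per-platform package queries from the OS-commands table above can be wrapped in a small dispatch script. This is an illustrative sketch, not part of the product: the wrapper function name `vcs_pkg_query_cmd` is mine, and only the quoted command strings come from the table.

```shell
#!/bin/sh
# Print the native package-query command for the VRTSvcs package on a given
# platform (as reported by uname -s). Command strings are taken from the
# table above; the wrapper itself is illustrative only.
vcs_pkg_query_cmd() {
    case "$1" in
        AIX)    echo "lslpp -l VRTSvcs" ;;
        HP-UX)  echo "swlist VRTSvcs" ;;
        Linux)  echo "rpm -qi VRTSvcs" ;;
        SunOS)  # Solaris 10 uses pkginfo(1); on Solaris 11 use: pkg info VRTSvcs
                echo "pkginfo -l VRTSvcs" ;;
        *)      echo "unsupported platform: $1" >&2 ;;
    esac
}

# Print (rather than run) the query appropriate for the local platform:
vcs_pkg_query_cmd "$(uname -s)"
```

Printing the command instead of executing it keeps the sketch safe to run on a machine without VCS installed; on a cluster node you would run the printed command as root.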
The script-based installer can be used to identify which products from the Storage Foundation and High Availability (SFHA) family are installed on the system. You can run the installer script to get a list of the VCS packages and their versions installed on the system. These commands can be executed on AIX, HP-UX, Linux, and Solaris. The installer also lets you configure the product, verify the pre-installation prerequisites, and view the product description.

The following command reports the major version of the product and the packages installed on the system. However, it does not provide details such as the join version, build date, or patches installed on the other nodes in the cluster. To use this command, VCS must already be installed on the system.

To use the script-based installer to verify the version of VCS installed on the system, run the following command:

  # /opt/VRTS/install/installvcs<version> -version

where <version> is the specific release version. For example, to validate the VCS 6.1 installation on the system, run the following command:

  # /opt/VRTS/install/installvcs61 -version

To initiate the VCS installation validation using the product DVD media provided by Symantec, run the following installer script:

  # <dvd-media-path>/installer -version

The installer script lists the Symantec products installed on the system along with their version details. You can also use this script to perform a pre-check of the package dependencies required to install the product.

If the product is already installed on the system and you want to validate the list of packages and patches along with their versions, run the following command:

  # /opt/VRTS/install/showversion

This command provides details of the product installed on all the nodes in a cluster, including the product name, required and optional packages installed on the system, installed and available product updates, version, and product license key.
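The path convention above (`/opt/VRTS/install/installvcs<version>`, with the dots dropped from the release number) lends itself to a small lookup-and-fallback sketch. The helper name `installvcs_path` and the fallback logic are mine; only the paths and flags come from the text above.

```shell
#!/bin/sh
# Build the path of the installed script-based installer for a given VCS
# release, e.g. "6.1" -> /opt/VRTS/install/installvcs61. Illustrative sketch.
installvcs_path() {
    echo "/opt/VRTS/install/installvcs$(echo "$1" | tr -d .)"
}

script="$(installvcs_path 6.1)"
if [ -x "$script" ]; then
    "$script" -version                 # report the installed VCS version
elif [ -x /opt/VRTS/install/showversion ]; then
    /opt/VRTS/install/showversion      # cluster-wide package/patch summary
else
    # Neither is present (e.g. pre-6.0 without VRTSsfcpi): fall back to media.
    echo "Run <dvd-media-path>/installer -version from the product media" >&2
fi
```

On a system without VCS the sketch simply prints the fallback hint, so it is safe to run anywhere.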
Advantage of using the script-based installer
A single script validates all nodes in the cluster, so no platform-specific commands are needed to perform the validation.

Limitation of using the script-based installer
The VRTSsfcpi package must be installed on the systems.

Note: The VRTSsfcpi package was first released in VCS 6.0 and is available in later versions. For earlier versions, use the installer from the DVD media. As an alternative, you can launch the installer from the DVD provided by Symantec, regardless of the product version. For more information about installing VCS using the installer, see Installing VCS using the installer.

SORT checks

SORT provides a set of web-based tools to automate and simplify time-consuming administrator tasks. For example, the data collector tool gathers system-related information and generates web-based and text-based custom reports. These reports capture the system and platform-related configuration details and list the Symantec products installed on the system. SORT generates the following custom reports:

- Installation and Upgrade
- Risk Assessment
- License/Deployment

You can generate and view custom reports to check which Symantec products are installed on a system. These reports list the passed and failed checks and other significant details you can use to assess the system. The checks and recommendations depend on the installed product. For SORT checks, see System Assessments.

To generate a SORT custom report:

1. On the Data Collector tab, download the appropriate data collector for your environment.
2. Follow the instructions in the README file to install the data collector.
3. Run the data collector. It analyzes the nodes in the cluster and stores the results in an XML file.
4. On the Upload Reports tab, upload the XML file to the SORT website. SORT generates a custom report with recommendations and links to the related information.

For more information about custom reports, visit https://sort.symantec.com.
Advantage of using the SORT checks
SORT checks provide comprehensive information about the installed product.

Limitation of using the SORT checks
The SORT data collector is not part of the product media and must be downloaded and installed on the system to generate reports.

VCS command validation

VCS provides a set of commands to validate and provide additional details of the components installed as part of the VCS product installation. For more information about verifying the VCS installation using VCS commands, see the Symantec™ Cluster Server 6.1 Administrator's Guide.

The VCS command validation method lets you check whether VCS is correctly configured on the nodes in a cluster. To verify the status of VCS components such as Low-Latency Transport (LLT), Group Membership Services/Atomic Broadcast (GAB), and the VCS engine, you can inspect the contents of the key VCS configuration files or run the following VCS commands:

  GAB:         # gabconfig -W   (GAB protocol version)
  LLT:         # lltconfig -W   (LLT protocol version)
  VCS engine:  # had -version   (HAD engine version and join version)
  Cluster:     # hasys -state   (cluster state)

Advantages of using VCS commands
VCS commands provide comprehensive information about the cluster, and they can also be used for configuring the cluster.

Limitation of using VCS commands
VCS commands can be used only after the VCS product is completely installed and configured on the system.

Frequently asked questions

The following is a list of VCS installation-related frequently asked questions:

Where do I check the availability of the CPI installer on a system?
The installer script is located at /opt/VRTS/install.

Where are the CPI installation logs located?
The installation logs are located at /opt/VRTS/install/logs.

Where do I find information about SORT checks and reports?
For information about SORT checks and reports, visit https://sort.symantec.com.

How do I validate a system before installing VCS?
Before you install VCS, you must make sure the system is ready. To validate the system, use the installer script on the Symantec DVD. To start the pre-installation validation and verify whether the system meets the product installation requirements, run the following command:

  # installer -precheck

VCS Warning for Unknown State
Hi,

I am just curious why I received a Warning notification for the Netlsnr resource group when the error is not logged in engine_A.log. I have read the VCS documentation, but the only hint I have is this:

  Resource state is unknown. Warning. VCS cannot identify the state of the resource.

Can anyone provide a better explanation of what could have caused VCS to send the warning email?

Cannot unload GAB and LLT on RHEL 6.0
Hi all,

I have the following problem:

# lltstat -n
LLT node information:
    Node          State    Links
     0 srv-n1     OPEN     2
   * 1 srv-n2     OPEN     2

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   7b4d01 membership 01
Port b gen   7b4d05 membership 01
Port d gen   7b4d04 membership 01
Port h gen   7b4d11 membership 01

# /opt/VRTSvcs/bin/haconf -dump -makero
VCS WARNING V-16-1-10369 Cluster not writable.

# /opt/VRTSvcs/bin/hastop -all -force

# /etc/init.d/vxfen stop
Stopping vxfen..
Stopping vxfen.. Done

# /etc/init.d/gab stop
Stopping GAB:
ERROR! Cannot unload GAB module. Clients still exist
Kill/Stop clients corresponding to following ports.
GAB Port Memberships
===============================================================
Port d gen   7b4d04 membership 01

# /etc/init.d/llt stop
Stopping LLT:
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [1]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [2]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [3]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [4]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [5]
LLT:Error: lltconfig failed

OK, let's look at lsmod and modinfo for details...
# lsmod
Module                 Size  Used by
vxodm                206291  1
vxgms                284352  0
vxglm                289848  0
gab                  283317  4
llt                  180985  5 gab
autofs4               27683  3
sunrpc               241630  1
dmpCLARiiON           11771  1
dmpap                  9390  1
vxspec                 3174  6
vxio                3261814  1 vxspec
vxdmp                377776  20 vxspec,vxio
cpufreq_ondemand      10382  1
acpi_cpufreq           8593  3
freq_table             4847  2 cpufreq_ondemand,acpi_cpufreq
ipv6                 321209  60
vxportal               5940  0
fdd                   53457  1 vxodm
vxfs                2957815  2 vxportal,fdd
exportfs               4202  0
serio_raw              4816  0
i2c_i801              11190  0
iTCO_wdt              11708  0
iTCO_vendor_support    3022  1 iTCO_wdt
ioatdma               57872  9
dca                    7099  1 ioatdma
i5k_amb                5039  0
hwmon                  2464  1 i5k_amb
i5000_edac             8833  0
edac_core             46055  3 i5000_edac
sg                    30186  0
shpchp                33448  0
e1000e               140051  0
ext4                 353979  3
mbcache                7918  1 ext4
jbd2                  89033  1 ext4
dm_mirror             14003  1
dm_region_hash        12200  1 dm_mirror
dm_log                10088  3 dm_mirror,dm_region_hash
sr_mod                16162  0
cdrom                 39769  1 sr_mod
sd_mod                37221  18
crc_t10dif             1507  1 sd_mod
pata_acpi              3667  0
ata_generic            3611  0
ata_piix              22588  0
ahci                  39105  4
qla2xxx              280129  24
scsi_transport_fc     50893  1 qla2xxx
scsi_tgt              12107  1 scsi_transport_fc
radeon               797054  1
ttm                   46942  1 radeon
drm_kms_helper        32113  1 radeon
drm                  200778  3 radeon,ttm,drm_kms_helper
i2c_algo_bit           5664  1 radeon
i2c_core              31274  5 i2c_i801,radeon,drm_kms_helper,drm,i2c_algo_bit
dm_mod                76856  20 dm_mirror,dm_log

# rmmod gab
ERROR: Module gab is in use

[root@srv-vrts-n2 ~]# modinfo gab
filename:    /lib/modules/2.6.32-71.el6.x86_64/veritas/vcs/gab.ko
license:     Proprietary. Send bug reports to support@veritas.com
description: Group Membership and Atomic Broadcast 5.1.120.000-SP1PR2
author:      VERITAS Software Corp.
srcversion:  F43C75576C05662FB0ED8C8
depends:     llt
vermagic:    2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm:        gab_logflag:int
parm:        gab_numnids:maximum nodes in the cluster (1-128) (int)
parm:        gab_numports:maximum gab ports allowed (1-32) (int)
parm:        gab_flowctrl:queue depth that causes flow-control (1-128) (int)
parm:        gab_logbufsize:internal log buffer size in bytes (8100-65400) (int)
parm:        gab_msglogsize:maximum messages in internal message log (128-4096) (int)
parm:        gab_isolate_time:maximum time to wait for isolated client (16000-240000) (int)
parm:        gab_kill_ntries:number of times to attempt to kill client (3-10) (int)
parm:        gab_kstat_size:Number of system statistics to maintain in GAB 60-240 (int)
parm:        gab_conn_wait:maximum number of wait for CONNECTS message (1-256) (int)
parm:        gab_ibuf_count:maximum number of intermediate buffers (0-32) (int)

# modinfo llt
filename:    /lib/modules/2.6.32-71.el6.x86_64/veritas/vcs/llt.ko
license:     Proprietary. Send bug reports to support@veritas.com
author:      VERITAS Software Corp.
description: Low Latency Transport 5.1.120.000-SP1PR2
srcversion:  AF11D9C04A71073E1ADCFC8
depends:
vermagic:    2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm:        llt_maxnids:maximum nodes in the cluster (1-128) (int)
parm:        llt_maxports:maximum llt ports allowed (1-32) (int)
parm:        llt_nqthread:number of kernel threads to use (2-5) (int)
parm:        llt_basetimer:frequency of base timer ((10 * 1000)-(500 * 1000)) (int)

Hm... OK, I ran /etc/init.d/gab stop with debug:

...
+ echo 'Stopping GAB: '
Stopping GAB:
+ mod_isloaded
++ lsmod
++ grep '^gab\ '
+ return
+ mod_isconfigured
++ LANG=C
++ LC_ALL=C
++ /sbin/gabconfig -l
++ grep 'Driver state'
++ grep -q Configured
+ return
+ /sbin/gabconfig -U
+ ret=1
+ '[' '!' 1 -eq 0 ']'
+ echo 'ERROR! Cannot unload GAB module. Clients still exist'
ERROR! Cannot unload GAB module. Clients still exist
+ echo 'Kill/Stop clients corresponding to following ports.'
Kill/Stop clients corresponding to following ports.
+ LANG=C
+ LC_ALL=C
+ /sbin/gabconfig -a
+ grep -v 'Port a gen'
GAB Port Memberships
...

OK, I used /sbin/gabconfig -l and -U:

# /sbin/gabconfig -l
GAB Driver Configuration
Driver state         : Configured
Partition arbitration: Disabled
Control port seed    : Enabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin       : Disabled
Keep on killing      : Disabled
Quorum flag          : Disabled
Restart              : Enabled
Node count           : 2
Send queue limit     : 128
Recv queue limit     : 128
IOFENCE timeout (ms) : 15000
Stable timeout (ms)  : 5000

# /sbin/gabconfig -U
GAB /sbin/gabconfig ERROR V-15-2-25014 clients still registered

but it did not help... I found this topic https://www-secure.symantec.com/connect/forums/unable-stop-gab-llt-vcs51solaris-10 and stopped ODM:

# /etc/init.d/vxodm stop
Stopping ODM

and ran /etc/init.d/gab stop again, but:

# /etc/init.d/gab stop
Stopping GAB:
GAB has usage count greater than zero. Cannot unload

I checked /sbin/gabconfig -l again:

# /sbin/gabconfig -l
GAB Driver Configuration
Driver state         : Unconfigured
Partition arbitration: Disabled
Control port seed    : Disabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin       : Disabled
Keep on killing      : Disabled
Quorum flag          : Disabled
Restart              : Disabled
Node count           : 0
Send queue limit     : 128
Recv queue limit     : 128
IOFENCE timeout (ms) : 15000
Stable timeout (ms)  : 5000

[root@srv-vrts-n2 ~]# /sbin/gabconfig -a
GAB Port Memberships
===============================================================

and again ran /etc/init.d/gab stop with debug:

...
+ echo 'Stopping GAB: '
Stopping GAB:
+ mod_isloaded
++ lsmod
++ grep '^gab\ '
+ return
+ mod_isconfigured
++ LANG=C
++ LC_ALL=C
++ /sbin/gabconfig -l
++ grep 'Driver state'
++ grep -q Configured
+ return
+ mod_unload
++ lsmod
++ grep '^gab '
++ awk '{print $3}'
+ USECNT=1
+ '[' -z 1 ']'
+ '[' 5 '!=' 0 ']'
+ ps -e
+ grep gablogd
+ '[' 1 -ne 0 ']'
+ GAB_UNLOAD_RETRIES=0
+ '[' 0 '!=' 0 ']'
++ lsmod
++ grep '^gab '
++ awk '{print $3}'
+ USECNT=1
+ '[' 1 -gt 0 ']'
+ echo 'GAB has usage count greater than zero. Cannot unload'
GAB has usage count greater than zero. Cannot unload
+ return 1
...

and again ran lsmod:

# lsmod | grep gab
gab                  283317  1
llt                  180985  1 gab

[root@srv-vrts-n2 ~]# rmmod gab
ERROR: Module gab is in use

What can be done in such a situation?

cannot configure vxfen after reboot
Hello,

We physically moved a server, and after the reboot we cannot configure vxfen.

# vxfenconfig -c
VXFEN vxfenconfig ERROR V-11-2-1002 Open failed for device: /dev/vxfen with error 2

My vxfen.log:

Wed Aug 19 13:17:09 CEST 2015 Invoked vxfen. Starting
Wed Aug 19 13:17:23 CEST 2015 return value from above operation is 1
Wed Aug 19 13:17:23 CEST 2015 output was VXFEN vxfenconfig ERROR V-11-2-1041 Snapshot for this node is different from that of the running cluster.
Log Buffer: 0xffffffffa0c928a0
VXFEN vxfenconfig NOTICE Driver will use customized fencing - mechanism cps
Wed Aug 19 13:17:23 CEST 2015 exiting with 1

Engine version 6.0.10.0
RHEL 6.3

Any idea to help me get vxfen running (and had after that ...)?

vxdisk list showing errors on multiple disks, and I am unable to start cluster on slave node.
Hello,

If anybody has had the same experience and can help me, I will be very thankful.

I am using Solaris 10 (x86, 141445-09) + EMC PowerPath (5.5.P01_b002) + VxVM (5.0,REV=04.15.2007.12.15) on a two-node cluster. This is a fileserver cluster. I added a couple of new LUNs, and when I tried to scan for new disks, the "vxdisk scandisks" command hung, and from that point I was unable to do any VxVM job on that node; every command hung. I rebooted the server in a maintenance window (before the reboot I switched all service groups to the 2nd node). After that reboot I am unable to join the cluster, with this reason:

2014/04/13 01:04:48 VCS WARNING V-16-10001-1002 (filesvr1) CVMCluster:cvm_clus:online:CVMCluster start failed on this node.
2014/04/13 01:04:49 VCS INFO V-16-2-13001 (filesvr1) Resource(cvm_clus): Output of the completed operation (online) ERROR:
2014/04/13 01:04:49 VCS ERROR V-16-10001-1005 (filesvr1) CVMCluster:???:monitor:node - state: out of cluster reason: Cannot find disk on slave node: retry to add a node failed
Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.warning] V-5-1-8222 slave: missing disk 1306358680.76.filesvr1
Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.warning] V-5-1-7830 cannot find disk 1306358680.76.filesvr1
Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.error] V-5-1-11092 cleanup_client: (Cannot find disk on slave node) 222

Here is the output from the 2nd node (working fine):

Disk:      emcpower33s2
type:      auto
flags:     online ready private autoconfig shared autoimport imported
guid:      {665c6838-1dd2-11b2-b1c1-00238b8a7c90}
udid:      DGC%5FVRAID%5FCKM00111001420%5F6006016066902C00915931414A86E011
site:      -
diskid:    1306358680.76.filesvr1
dgname:    fileimgdg
dgid:      1254302839.50.filesvr1
clusterid: filesvrvcs
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2

and here is from the node where I see the problem:

Device:    emcpower33s2
devicetag: emcpower33
type:      auto
flags:     error private autoconfig
pubpaths:  block=/dev/vx/dmp/emcpower33s2 char=/dev/vx/rdmp/emcpower33s2
guid:      {665c6838-1dd2-11b2-b1c1-00238b8a7c90}
udid:      DGC%5FVRAID%5FCKM00111001420%5F6006016066902C00915931414A86E011
site:      -
errno:     Configuration request too large
Multipathing information:
numpaths:  1
emcpower33c state=enabled

Can anybody help me? I am not sure about "Configuration request too large".

IP agent for same mac address interface
Hi all,

Our environment is as follows:

OS: RedHat 6.5
VCS: 6.2.1

Our server has two physical network ports, eth0 and eth1. We created tagged VLANs (vlan515, vlan516, vlan518, vlan520) based on eth0 and eth1. We are able to create an IP resource on vlan518 and fail it over between the two nodes. However, when we create an IP resource on vlan515, we are not able to bring it online.

According to the link https://support.symantec.com/en_US/article.TECH214469.html, a duplicate MAC address can cause this problem. However, I can't figure out where the "MACAddress" attribute mentioned in the solution is in the VCS Java Console. I did manually add a "MACAddress" attribute in main.cf on both the NIC and the IP resource, but it comes back as not supported by the haconf -verify command.

Any hints or solutions for configuring the IP agent resource on interfaces with the same MAC address?

Thanks,
Xentar

How to recover from failed attempt to switch to a different node in cluster
Hello everyone.

I have a two-node cluster that serves as an Oracle database server. The Oracle binaries are installed on disks local to each node (so they are outside the control of the Cluster Manager). I have a disk group made of three 1 TB LUNs from my SAN, six volumes on the disk group (u02 through u07), six mount points (/u02 through /u07), a database listener, and the actual Oracle database.

I was able to successfully bring up these individual components manually and confirmed that the database was up and running. I then tried a "Switch To" operation to see if everything would move to the other node of the cluster. It turns out this was a bad idea.

Within the Cluster Manager GUI, the disk group has a State of Online, an IState of "Waiting to go offline propagate", and a Flag of "Unable to offline". The volumes show as "Offline on all systems", but the mounts still show as online with "Status Unknown". When I try to take the mount points offline, I get the message:

VCS ERROR V-16-1-10277 The Service Group i1025prd to which Resource Mnt_scratch belongs has failed or switch, online, and offline operations are prohibited.

Can anyone tell me how I can fix this?

Ken

VCS dependency question - Service group is not running on the intended node upon cluster startup.
Dear All,

I have a question regarding service group dependencies. I have two parent service groups, "Group1" and "Group2". They depend on a child service group "ServerGroup1_DG", which has been configured as a parallel service group. In the main.cf file, I have configured service group "Group1" to start on Node1 and service group "Group2" to start on Node2. However, the cluster does not start the service group "ServerGroup1_DG" on Node2 before starting the service group "Group2". During cluster startup, the cluster evaluated Node1 as a target node for service group "Group2" and went on to online the service group on Node1. This is not the wanted behaviour. I would like to have service group "Group1" running on Node1 and service group "Group2" running on Node2 when the cluster starts up. Can anyone help shed some light on how I can solve this problem? Thanks.

p/s: The VCS version is 5.0 on the Solaris platform.

main.cf:

group Group1 (
    SystemList = { Node1 = 1, Node2 = 2 }
    AutoStartList = { Node1, Node2 }
    )

requires group ServerGroup1_DG online local firm

group Group2 (
    SystemList = { Node1 = 2, Node2 = 1 }
    AutoStartList = { Node2, Node1 }
    )

requires group ServerGroup1_DG online local firm

group ServerGroup1_DG (
    SystemList = { Node1 = 0, Node2 = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { Node1, Node2 }
    )

CFSMount cfsmount2 (
    Critical = 0
    MountPoint = "/var/opt/xxxx/ServerGroup1"
    BlockDevice = "/dev/vx/dsk/xxxxdg/vol01"
    MountOpt @Node1 = "cluster"
    MountOpt @Node2 = "cluster"
    NodeList = { Node1, Node2 }
    )

CVMVolDg cvmvoldg2 (
    Critical = 0
    CVMDiskGroup = xxxxdg
    CVMActivation @Node1 = sw
    CVMActivation @Node2 = sw
    )

requires group cvm online local firm
cfsmount2 requires cvmvoldg2

Engine log:

2010/06/25 16:05:47 VCS NOTICE V-16-1-10447 Group ServerGroup1_DG is online on system Node1
2010/06/25 16:05:47 VCS WARNING V-16-1-50045 Initiating online of parent group Group3, PM will select the best node
2010/06/25 16:05:47 VCS WARNING V-16-1-50045 Initiating online of parent group Group2, PM will select the best node
2010/06/25 16:05:47 VCS WARNING V-16-1-50045 Initiating online of parent group Group1, PM will select the best node
2010/06/25 16:05:47 VCS INFO V-16-1-10493 Evaluating Node2 as potential target node for group Group3
2010/06/25 16:05:47 VCS INFO V-16-1-10163 Group dependency is not met if group Group3 goes online on system Node2
2010/06/25 16:05:47 VCS INFO V-16-1-10493 Evaluating Node1 as potential target node for group Group3
2010/06/25 16:05:47 VCS INFO V-16-1-10493 Evaluating Node2 as potential target node for group Group2
2010/06/25 16:05:47 VCS INFO V-16-1-10163 Group dependency is not met if group Group2 goes online on system Node2
2010/06/25 16:05:47 VCS INFO V-16-1-10493 Evaluating Node1 as potential target node for group Group2
2010/06/25 16:06:15 VCS NOTICE V-16-1-10447 Group ServerGroup1_DG is online on system Node2

Regards,
Ryan

Reservation conflict on all LUNs of the cluster nodes.
Hi All,

We recently did a Solaris live upgrade from Solaris 9 to Solaris 10, along with a maintenance patch upgrade from 5.0MP1 to 5.0MP3. After making the alternate boot environment active, we got reservation conflict errors for all the LUNs, and it also created a multipathing problem. Below are the logs from the server. I would be thankful for any suggestions or recommendations.

Aug 16 03:28:27 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:27 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:27 P0111CRMDB gab: [ID 316943 kern.notice] GAB INFO V-15-1-20036 Port w gen 103c72d membership 0
Aug 16 03:28:27 P0111CRMDB gab: [ID 674723 kern.notice] GAB INFO V-15-1-20038 Port w gen 103c72d k_jeopardy ;1
Aug 16 03:28:27 P0111CRMDB gab: [ID 513393 kern.notice] GAB INFO V-15-1-20040 Port w gen 103c72d visible ;1
Aug 16 03:28:27 P0111CRMDB gab: [ID 316943 kern.notice] GAB INFO V-15-1-20036 Port v gen 103c72b membership 0
Aug 16 03:28:27 P0111CRMDB gab: [ID 674723 kern.notice] GAB INFO V-15-1-20038 Port v gen 103c72b k_jeopardy ;1
Aug 16 03:28:27 P0111CRMDB gab: [ID 513393 kern.notice] GAB INFO V-15-1-20040 Port v gen 103c72b visible ;1
Aug 16 03:28:28 P0111CRMDB vxfs: [ID 779698 kern.notice] GLM recovery : gen 103c727 mbr 1 0 0 0 flags 0
Aug 16 03:28:28 P0111CRMDB vxfs: [ID 702911 kern.notice] NOTICE: msgcnt 2 mesg 125: V-2-125: GLM restart callback, protocol flag 0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,90 (ssd618):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 12813040  Error Block: 12813040
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386244F
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB vxvm:vxconfigd: [ID 702911 daemon.notice] V-5-1-7899 CVM_VOLD_CHANGE command received
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,a9 (ssd987):
Aug 16 03:28:28 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 55124544  Error Block: 55124544
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862106
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,94 (ssd614):
Aug 16 03:28:28 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 65844000  Error Block: 65844000
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862450
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB vxvm:vxconfigd: [ID 702911 daemon.notice] V-5-1-13170 Preempting CM NID 1
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438645,97 (ssd7):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 69328  Error Block: 69328
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862051
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,ae (ssd982):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 14741104  Error Block: 14741104
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862507
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438647,93 (ssd170):
Aug 16 03:28:28 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 65844000  Error Block: 65844000
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862050
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438647,8c (ssd177):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 69475488  Error Block: 69475488
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386244E
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,97 (ssd611):
Aug 16 03:28:28 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 69328  Error Block: 69328
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862051
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,84 (ssd630):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 39521056  Error Block: 39521056
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386244C
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438647,ab (ssd913):
Aug 16 03:28:28 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 31795296  Error Block: 31795296
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862906
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:28 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,ac (ssd984):
Aug 16 03:28:29 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 14757984  Error Block: 14757984
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862D06
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,7f (ssd635):
Aug 16 03:28:29 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 39857696  Error Block: 39857696
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386204B
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,87 (ssd627):
Aug 16 03:28:29 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 17532480  Error Block: 17532480
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386204D
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438657,86 (ssd628):
Aug 16 03:28:29 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 39816976  Error Block: 39816976
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862C4C
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438645,80 (ssd30):
Aug 16 03:28:29 P0111CRMDB  Error for Command: read(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 50250576  Error Block: 50250576
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 04386244B
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@1a,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438655,93 (ssd456):
Aug 16 03:28:29 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Requested Block: 65844240  Error Block: 65844240
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Vendor: HITACHI  Serial Number: 50 043862050
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]  ASC: 0x2a (reservations released), ASCQ: 0x4, FRU: 0x0
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.warning] WARNING: /ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8005438645,92 (ssd12):
Aug 16 03:28:29 P0111CRMDB  Error for Command: write(10)  Error Level: Retryable
Aug 16 03:28:29 P0111CRMDB scsi: [ID 107833 kern.notice]
Requested Block: 65843280 Error Block: 65843 280Solved4.5KViews1like9Commentsadding new volumes to a DG that has a RVG under VCS cluster
Hi, I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG; this RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database. My question: can I add the disks to the DG and the volumes to the RVG while the database is up and replication is on? If the answer is no, please let me know what should be performed on the RVG and rlink to add these volumes, and also what to perform on the database resource group so that it does not fail over. Thanks in advance.
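As general context for the question above, the procedure usually discussed is along these lines. This is only a hedged sketch, not the thread's accepted answer: all names here (DG `oradg`, RVG `orarvg`, VCS service group `oracle_grp`, new volume `oravol12`, disk `c1t5d0`, the 50g size) are hypothetical, and the exact steps depend on the VVR version, so verify against the Veritas Volume Replicator administrator's guide before running anything.

```shell
# Hypothetical names throughout: oradg (DG), orarvg (RVG), oravol12 (new
# data volume), oracle_grp (VCS service group), c1t5d0 (new disk).

# Optionally freeze the VCS service group first so the cluster does not
# react to the configuration change (unfreeze when done):
hagrp -freeze oracle_grp

# On BOTH the primary and the secondary, add the new disk to the disk
# group and create a volume with the same name and size on each site:
vxdg -g oradg adddisk oradg12=c1t5d0
vxassist -g oradg make oravol12 50g

# From the primary, associate the new volume with the RVG on both sites;
# vradmin addvol can be run while replication is active:
vradmin -g oradg addvol orarvg oravol12

hagrp -unfreeze oracle_grp
```

`vradmin addvol` adds the volume as a data volume on the Primary and the Secondary in one step, provided identically named and sized volumes exist on both sides; if the new volume already contains data, the documented procedure also involves synchronizing it, so treat this only as an outline.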