vxconfigd core dumps at vxdisk scandisks after zpool removed from ldom
Hi, I'm testing InfoScale 7.0 on Solaris with LDoms. Creating a ZPOOL in the LDom works, but something is not working properly. On the LDom console I see:

May 23 16:19:45 g0102 vxdmp: [ID 557473 kern.warning] WARNING: VxVM vxdmp V-5-3-2065 dmp_devno_to_devidstr ldi_get_devid failed for devno 0x11500000000
May 23 16:19:45 g0102 vxdmp: [ID 423856 kern.warning] WARNING: VxVM vxdmp V-5-0-2046 : Failed to get devid for device 0x20928e88

After I destroy the ZPOOL, I would like to remove the disk from the LDom. To be able to do that, I first disable the path and remove the disk from VxVM:

/usr/sbin/vxdmpadm -f disable path=c1d1s2
/usr/sbin/vxdisk rm c1d1s2

After this I am able to remove the disk from the LDom using ldm remove-vdisk. However, the DMP configuration is not cleaned up:

# /usr/sbin/vxdmpadm getsubpaths ctlr=c1
NAME         STATE[A]      PATH-TYPE[M]  DMPNODENAME  ENCLR-TYPE   ENCLR-NAME    ATTRS
================================================================================
NONAME       DISABLED(M)   -             NONAME       OTHER_DISKS  other_disks   STANDBY
c1d0s2       ENABLED(A)    -             c1d0s2       OTHER_DISKS  other_disks   -
#

If I run vxdisk scandisks at this stage, the vxdisk command hangs and vxconfigd dumps core:

# file core
core: ELF 32-bit MSB core file SPARC Version 1, from 'vxconfigd'
# pstack core
core 'core' of 378: vxconfigd -x syslog -m boot
------------ lwp# 1 / thread# 1 ---------------
001dc018 ddl_get_disk_given_path (0, 0, 0, 0, 66e140, 0)
001d4230 ddl_reconfigure_all (49c00, 0, 400790, 3b68e8, 404424, 404420) + 690
001b0bfc ddl_find_devices_in_system (492e4, 3b68e8, 42fbec, 4007b4, 4db34, 0) + 67c
0013ac90 find_devices_in_system (2, 3db000, 3c00, 50000, 0, 3d9400) + 38
000ae630 ddl_scan_devices (3fc688, 654210, 0, 0, 0, 3fc400) + 128
000ae4f4 req_scan_disks (660d68, 44fde8, 0, 654210, ffffffec, 3fc400) + 18
00167958 request_loop (1, 44fde8, 3eb2e8, 1800, 19bc, 1940) + bfc
0012e1e8 main (3d8000, ffbffcd4, ffffffff, 42b610, 0, 33bb7c) + f2c
00059028 _start (0, 0, 0, 0, 0, 0) + 108

Thanks, Marcel
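For anyone hitting the same stale NONAME subpath: a removal sequence along the lines below is what I would try, so that the vdisk is detached and the Solaris device tree is cleaned up before VxVM rescans. This is only a sketch; the path and controller names come from the output above, the vdisk and domain names in the ldm step are illustrative, and the ldm command is run on the control domain, not in the guest.

    # On the guest: take the path out of DMP and remove the disk from VxVM
    /usr/sbin/vxdmpadm -f disable path=c1d1s2
    /usr/sbin/vxdisk rm c1d1s2

    # On the control domain: detach the virtual disk from the guest domain
    ldm remove-vdisk vdisk1 g0102           # vdisk name "vdisk1" is illustrative

    # Back on the guest: clear stale device links, then let VxVM rescan
    devfsadm -Cv
    /usr/sbin/vxdisk scandisks
    /usr/sbin/vxdmpadm getsubpaths ctlr=c1  # check that no NONAME subpaths remain

If a NONAME entry still shows up after that, the vxconfigd core is probably worth a support case, since the pstack above points into the DDL rescan path.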
Failed to install EAT on system

I am evaluating the InfoScale Availability tool. While installing on RHEL 6.4 (on VMware) I get an error that says "Failed to install EAT on system". I have tried to install several times after a proper uninstall and get the same result. I also tried installing with the single-server option and still get the same error. Can anyone point out how to resolve this issue?
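For what it's worth, "Failed to install EAT on system" is usually easier to diagnose from the CPI installer logs than from the summary screen. A first pass might look like the sketch below; the log directory is the standard installer log location, and the hostname check reflects a common cause (the authentication setup tripping over inconsistent name resolution) rather than a confirmed diagnosis for this system.

    # Look in the most recent installer log directory for the underlying EAT error
    ls -lt /opt/VRTS/install/logs | head
    grep -ri "EAT" /opt/VRTS/install/logs | tail

    # Check that the host name resolves consistently before retrying the install
    hostname
    getent hosts $(hostname)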
SQL Server with Veritas HA

Hi All, I am configuring a SQL Server cluster using Veritas HA, with DR replication using VVR. I have configured the cluster and storage as per the cluster implementation document, with the same SQL instance installed on each node, but the SQL Server agent configuration wizard gives an error stating that there are no instances available to configure. Can someone help me with a solution for the issue I'm facing?

Windows Server 2012 R2
SQL Server 2012
Veritas InfoScale 7

BR,
New Infoscale v7 Installation - Can't add Hosts

This is a new installation of InfoScale v7 on RHEL 6. Bidirectional port 5634 is open between the InfoScale Management Server (RHEL) and the managed host (Solaris 10 SPARC), and port 22 is open one way from the management server to the managed host. The host has VRTSsfmh running and listening on port 5634:

solvcstst01:/etc/ssh {root}: ps -ef|grep xprtld
    root  3893     1   0   Mar 01 ?        0:47 /opt/VRTSsfmh/bin/xprtld -X 1 /etc/opt/VRTSsfmh/xprtld.conf
    root  7477 24284   0 08:28:34 pts/1    0:00 grep xprtld

I've temporarily allowed direct root login from the management server to the managed host and entered those credentials. The error when adding the host from the InfoScale server is "Registration with Management Server failed".

Error log:

Add Host Log
------------
Started [04/12/2016 08:30:23]
[04/12/2016 08:30:23] [solvcstst01.vch.ca] type rh solvcstst01.vch.ca cms
[04/12/2016 08:30:23] [solvcstst01.vch.ca] creating task for Add host
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Check if MH is pingable from MS and get vital information from MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "5.0.196.0", "LOCAL_NAME" : "solvcstst01.vch.ca", "LOCAL_ADDR" : "139.173.8.6", "PEER_NAME" : "UNKNOWN", "PEER_ADDR" : "10.248.224.116", "LOCAL_TIME" : "1460475024", "LOCALE" : "UNKNOWN", "DOMAIN_MODE" : "FALSE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "SunOS", "OSRELEASE" : "5.10", "CPUTYPE" : "sparc", "OSUUID" : "{00020014-4ffa-b092-0000-000084fbfc3f}", "DOMAINS" : { } }
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Checking if MH version [5.0.196.0] is same or greater than as that of least supported MH version [5.0.0.0]
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_PRECONFIG_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_PRECONFIG_CHK","STATE":"SUCCESS","PROGRESS":1}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] retrieving Agent password
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_INPUT_PARAM_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INPUT_PARAM_CHK","STATE":"SUCCESS","PROGRESS":6}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] user name is "root"
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_CONTACTING_MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_CONTACTING_MH","STATE":"SUCCESS","PROGRESS":20}}
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: HTTP/1.1 302 OK Status: 307 Moved Location: /admin/htdocs/cs_config.htm
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 768
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Checking to see if CS is reachable from MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "7.0.0.0", "LOCAL_NAME" : "lvmvom01.healthbc.org", "LOCAL_ADDR" : "10.248.224.116", "PEER_NAME" : "solvcstst01.vch.ca", "PEER_ADDR" : "139.173.8.6", "LOCAL_TIME" : "1460475025", "LOCALE" : "en_US.UTF-8", "DOMAIN_MODE" : "TRUE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "Linux", "OSRELEASE" : "2.6.32-573.22.1.el6.x86_64", "CPUTYPE" : "x86_64", "OSUUID" : "{00010050-56ad-1e25-0000-000000000000}", "DOMAINS" : { "sfm://lvmvom01.healthbc.org:5634/" : { "admin_url" : "vxss://lvmvom01.healthbc.org:14545/sfm_admin/sfm_domain/vx", "primary_broker" : "vxss://lvmvom01.healthbc.org:14545/sfm_agent/sfm_domain/vx" } } }
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:25] [solvcstst01.vch.ca] CS host (lvmvom01.healthbc.org) is resolvable
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Trying to figure out if host is already part of the domain
[04/12/2016 08:30:25] [solvcstst01.vch.ca] ADD_HOST_SEND_CRED_MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_SEND_CRED_MH","STATE":"SUCCESS","PROGRESS":30}}
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Output: SUCCESS
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:28] [solvcstst01.vch.ca] push_exec command succeeded [/opt/VRTSsfmh/bin/getvmid_script]
[04/12/2016 08:30:29] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:29] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":75}}
[04/12/2016 08:30:29] [solvcstst01.vch.ca] Executing /opt/VRTSsfmh/bin/xprtlc -u "root" -t 1200 -j /var/opt/VRTSsfmh/xprtlc-payload-x2s4xFEb -l https://solvcstst01.vch.ca:5634/admin/cgi-bin/sfme.pl operation=configure_mh&cs-hostname=lvmvom01.healthbc.org&cs-ip=10.248.224.116&mh-hostname=solvcstst01.vch.ca&agent-password=******
[04/12/2016 08:30:32] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:32] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:32] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:33] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:33] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:33] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:45] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:45] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:45] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:56] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:56] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:56] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:56] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:56] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:56] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:58] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:58] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_die
[04/12/2016 08:30:58] [solvcstst01.vch.ca] CONFIGURE_MH_REG_FAILED
[04/12/2016 08:30:58] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":-1,"ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"job_add_host","OUTPUT":"","STATE":"FAILED","PROGRESS":100}}{"RESULT":{"RETURNCODE":-1,"UMI":"V-383-50513-5760","ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"add_host","TASKID":"{iHUXu2IK1ZRkTo7H}"}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_dead
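The job gets as far as pushing credentials and then dies at CONFIGURE_MH_REG_FAILED, which is the step where the managed host registers back with the Management Server. A few basic checks from both ends, along the lines below, might narrow it down; port 14545 is taken from the vxss:// admin_url/primary_broker entries in the log above, and treating it as a required path from the managed host is an assumption on my part, not something the log proves.

    # On solvcstst01 (managed host): confirm the CS resolves and its ports answer
    getent hosts lvmvom01.healthbc.org
    telnet lvmvom01.healthbc.org 5634
    telnet lvmvom01.healthbc.org 14545    # port seen in the vxss:// URLs in the log

    # On lvmvom01 (Management Server): confirm the managed host resolves to the
    # address it reports during registration
    getent hosts solvcstst01.vch.ca       # should line up with 139.173.8.6 from the log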
Problems installing Infoscale Storage

Hello, I am having a problem installing Veritas InfoScale Foundation on my openSUSE 12 system. Here is the installer summary:

The following warnings were discovered on the system:

Veritas Infoscale Enterprise Install did not complete successfully
VRTSvxvm rpm failed to install on linux-kaoc
VRTSaslapm rpm failed to install on linux-kaoc
VRTSglm rpm failed to install on linux-kaoc
VRTScavf rpm failed to install on linux-kaoc

Can you help me with this issue?
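One thing worth checking first: openSUSE is not the same as SLES, and the InfoScale 7.0 Linux support matrix is built around SLES and RHEL kernels, so the packages that ship kernel modules (VRTSvxvm, VRTSglm) may simply have nothing that matches that kernel; treat that as a likely explanation rather than a confirmed one. Either way, the real rpm errors are in the installer logs, roughly as sketched below (standard CPI log location; the rpm path on the install media is illustrative):

    # Pull the rpm failure details out of the newest installer log directory
    ls -lt /opt/VRTS/install/logs | head
    grep -i "VRTSvxvm" /opt/VRTS/install/logs/*/* | tail

    # Dry-run the failing package to surface dependency or kernel problems directly
    rpm -ivh --test /mnt/infoscale/rpms/VRTSvxvm-*.rpm    # media path is illustrative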
Veritas InfoScale 7.0: Configuring I/O fencing

I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition thinks the other side is down and takes over its resources.

When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to "About planning to configure I/O fencing" to perform the preparatory tasks and then configure I/O fencing.

For more details about I/O fencing configuration, see the Cluster Server Configuration and Upgrade Guide. InfoScale documentation for other platforms can be found on the SORT website.
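As a concrete illustration of the disk-based flavour, the per-node configuration boils down to two small files plus a coordinator disk group. The sketch below assumes SCSI3-PR capable coordinator disks and uses an illustrative disk group name; the authoritative procedure, including the installer-driven path (e.g. ./installer -fencing), is in the Cluster Server Configuration and Upgrade Guide.

    # /etc/vxfendg on each node: name of the coordinator disk group (illustrative)
    vxfencoorddg

    # /etc/vxfenmode on each node: disk-based (SCSI-3) fencing over DMP
    vxfen_mode=scsi3
    scsi3_disk_policy=dmp

    # After fencing is restarted on all nodes, verify the mode and membership
    vxfenadm -d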
Infoscale cluster software installation aborted due to netbackup client

Hello All, I am facing an issue with the InfoScale 7.0 Veritas RAC software installation. The installation aborts with this error:

CPI ERROR V-9-40-6501 Entered systems have different products installed:
Product Installed - Product Version - System Name
None - None - hostname
InfoScale Enterprise - 7.0.0.000 - hostname
Systems running different products must be operated independently

The following warnings were discovered on the systems:

CPI WARNING V-9-40-3861 NetBackup 7.6.0.4 was installed on hostname. The VRTSpbx rpms on hostname will not be uninstalled

Has anybody faced this issue before? We have Linux 6.6 hosts where we get this error. Any help resolving it would be appreciated.
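The CPI error means the installer believes one node already has InfoScale Enterprise 7.0.0.000 installed while the other has nothing, so it refuses to treat them as one set of systems; the NetBackup warning about VRTSpbx is normally only informational, though that is an assumption here. Comparing the Veritas package sets on the two nodes and re-running the precheck is a reasonable first step (node names below are placeholders):

    # Run on each node and diff the results: leftover VRTS packages from an earlier
    # attempt can make one node look like it already runs InfoScale Enterprise
    rpm -qa | grep '^VRTS' | sort > /tmp/vrts-rpms.$(hostname)

    # Let the installer spell out which packages it is keying on
    ./installer -precheck node1 node2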
Requests "by host" still shows "In Progress"

Using the "Deployment -> Hot Fixes" option to install the package vom-Patch-7.0.0.101, the installation froze and the web application restarted, but the task still shows as "In Progress" even after it came back. Checking the server, the version was updated, yet "Deployment -> Hot Fixes" shows the patch as "Not Installed" and the "Requests" tab shows it as "In Progress". Can we stop/delete/update this task so it no longer shows this way? The wizard won't allow cancelling because it is a "by Host" type installation.