SF 6.0 does not recognize SF 4.1 Version 120 simple disk?
I am currently preparing to replace two old SLES 9 systems with new SLES 11 machines. The new machines have SF 6.0 Basic and are able to see and read (dd) the SAN FC disks currently in production on the SLES 9 systems. The disks are Version 120, originally created by and still in use under SF 4.1:

# vxdisk list isar1_sas_2
Device:     isar1_sas_2
devicetag:  isar1_sas_2
type:       simple
hostid:     example4
disk:       name=isar1_sas_2 id=1341261625.7.riser5
group:      name=varemadg id=1339445883.17.riser5
flags:      online ready private foreign autoimport imported
pubpaths:   block=/dev/disk/by-name/isar1_sas_2 char=/dev/disk/by-name/isar1_sas_2
version:    2.1
iosize:     min=512 (bytes) max=1024 (blocks)
public:     slice=0 offset=2049 len=33552383 disk_offset=0
private:    slice=0 offset=1 len=2048 disk_offset=0
update:     time=1372290815 seqno=0.83
ssb:        actual_seqno=0.0
headers:    0 248
configs:    count=1 len=1481
logs:       count=1 len=224
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-001498[001250]: copy=01 offset=000231 enabled
 log      priv 001499-001722[000224]: copy=01 offset=000000 enabled

# vxdg list varemadg | grep version
version:    120

But on the new systems SF 6.0 does not recognize the disk groups at all:

# vxdisk list isar1_sas_2
Device:     isar1_sas_2
devicetag:  isar1_sas_2
type:       auto
info:       format=none
flags:      online ready private autoconfig invalid
pubpaths:   block=/dev/vx/dmp/isar1_sas_2 char=/dev/vx/rdmp/isar1_sas_2
guid:       -
udid:       Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
site:       -

When I do a hexdump of the first few sectors, the contents look pretty much the same on both machines. According to articles like TECH174882, SF 6.0 should be more than happy to recognize any disk layout between Version 20 and 170. Any hints as to what I might be doing wrong?
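As a starting point for diagnosis, here is a minimal sketch of making the new host rescan and report what it finds on those LUNs. These are standard VxVM utilities; the disk group name varemadg is taken from the output above, and the import should only be attempted once the SLES 9 hosts have deported the group:

vxdisk scandisks          # rescan the operating system device tree for new or changed LUNs
vxdctl enable             # ask vxconfigd to rebuild its device list
vxdisk -o alldgs list     # list disks together with any disk group found in their private regions
vxdg import varemadg      # attempt the import once the old hosts have released the group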
Maxuproc not getting updated even after reboot

Hi,

We got a request to increase "maxuproc for wt82369 by 1.5 times". To implement it, we made the necessary modification on the global zone (wt81958). Normally there is a relationship between the max_nprocs and maxuprc values:

maxuprc = max_nprocs - reserved_procs (reserved_procs defaults to 5)

In this case we changed max_nprocs from 30000 to 50000:

[root@wt81958 GLOBAL] /etc # cat /etc/system | grep max_nprocs
set max_nprocs=50000

After the global zone reboot, the value is still not updated when we run sysdef:

[root@wt81958 GLOBAL] /root # sysdef | grep processes
30000   maximum number of processes (v.v_proc)
29995   maximum processes per user id (v.v_maxup)

Can anyone tell us whether we missed anything needed to make the change take effect?

Thanks,
senthilsam
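For cross-checking, a small sketch of how to read the values the running kernel actually has versus what is staged in /etc/system (standard Solaris tools; the parameter names are the ones discussed above):

echo "max_nprocs/D" | mdb -k    # value in the running kernel
echo "maxuprc/D" | mdb -k       # derived per-user limit in the running kernel
grep -i nprocs /etc/system      # value staged for the next boot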
VxFS 6.0.1 fails on RHEL 6.3

I am unable to mount VxVM volumes with VxFS file systems on RHEL 6.3 (2.6.32-279.11.1.el6.x86_64).

# rpm -q VRTSvxfs
VRTSvxfs-6.0.100.200-RHEL6.x86_64

# mount /vol01
UX:vxfs mount.vxfs: ERROR: V-3-22168: Cannot open portal device: No such file or directory
UX:vxfs mount.vxfs: ERROR: V-3-25255: mount.vxfs: You don't have a license to run this program

# lsmod | grep vx
vxspec      3366  8 vxio
vxio     3763980  1 vxspec
vxdmp     408656  5 vxspec,vxio

# ls -la /dev/vxportal
ls: cannot access /dev/vxportal: No such file or directory

I have installed the latest patch available. Please help.

Thanks,
Ramassh
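A hedged checklist for the missing portal device: the module names below are the standard ones shipped in the VRTSvxfs package, so verify them against your installation before relying on this sketch.

lsmod | egrep 'vxfs|vxportal|fdd'   # vxportal is the module that creates /dev/vxportal
modprobe vxportal                   # load the portal module if it is missing
modprobe fdd
ls -l /dev/vxportal                 # the device node should exist once vxportal is loaded
vxlicrep                            # confirm a valid VxFS license key is actually installed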
Versions of Veritas Volume Manager

I am running Veritas Volume Manager on HP-UX; the version is B.05.10.01. It includes an embedded Java at version 1.6.0.06. Is there a more recent release that would bump the embedded Java to a higher version? Alternatively, removing the embedded Java would also work for us. Thanks.
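A small sketch of how to list what is actually installed on the HP-UX host, in case it helps pin down the versions in question (standard SD-UX command; the grep pattern is only an example):

swlist -l product | grep -iE 'vrts|java|jre'   # installed Veritas products and Java components with their revisions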
Volume Manager 5.1SP1 VEA GUI

I have upgraded a Solaris 9 machine from VxVM 4 to 5.1SP1. I followed the recommendations to install the VRTSobgui components for VEA, since the GUI is no longer packaged with 5.1. /etc/init.d/isisd does not start at machine boot, and when I start it manually I get an "Error reading registry" message. I also notice that the server no longer has a VRTSobc directory in /opt; there is a VRTSob directory, however. This machine previously had Cluster Server running on it, which I removed before upgrading Volume Manager. Volume Manager itself is functioning perfectly; I am just trying to get the VEA GUI (over X) working again. Thanks.
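A quick sketch for verifying that the expected VEA packages survived the upgrade (standard Solaris packaging tools; the package names are the ones mentioned above):

pkginfo -l VRTSob VRTSobgui   # both the VEA service and GUI packages should be installed, with matching versions
/etc/init.d/isisd start       # then retry the manual start and note the exact registry error reported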
Migrating to a new SAN

We're in the process of moving to new datacenters. All servers will be moved over, but the SAN won't. The SAN will be replicated to a new SAN in the new datacenters by our SAN admins, which means the LUNs on the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host:

c1t50050768014052E2d129s2  auto:cdsdisk  somedisk01  somedg  online

This disk will get the same LUN number, but the target name will probably differ since it is new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants?

If I could do this my way, I would present the LUNs on the new SAN to my servers and do a normal host-based migration: add the new LUN to the disk group, mirror the data, then remove the old LUN. However, I'm told the hosts in the current datacenter won't be able to see the new SAN.
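As a sketch of what the check on the relocated hosts might look like once the replicated LUNs are presented, assuming VxVM identifies disks by the contents of their private regions rather than by the c#t#d# device names; somedg is the disk group from the example above:

vxdisk scandisks         # pick up the newly presented LUNs
vxdctl enable            # rebuild the vxconfigd device list
vxdisk -o alldgs list    # the replicated disks should report somedg even before it is imported
vxdg import somedg       # import once the old hosts no longer hold the group
vxvol -g somedg startall # start the volumes in the imported group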
Doubts on VxVM/VCS upgrade & root disk encapsulation

Hi All,

I have the queries below, please:

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries have to be commented out? Is it only

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

or also the entries below?

forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab and llt modules are unloaded to upgrade to 4.1MP2, should I unload these modules again to further upgrade to 5.1SP1, and again for 5.1SP1RP4 (or 6.0)? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules once enough for the further upgrades? My plan is to upgrade 4.1 -> 4.1MP2 -> 5.1SP1 -> 5.1SP1RP4 (or 6.0).

3) Before upgrading, should I also stop and unload the modules listed below (see the sketch following this post)?

 24 12800a8   26920 268  1  vxdmp (VxVM 4.1z: DMP Driver)
 25 7be00000 2115c8 269  1  vxio (VxVM 4.1z I/O driver)
 27 12a4698    13f0 270  1  vxspec (VxVM 4.1z control/status driver)
213 7b2d7528    c40 272  1  vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
214 7ae00000 1706a8  20  1  vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)

If yes, should I stop and unload them after each upgrade, or is doing it once enough?

4) Once the OS comes up with the native disks (c#t#d#s#), in order to bring the boot disk under VxVM control we need to encapsulate it using vxdiskadm. My doubt is: will rootdg, rootvol, the plexes and the subdisks be created automatically? I need a little clarification on this, please.

A response is highly appreciated as always. Thank you very much.

Regards,
Danish.
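For questions 2 and 3, a minimal sketch of the stop-and-unload sequence on a Solaris host. The init scripts and module names are the ones mentioned in the post; the modunload IDs must be taken from your own modinfo output, and dependent modules have to be unloaded before the modules they use:

hastop -local            # stop VCS on this node first, if it is still configured
/etc/init.d/vxfen stop   # stop fencing, then the cluster membership and heartbeat layers
/etc/init.d/gab stop
/etc/init.d/llt stop
modinfo | grep -i vx     # note the Id column for each vx module
modunload -i 213         # unload by Id (here vxportal); then vxfs, followed by vxspec, vxio and vxdmp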
Upgrade of server

Hi all!

I have some outdated RHEL 5 installations running an older VxFS 5.x. Some weeks back I popped out one of the mirrored OS disks and installed RHEL 7 with VxFS 6.2.1 on it. After the installation I found no way to mount the old file systems: there were no devices under /dev/vx/dsk or /dev/vx/rdsk. I could see the disks with 'vxdisk list' and the old disk groups with vxdg (I don't remember the exact syntax I used). The disk layout was already version 7 on all file systems, which seems to be supported in 6.2.1.

At that point I felt lost and wasn't going to risk losing any data by experimenting, so I reverted the changes and booted RHEL 5 again. I'm quite sure the task above can be solved - could someone please point me in the right direction?

/Sverre.
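A sketch of the sequence that would normally make the /dev/vx/dsk nodes appear on the new installation, assuming the disk groups simply are not imported yet (olddg and vol01 are placeholder names, not taken from the post):

vxdisk -o alldgs list                        # disk groups on non-imported disks are shown in parentheses
vxdg import olddg                            # import the old disk group on the RHEL 7 side
vxvol -g olddg startall                      # device nodes appear under /dev/vx/dsk/olddg once the volumes start
mount -t vxfs /dev/vx/dsk/olddg/vol01 /mnt   # then mount the VxFS file system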
We have a request to upgrade all of our SF installations to SP1RP3, and some may already be at that level. How do you verify which level a server is running? Running swlist on an HP-UX 11.31 server shows:

VRTSdbed   5.1.100.000   Veritas Storage Foundation for Oracle by Symantec
VRTSsfmh   3.1.429.0     Veritas Storage Foundation Managed Host by Symantec
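A hedged sketch of how the installed level could be checked in more detail on HP-UX (standard SD-UX commands; the grep patterns are only examples and the exact bundle names vary between releases):

swlist -l product | grep -iE 'vrts|vxvm|vxfs'   # per-package revision strings for all Veritas components
swlist -l bundle | grep -i veritas              # bundle-level revisions, which normally reflect the SP/RP level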