All the file system backups, with sizes: client name, date of last full backup, and size of backed-up data
Hi, I want to get a report of all the file system backups (no databases) with their sizes: mainly the client name, the date of the last full backup, and the size of the backed-up data. Please help me produce this with a script. An immediate response would be much appreciated. Thank you.
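A report like this is usually built on top of the `bpimagelist` command on the master server (for example `bpimagelist -L -pt Standard -hoursago 720` for the last 30 days of Standard, i.e. UNIX file-system, images). A minimal Python sketch follows; the labelled field names (`Client:`, `Backup Time:`, `Schedule Type:`, `Kilobytes:`) are assumed from typical 7.x `-L` output, so verify them against your own master before relying on it:

```python
# Minimal sketch: summarise full file-system backups from `bpimagelist -L`
# output (e.g. `bpimagelist -L -pt Standard -hoursago 720` on the master).
# Field labels below are assumed from typical 7.x output -- verify locally.

def parse_images(text):
    """Split -L output into one dict per image, keyed by field label."""
    images, current = [], {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue                      # not a "Label: value" line
        key, value = key.strip(), value.strip()
        if key == "Client" and current:   # a new image record starts
            images.append(current)
            current = {}
        current[key] = value
    if current:
        images.append(current)
    return images

def full_backup_report(text):
    """(client, backup time, kilobytes) rows for FULL schedules only."""
    return [(img["Client"], img["Backup Time"], int(img["Kilobytes"]))
            for img in parse_images(text)
            if img.get("Schedule Type", "").startswith("FULL")]
```

Feed the function the captured stdout of `bpimagelist -L` and print or CSV-format the rows as needed.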
Huge size of /usr folder on backup server

I have a query about the size of the /usr folder on a backup server. Please see the following output:

bash-3.2# pwd
/
bash-3.2# du -sh * | grep G
1.1G  opt
11G   proc
2.0G  storage_db
12G   usr
3.0G  var

Note the size of the /usr directory. Now check inside /usr:

bash-3.2# du -sh * | grep G
8.7G  openv

Inside openv:

bash-3.2# du -sh * | grep G
4.0G  db
1.9G  netbackup
2.0G  pack

Inside db:

bash-3.2# du -sh * | grep G
2.0G  data
1.9G  staging

Inside data:

bash-3.2# ls -ltr
total 4099338
-rw-------   1 root  bin     2433024 Sep 16  2012 NBAZDB.db.template
-rw-------   1 root  root          0 Feb 24  2014 vxdbms_conf.lock
-rw-------   1 root  root   26218496 Feb 24  2014 DARS_INDEX.db
-rw-------   1 root  root   26218496 Feb 24  2014 DARS_DATA.db
-rw-------   1 root  root   26218496 Feb 24  2014 SEARCH_INDEX.db
-rw-------   1 root  root   26218496 Feb 24  2014 SEARCH_DATA.db
-rw-------   1 root  root   26218496 Dec 12  01:00 BMR_INDEX.db
-rw-------   1 root  root        617 Apr  7  02:01 vxdbms.conf
-rw-------   1 root  root    2441216 Apr  7  23:54 NBAZDB.db
-rw-------   1 root  root       4096 Apr  7  23:54 NBAZDB.log
-rw-------   1 root  root   26218496 Apr  8  00:01 EMM_INDEX.db
-rw-------   1 root  root 1071763456 Apr  8  00:05 EMM_DATA.db
-rw-------   1 root  root   26218496 Apr  8  00:05 DBM_DATA.db
-rw-------   1 root  root   26218496 Apr  8  00:05 DBM_INDEX.db
-rw-------   1 root  root   26218496 Apr  8  00:05 JOBD_DATA.db
-rw-------   1 root  root    1376256 Apr  8  00:06 BMRDB.log
-rw-------   1 root  root    3588096 Apr  8  00:06 NBDB.db
-rw-------   1 root  root    5836800 Apr  8  00:06 BMRDB.db
-rw-------   1 root  root  773881856 Apr  8  00:06 BMR_DATA.db
-rw-------   1 root  root     327680 Apr  8  00:06 NBDB.log

Queries: Why do the following folders keep growing?
/usr/openv/pack
/usr/openv/db/staging
/usr/openv/db/data
Is there any truncation procedure? The file system is already at 90%, and since the folders above keep growing it will eventually reach 100%. What are Symantec's recommendations on this?
SFHA Solutions 6.0.1 (Solaris): Troubleshooting the dgdisabled error flag

The dgdisabled error flag indicates that configuration changes on a disk group are disabled. This can occur due to any error that prevents further configuration changes on the disk group: for example, if no good disks are found during the disk group import operation, if no valid configuration copies are found on the disks in the disk group, or if writes to all configuration copies fail during an update to the disk group configuration. The dgdisabled error flag is displayed when the Veritas Volume Manager configuration daemon, vxconfigd, loses access to all enabled configuration copies for the disk group. Configuration copies let you back up and restore all configuration data for disk groups, and for objects such as volumes that are configured within the disk groups. Loss of access can occur if power is disrupted or a network cable is disconnected. To recover from loss of access, fix any disk connectivity issues, then deport and re-import the disk group.

Beginning with the Storage Foundation and High Availability (SFHA) 6.0 release for Solaris, a node can join the cluster even if there is a shared disk group in the DGDISABLED state; in earlier releases the node failed to join the cluster.

For more information on troubleshooting the dgdisabled error flag, see:
Removing the error state for simple or nopriv disks in non-boot disk groups
vxdarestore(1m) 6.0.1 manual page: Solaris

For more information on using the vxdisk list command to display status and troubleshoot disk errors, see the following Symantec Connect article:
SFHA Solutions 6.0.1: Using the vxdisk list command to display status and to recover from errors on Veritas Volume Manager disks

Veritas Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
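The recovery described above (fix connectivity, then deport and re-import) can be semi-automated by spotting disabled groups in `vxdg list` output. A hedged sketch: the column layout below is assumed from typical `vxdg list` output (NAME, STATE, ID), so check it against your own system, and review the printed commands before running them:

```python
# Sketch: detect a disabled disk group from `vxdg list` output and emit
# the recovery sequence the text describes (deport, then re-import).
# Column layout (NAME, STATE, ID) is assumed -- verify locally.

def disabled_groups(vxdg_list_output):
    """Names of disk groups whose STATE column contains 'disabled'."""
    names = []
    for line in vxdg_list_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 2 and "disabled" in fields[1]:
            names.append(fields[0])
    return names

def recovery_commands(group):
    """The deport/re-import sequence for one disk group."""
    return ["vxdg deport %s" % group, "vxdg import %s" % group]
```

This only prints the commands; actually running them should wait until the underlying disk connectivity problem is fixed.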
Disk volume is full

This is the layout:
Master server: Solaris 10, NBU 7.1.0.3
Media servers 1 and 2: Linux 2.6, NBU 7.1.0.3

On the master, the NBU GUI under Reports > Problems shows "volume Media1_DD01:Internal_16 is down (Not ok on root)" as well as "volume Media1_DD02:Internal_16 is down (Not ok on root)". Next I have:
- Activity Monitor: failure because of "file write failed"
- No storage units listed under Storage > Storage Units
- Database system error (status 220) under Disk Reports > Disk Storage Unit Status
- df -k on the master server reveals: /dev/dsk/c0t1d0s3 70589121 70492427 0 100% /usr/openv
- ./vxlogview -p 51216 -o 111 -t 24:00:00 reveals an EMM warning message on the master server: "File system for /usr/openv/db/data has 0.138 percent available and requires 1 percent. Not enough disk space on file system. Shutting down NetBackup database (NBDB)"

I know I have to reduce the size of some log files or increase the available space, but I don't know where, how, or what to do; still learning. Thanks for your advice!
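The EMM message quoted above is a plain free-space check: the file system holding /usr/openv/db/data must keep at least 1 percent free or NBDB shuts itself down. A minimal Python 3 sketch of the same check, useful for monitoring the master before it hits the wall (the 1 percent threshold is taken from the quoted warning; this is not a NetBackup interface):

```python
# Sketch: free-space check mirroring the EMM warning quoted above.
# Requires Python 3.3+ for shutil.disk_usage.
import shutil

def percent_free(path):
    """Free space on the file system holding path, as a percentage."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.free / usage.total

def nbdb_space_ok(path="/usr/openv/db/data", required_pct=1.0):
    """True while the NBDB file system still has the required headroom."""
    return percent_free(path) >= required_pct
```

To actually free space here, the usual suspects are the unified logs under /usr/openv/logs and old files left in /usr/openv/db/staging; what is safe to remove depends on your catalog-backup configuration, so check before deleting.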
NetBackup: compression of backups on tape (hardware level)

Hello all. We regularly take backups of SAP systems, and recently I noticed some abnormal behaviour. The policies configured a year ago get their backups compressed at the hardware level (tape-drive level), but when I configure a new policy with all the same parameters, it does not compress. The old policies can write about 1.5 TB of data to a single tape (IBM LTO-4, 800 GB native / 1600 GB compressed), while a new policy can barely write 800 GB. What is going wrong? Even if I copy an old policy to create the new one, it won't compress. Software-level compression is not enabled on the old policies, yet they compress perfectly; the new ones do not. What is the issue, and what is the solution if anyone has solved this? Please help!
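A quick sanity check for a case like this is to compare the data written per tape against the drive's native capacity: if a tape filled to end-of-media holds roughly its native capacity, the drive is effectively not compressing. A tiny worked sketch (assumption: the tape really was written to end-of-media, otherwise the ratio is an underestimate):

```python
# Rough check: data written divided by native capacity approximates the
# effective hardware compression ratio, assuming the tape was filled.
# LTO-4 native capacity (800 GB) is taken from the question above.

LTO4_NATIVE_GB = 800.0

def effective_compression(data_written_gb, native_gb=LTO4_NATIVE_GB):
    """~1.0 means no compression; the old policies' ~1500 GB/tape is ~1.9x."""
    return data_written_gb / native_gb
```

If the new policies land near 1.0 while the data itself is unchanged, look at what differs on the write path (drive/density selection, or whether the new stream is already-compressed data) rather than at the policy attributes.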
GEN_DATA file list directive for UNIX/Linux performance tuning

Hello everyone. I stumbled upon this tech note today and wanted to share it with you. It works on Solaris and Linux only. The tool works really well and can produce a huge amount of random data in a very short time. You can even "restore" the randomly generated data; the restored data is verified but otherwise discarded.
Documentation: How to use the GEN_DATA file list directives with NetBackup for UNIX/Linux clients for performance tuning
http://www.symantec.com/docs/TECH75213
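For a feel of how it is used: the directives go directly into the policy's backup selections list in place of file paths. The shape below is from memory of TECH75213 and the values are purely illustrative; take the exact directive names, defaults, and limits from the tech note itself:

```
GEN_DATA
GEN_KBSIZE=1048576
GEN_MAXFILES=1000
GEN_PERCENT_RANDOM=50
```

Because the data is generated on the client, this isolates client CPU, network, and tape throughput from real file-system read speed, which is what makes it handy for performance tuning.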
How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility?

Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume and file system from a Solaris SPARC platform to VxVM/VxFS 6.0.3 on Linux 6.5. The file system size is 2 TB, of which about 1 TB is in use. How much space is required for the recovery file used by the fscdsconv utility during conversion? Is there a formula for this?
Thanks a lot, Cecilia
Tapes are taking too much time to expire

Hi, for the past 3 days we have been facing an issue with tape expiration. In our environment we expire 30 tapes daily with the bpexpdate command. Previously each tape expired within 5 minutes, but now it is taking almost 40 minutes per tape. We haven't made any upgrades or other changes recently; the problem appeared suddenly 3 days ago. Please help.
Details:
NetBackup version: 7.0.1
Master server OS: SunOS 5.10
Media server OS: AIX 6.1 (2 media servers)
Regards, Sai
NetBackup Solaris media server to 5230 appliance migration

Environment: Solaris 10 master server with NetBackup 7.6.0.1, and Solaris 10 media servers with 7.5.0.6.
Clarification needed: we are going to migrate our physical master server to a virtual master server with the same name (catalog migration), and we are going to decommission our Solaris media servers in favour of a NetBackup 5230 appliance. My concern is that after the catalog migration to the new (virtual) master, I will be decommissioning the media servers used by the old master. Will this affect restores in the new infrastructure that rely on the old disk pools, now served by the new appliance, and how does that work? Or will my old disk pools also be deleted when I decommission the media servers?
Communication errors

Master: Solaris 10, NBU 7.6.0.3
Client: Solaris 10, NBU 7.0
Media: HP-UX 11.31, NBU 7.6.0.3

The master is able to communicate with the client; however, there is some issue with connectivity between the media server and the client. bptestbpcd fails when it is executed from the media server, and backups fail with status code 21. I am unable to find anything wrong: ping and telnet to the bpcd, vnetd, and pbx ports work in both directions, and bpclntcmd also provides appropriate output. I have even tried changing the connect options in the client attributes, but nothing has worked so far; this was working earlier. Output of bptestbpcd attached. Please suggest.
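For situations like this, a scripted TCP check of the well-known NetBackup ports (1556 for PBX, 13724 for vnetd, 13782 for bpcd) from the media server can confirm the raw reachability side quickly. A minimal sketch; note that a successful TCP connect only proves the port answers, while bptestbpcd exercises the full bpcd handshake (certificates, host resolution, connect-back), so this narrows the problem down rather than replacing it:

```python
# Sketch: TCP reachability check for the standard NetBackup ports.
# 1556 = PBX, 13724 = vnetd, 13782 = bpcd (well-known defaults).
import socket

NBU_PORTS = {"pbx": 1556, "vnetd": 13724, "bpcd": 13782}

def can_connect(host, port, timeout=5.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_client(host):
    """Map each service name to whether its port is reachable on host."""
    return {name: can_connect(host, port) for name, port in NBU_PORTS.items()}
```

If all three ports answer from the media server yet bptestbpcd still fails, the problem is above the socket layer: typically name resolution mismatches, connect-back settings, or the 7.0-vs-7.6 connect options between this old client and the newer servers.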