Re: BE2012 : deduplication and OST not proposed on install

Aha, thanks for your help, both. I am testing in a virtual machine ... with Windows 2003 ... x86! So I have my answer. I'll run the test on an x64 machine and get back to you afterwards.

BE2012 : deduplication and OST not proposed on install

Hi all,
While testing the BE2012 installation against BE2010 R3, I can't find the checkbox options I had for OST (OpenStorage) or for deduplication. I could understand deduplication being included in the BE2012 code ... why not ... but the OST option is not presented here. I relaunched the setup after installing BE2012 and the fsost plugin (FalconStor), but no additional checkboxes appear in the installation menu :(
Thx all

Re: Optimized Duplication - How to switch it off ?

Thanks for this workaround. I like the idea of a storage unit group to prevent optimized duplication. By the way, the error may come from the buffer size, which differs between disk and tape. What I don't understand yet is why duplication works from tape to OST_DP, but not from OST_DP to tape.

Optimized Duplication - How to switch it off ?

Hi, can you help? Using NBU 7.1.0.2 and OpenStorage we see strange behavior:
- Backup to the dedupe disk pool: OK (verify and restore OK)
- Backup to tape: OK (verify and restore OK)
- Duplication from tape to the dedupe disk pool: OK (verify and restore from copy 2 OK; the image report shows the same number of files and the same size on both media)
- Duplication from dedupe disk pool to dedupe disk pool: OK (verify, restore, and image report all OK). Strangely, during this duplication it announces it will do an optimized duplication, since source and destination are both deduplicated and connected. I didn't have to configure anything.
Problem: duplication from the dedupe disk pool to tape, whether by bpduplicate, the console, Vault, or an SLP. The job says it completed OK, BUT:
- restore from copy 2 fails
- verify from copy 2 fails: it says there are more files in the database than on the image being verified
- and indeed, the image report shows that the amount of data on the original copy 1 is much larger (in KB) than on copy 2!

Intuition: NetBackup reads the data from the deduplicated disk pool as if it wanted to use optimized duplication, but the destination can't manage this small flow of data.

Question: is it possible to completely prevent optimized duplication, via bpsetconfig or in bp.conf?

FYI: the OpenStorage solution is FalconStor. There is a ticket open with both FalconStor and Symantec, but they don't feel comfortable with this specific case. It looks as if duplication from OST should always be optimized, and never go toward tape!!! I don't have any other location, such as BasicDisk, against which to test the duplication from OST.

Schedule SLP for deactivation (Solved)

Duplication by SLP is not something you can schedule from the console. I will propose here two ways to work around this matter, for which Symantec didn't provide a tool: one quite well known, and a new one, more elegant but also more discreet.

First Method - scripting

Re: What is the best way to correct a wrong policy name?

UNIX master:
#mv /usr/openv/netbackup/db/class/oldname /usr/openv/netbackup/db/class/newname
Windows master:
#rename "C:\Program Files\Veritas\NetBackup\db\class\oldname" newname
If you want to be sure the change is picked up by NetBackup immediately, type:
#nbpemreq -updatepolicies

Re: NetBackUp 6.5.4 Percentage Complete not showing

In the image database, under ...NetBackup\db\images\<client>\<TimeStamps>\, two files are created at the end of a backup job. The smaller one is plaintext metadata.
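That smaller metadata file drives a simple sizing rule, detailed just below: the estimate for the next job is the last job's KBYTES plus 20%. A minimal shell sketch of that arithmetic, using the sample figures from this thread:

```shell
#!/bin/sh
# KBYTES as recorded in the plaintext image header (sample value from this thread)
KBYTES=6303776

# NetBackup's estimate for the NEXT job is KBYTES + 20%
# (integer arithmetic; the fractional part is dropped)
ESTIMATED_KBYTES=$(( KBYTES * 120 / 100 ))

echo "ESTIMATED_KBYTES $ESTIMATED_KBYTES"   # ESTIMATED_KBYTES 7564531
```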
On its 5th line: KBYTES 6303776
On the 8th line from the bottom: ESTIMATED_KBYTES 7564531
This estimation is made for the NEXT job, and is exactly 1.20 * KBYTES (or +20%, if you prefer). The first time a job runs, there is no statistic to base it on. The next time, when NetBackup works out where to place the image files, it looks at the client name, policy name, and schedule type, and simply reads this value from the most recent file. Imagine what a silly value you could get from a sequence of incremental jobs!

Re: Heartbeat via SAN

I had tried this under Solaris, with an Emulex HBA, on VCS 4.1: sending GAB/LLT packets over lpfc0 configured as an ordinary Ethernet NIC. I just modified /etc/llttab to point to lpfc:0 instead of eri:0. Not even UDP. The same fibre was used for accessing data disks on the array, through a Brocade switch to which all nodes were connected. That works ... but I didn't push it into production, nor did I analyse the mix of data and VCS traffic. However, it seems these GAB/LLT packets could cause problems with other hardware on the SAN, such as tape libraries. That's what I had been told. I did all the tests on jeopardy, low-priority heartbeat, and I/O fencing. It was fine. Cheers.

Re: unable to expire old catalog tapes -- NB 6.0 MP3

Inconsistency between media formerly assigned to catalog backups and the version 6 EMM database can be checked with:
#nbemmcmd -listmedia -mergetable
Try bpexpdate after this. If it does not work any better, try to assign the media. This is now done by:
#bptm -makedbentry -m <media_id> -den <density> -poolnum <pool_number>
then try bpexpdate again. vmquery works neither for assigning nor for deassigning media. bplabel just writes on the tape but does not add anything to the database.

Re: Offsite Storage - Automated Inventory of Library?

Some options of vmupdate are very useful. You have to be careful between versions 5 and 6; you can get surprises if the master server is not controlling the library. You may know the GUI allows you to specify media ID generation and barcode rules.
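Those barcode rules can also be inspected from the command line. A sketch, assuming the Media Manager `vmrule` command is available on your master (check the command reference for your version):

```shell
# List all barcode rules known to the volume database / EMM
vmrule -listall
```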
In version 5, if you used the command line from a media server, the rules had to be configured locally on that system. From the GUI, UNIX/Java was fine, but Windows used local files. From version 6 there is no problem: all the rules are centralized on the master (EMM). Two options stand out:
- one just reports what changes would be applied
- another says where to put media/volumes that were previously known to be inside the library and are no longer visible ... that volume group allows you to easily find the volumes later (on the shelf, in the vault, on a secondary site ...)
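As a sketch of the two behaviors described above (flag names and values are assumptions to verify against the vmupdate man page for your version; robot type, number, and host are placeholders):

```shell
# Preview only: report what the inventory update WOULD change,
# without applying anything
vmupdate -rt tld -rn 0 -rh robot_host -recommend

# Apply the update; media no longer seen in the library are moved
# into a dedicated volume group (here OFFSITE) so they are easy
# to locate later
vmupdate -rt tld -rn 0 -rh robot_host -use_barcode_rules -outvolgrp OFFSITE
```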