Thoughts and takeaways from Vision 2017
Wow. The second Veritas Vision conference is over. And what a week it was: press conferences, technology announcements, great keynotes, surprising demos, valuable breakout sessions, and a list of sponsors that any conference would be lucky to have. To name a few: Microsoft, Google, IBM, and Oracle. So, what was the takeaway? That Veritas is maniacally focused on helping organizations better utilize an asset that exists in every one of their environments: secondary data. As Veritas CPO Mike Palmer put it on stage, “secondary data is not only the most under-utilized asset in the enterprise, it’s also the biggest opportunity.”

For more than two decades Veritas has led the market in enterprise backup and recovery, software-defined storage, and eDiscovery/archiving. However, as the world continues to both create and depend on more and more data, customers clearly need a better way to harness the power of their information. The largest data estate in our customers' environments is their secondary data. By eliminating the complexity of point products, making storage smarter and more scalable, and giving customers unprecedented visibility into their data, they can not only protect it better, reduce risk more effectively, and lower the total cost of managing it all, but also unlock the value that lies therein to uncover new insights that drive the business forward.

It was also clear that as our customers continue to embrace new architectures like cloud, OpenStack, and containers, and new workloads like NoSQL and Big Data, Veritas remains committed to supporting them with industry-leading technology, innovation, and engineering prowess. I’m proud of what I both saw and helped create this year at Veritas Vision. There was a lot of excitement and optimism regarding Veritas’ outlook and vision for the future.
The Cube was onsite hosting a number of interesting conversations, including one with me and Ian on what we aim to solve for customers with 360 data management for the multi-cloud: The Challenges We Aim To Solve.

As part of a community give-back, we got to make capes for Capes for Heroes (http://capes4heroes.com), which was very special.

And we got some great coverage on our announcements as well:
- CRN: Veritas Vision: Veritas Enhances Multi-Cloud Data Management, Strengthens Ties With Microsoft Azure Cloud
- TechTarget: Veritas 360 data management embraces cloud, dedupe, NoSQL

It was also great to see the positive comments on social from our AR/PR community:
- "The simple truth about public cloud: In the end, it is still your responsibility for the app and data, not the provider. #vtasvision" — @edwinyuen
- "Kudos @Lylucas on very smart keynote - strong narrative, clear news, competitive digs clever/tasteful, showed momentum #VTASvision" — jbuff
- "Most clever monologue on #dataprotection ever by Mike Palmer #VTASvision" — jbuff
- "Still a good crowd at day 2 #vtasvision." — @krmarko
- "Veritas senior execs take questions. Useful perspectives on selling the new strategy: focused on particular biz data problems. #vtasvision" — @krmarko
- "With #Veritas in #LasVegas to resolve the '#Information' #Tech conundrum to drive value and avert risk. #MultiCloud #GDPR #DX #VtasVision" — @sabhari_idc
- "Yes, we're closer to #Hollywood and it had a part it the dramatic #BackToTheFuture theme intro into #data management. #NoFate #VtasVision" — @sabhari_idc
- "The modern #data protection capability matrix demonstrated by #Veritas at #VtasVision. #BaaS #DRaaS #Containers #MultiCloud #BigData #DX" — @sabhari_idc
- "Veritas can reach APAC customers via GDPR zd.net/2fBezbU by @eileenscyu" — @ZDNet

Vision was a success. But now it’s back to the grind: helping customers solve real challenges with comprehensive solutions, and a vision to help them better utilize their secondary data.
Keep a pulse on what’s going on with Veritas by following us on twitter at: @veritastechllc. Alex

Symantec Netbackup 7 licensing
Hi All, I'd like to know if I'm getting this correctly. I'm planning to get NetBackup 7 Starter Edition. According to Table 2 (the Edition comparison) in b-symc_netbackup_7_core_DS_12995286-2.en-us.pdf (http://eval.symantec.com/mktginfo/enterprise/fact_sheets/b-symc_netbackup_7_core_DS_12995286-2.en-us.pdf), the Starter Edition can have 5 servers (supported clients). Does this mean I can have the following backup items:
1x ESXi 4 server (using vStorage API for inline deduplication - Changed Block Tracking)
1x Solaris SPARC + Oracle DB 10gR2
1x Linux
1x Windows Server + SharePoint 2007 (granular restore)
1x Windows Server + Exchange Server 2007 (granular restore)
all of the above with bare metal restore capability, since it is said that server clients, tape drives, and database agents are included? Yes, I understand that deduplication is not included here, since it would need another license. I'm getting confused about which product I should get :-| Would be good if anyone could clarify this, please?

NetBackup Enhancements for Oracle VLDBs (Very Large Databases)
In this day and age, data tends to only increase in size. For our customers with ever-growing Oracle databases (DBs), timely backups and restores are a challenge. We have many existing features within NetBackup to protect Oracle, and now we have added a solution for Oracle Very Large DBs (VLDBs).

Figure 1. Oracle policy option to select multiple MSDP storage units

The designation of “Very Large” is arbitrary, but a widely accepted definition is a database that is too large for data protection to succeed in the desired time window. Discussions of Oracle DB protection tend to focus on completing backups faster, while the ability to restore within the expected time frame is often ignored, resulting in missed Recovery Time Objectives (RTOs). This new Oracle policy option allows segmenting the Oracle backup across multiple NetBackup MSDP (Media Server Deduplication Pool) storage units, with the ability to choose which storage units are used (see Figure 1). For a single stream, this results in a backup image that appears in the catalog as a single image, with the image’s fragments tracked in the catalog on each selected storage unit. A single backup ID makes tracking and reporting streamlined.

You can also increase the number of parallel streams. Allowing simultaneous writes to multiple disk pools increases the efficiency of the streaming backup. The number of storage units that gives the best results will vary from one database to the next, so take advantage of multiple parallel streams to tune your Oracle backup. Storage units are linked to disk pools, and the most effective use of this option leverages multiple storage units linked to multiple unique disk pools hosted on different media servers. This solution also works with NetBackup Flex Scale, where the nodes are managed independently and only one storage unit is presented to NetBackup.
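Because each added stream lands on one of the selected storage units, a quick bit of arithmetic helps when sizing. The sketch below is illustrative only: the even split of streams across pools is an assumption for rough planning, not guaranteed NetBackup scheduling behaviour.

```shell
# Illustrative sizing only: estimate how many concurrent streams land on
# each MSDP disk pool if NetBackup spreads them evenly (an assumption,
# not guaranteed scheduling behaviour).
streams_per_pool() {
    # $1 = parallel streams configured in the policy
    # $2 = number of MSDP storage units selected
    echo $(( ($1 + $2 - 1) / $2 ))   # ceiling division
}
```

For example, 12 parallel streams over 4 selected storage units put roughly 3 streams on each pool; compare that figure against each disk pool's Maximum I/O streams setting before raising the stream count.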
There will be affinity for the database backup to write the same file, or piece, of the database backup to the same MSDP storage unit, so do not change this configuration often. Considerations include: the number of nodes in a RAC (Real Application Cluster), the number of instances, and the number of parallel streams.

As more parallel streams and storage units are configured for the policy, backup times can improve substantially, but the configuration must be aligned to your backup and restore goals and your existing infrastructure to avoid creating performance bottlenecks. To meet the desired goals, there may be a need for more MSDP pools rather than a few large pools each with a single node. Additionally, consider adding more load-balancing media servers to share data-movement responsibilities.

This solution can also use Storage Lifecycle Policies (SLPs) as the multiple storage targets, enabling you to maintain your current business continuity and DR (Disaster Recovery) plans for your Oracle data. When selecting multiple storage units, you would select a different SLP for each destination. If the desired SLP is not shown, confirm that it is using supported MSDP storage units. It is key that the SLPs for this use case all be designed and configured with the same goals, including retention levels, but with different source and target storage units. For example, if splitting the Oracle backup across 2 SLPs, each SLP would use a different backup storage unit and a different secondary-operation storage unit. In the case of replications (AIR, Auto Image Replication), the replication relationship between each on-prem MSDP and its target MSDP needs to be under the same target primary server (see Figure 2). It is possible to replicate many-to-one, but this would remove the benefit of segmenting the image across multiple disk pools.
If the replication targets of only a portion of the database went to a different NetBackup domain, or were not retained for the same period, the image would not be complete and a restore would not be possible.

Figure 2. SLP configuration requirement for multiple MSDP storage units with replication

When the need for a restore arises, NetBackup takes advantage of multiple sources to read and stream back to the destination, with each disk pool reading a piece of the database image simultaneously. This results in a faster restore time for a large amount of data. The disk pool's Maximum I/O streams setting will need to be adjusted according to the peak performance of the disks and the potential number of restore and backup streams; this setting can be changed dynamically, without a restart of services, to meet changing needs.

Consider, also, the impact of load-balancing media servers in such a configuration. If all media servers have credentials to all storage servers, any of them can potentially perform the requested read action during a replication or restore operation. When some media servers are already busy doing other jobs, such as backups, NetBackup will choose the next-least-busy media server to perform the requested action. For most situations, it is best to give only the media servers needed for these repetitive actions access to the disk pools in use. Plan disk pool deployment to maximize the throughput from the Oracle clients to multiple media servers and their NetBackup deduplication volumes.

Take advantage of this new parallelism to improve throughput for both backup and recovery of your Very Large Oracle databases and meet your strict RTOs.

Netbackup Questions for level 1 and level 2
Hi All, kindly share NetBackup administrator interview questions for L1 and L2 engineers (NetBackup 6.5 and 7.0). If anyone has a link, kindly share. Also, please share the documentation links for NetBackup 7.0. Thanks and Regards, Sagar

In Netbackup Realtime, when I discover a host, it reports an error
In NetBackup Realtime, when I discover a host, it reports an error. First I added a host in the Realtime console, then I installed the NetBackup Realtime agent on a Windows 2003 x64 server. It runs Oracle 10gR2, SFHA/DR and VVR 5.1, and NetBackup client 7.5. I have an Emulex 8Gb HBA and IBM DS3512 storage for the Oracle server. Finally, in the NetBackup Realtime GUI console, I clicked 'discover host' and got this error: "V-265-330 Failed to query/add HBA assets for host cwdb01.cw.com". Can anyone help?

Oracle Backup Issue | NBU ver 6.5
Hi Gurus, I recently had an issue with an Oracle backup. Sorry if the information given is insufficient, but I will try my best. The current structure (backup selection list in brackets):
1st: DB Backup runs (/usr/local/nb/scripts/oracle_rman_hot.uaedb1)
2nd: RMAN Logs (backup selection list is empty)
3rd: DB Files ( / )
4th: OS Files ( / )

When an Oracle backup is triggered, the others get triggered automatically until it completes. Each of these is in a different policy, e.g.:
1. DB Backup = (DB_ORA_UAE)
2. RMAN Logs = (UAE_RMAN_LOGS_DW)
3. DB Files = (DB_Files)
4. OS Files = (FS_UAE_UNIX_1)

My concern is: how do I make the RMAN log, DB files, and OS files jobs run automatically upon triggering the DB backup? Is it by script? On another note, the DBA team informed me that all the backups are in order, but I can't seem to see that in NetBackup. Thank you, Gurus!

OEL 6.1 with SF 5.1 SP1 PR2 P1 install doesn't work
Hello, I tried to install SF 5.1 SP1 PR2 on Oracle Enterprise Linux 6.1 and got this error:

Logs are being written to /var/tmp/installer-201109191328Uoa while installer is in progress
Starting SF: 90% _______________ Estimated time remaining: 0:05 10 of 11
Performing SF configuration ............... Done
Starting vxdmp ......................... Done
Starting vxio .......................... Done
Starting vxspec ........................ Done
Starting vxconfigd ..................... Done
Starting vxesd ......................... Done
Starting vxrelocd ...................... Done
Starting vxconfigbackupd ............... Done
Starting vxdbd ......................... Done
Starting vxodm .........................
Done
Veritas Storage Foundation Startup did not complete successfully
vxdmp failed to start on edev
vxio failed to start on edev
vxspec failed to start on edev
vxconfigd failed to start on edev
vxesd failed to start on edev
vxodm failed to start on edev
It is strongly recommended to reboot the following systems: edev
Execute '/sbin/shutdown -r now' to properly restart your systems
After reboot, run the '/opt/VRTS/install/installsf -start electra-dev' command to start Veritas Storage Foundation
installer log files, summary file, and response file are saved at:

I found the following article about the bug: https://sort.symantec.com/patch/detail/5186/ so I upgraded VxVM with the patch vm-rhel6_x86_64-5.1SP1PR2P1 and got the following error message:

[root@edev rpms]# rpm -Uvh VRTSvxvm-5.1.120.100-SP1PR2P1_RHEL6.x86_64.rpm
Preparing... ########################################### [100%]
stopping vxrelocd
stopping vxconfigbackupd
stopping vxnotify
1:VRTSvxvm ########################################### [100%]
Installing file /etc/init.d/vxvm-boot
creating VxVM device nodes under /dev
Error in loading module "vxdmp". See documentation.
WARNING: Unable to load new drivers. You should reboot the system to ensure that new drivers get loaded.

It doesn't work after reboot either. I also found a mismatch between the kernel version the SF kernel modules were built for and the Oracle factory-default kernel version.
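The module/kernel mismatch described above can be checked mechanically. A small sketch (assuming the modules sit in a directory such as /etc/vx/kernel with `.ko.<kernel-release>` filename suffixes, as on this system):

```shell
# Succeed only if at least one module in the given directory was built
# for the given kernel release (compare with `uname -r`).
kernel_modules_match() {
    # $1 = module directory (e.g. /etc/vx/kernel)
    # $2 = kernel release string
    ls "$1" 2>/dev/null | grep -q "\.ko\.$2\$"
}
```

Here, `kernel_modules_match /etc/vx/kernel "$(uname -r)"` would fail: the modules were built for the RHEL 6 kernel (2.6.32-71.el6.x86_64) while the machine runs the UEK kernel (2.6.32-100.34.1.el6uek.x86_64), which is consistent with the drivers refusing to load.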
# pwd
/etc/vx/kernel
[root@electra-dev kernel]# ls
dmpaaa.ko.2.6.32-71.el6.x86_64   dmpap.ko.2.6.32-71.el6.x86_64        dmpnetapp.ko.2.6.32-71.el6.x86_64       vxfs.ko.2.6.32-71.el6.x86_64
dmpaa.ko.2.6.32-71.el6.x86_64    dmpCLARiiON.ko.2.6.32-71.el6.x86_64  dmpsun7x10alua.ko.2.6.32-71.el6.x86_64  vxio.ko.2.6.32-71.el6.x86_64
dmpalua.ko.2.6.32-71.el6.x86_64  dmpEngenio.ko.2.6.32-71.el6.x86_64   dmpsvc.ko.2.6.32-71.el6.x86_64          vxodm.ko.2.6.32-71.el6.x86_64
dmpapf.ko.2.6.32-71.el6.x86_64   dmphuawei.ko.2.6.32-71.el6.x86_64    fdd.ko.2.6.32-71.el6.x86_64             vxportal.ko.2.6.32-71.el6.x86_64
dmpapg.ko.2.6.32-71.el6.x86_64   dmpjbod.ko.2.6.32-71.el6.x86_64      vxdmp.ko.2.6.32-71.el6.x86_64           vxspec.ko.2.6.32-71.el6.x86_64

# uname -a
Linux edev 2.6.32-100.34.1.el6uek.x86_64 #1 SMP Wed May 25 17:46:45 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

Is this important or not? How can I build the right SF kernel module version for OEL 6.1?

# rpm -qa | grep VRTS
VRTSspt-5.5.000.006-GA.noarch
VRTSvxfs-5.1.120.000-SP1PR2_RHEL6.x86_64
VRTSvxvm-5.1.120.100-SP1PR2P1_RHEL6.x86_64
VRTSperl-5.10.0.0-RHEL6.0.x86_64
VRTSob-3.4.291-0.i686
VRTSdbed-5.1.120.000-SP1PR2_RHEL6.i686
VRTSsfmh-3.1.429.0-0.i686
VRTSvlic-3.02.52.000-0.x86_64
VRTSaslapm-5.1.120.000-SP1PR2_RHEL6.x86_64
VRTSodm-5.1.120.000-SP1PR2_RHEL6.x86_64

Thank you in advance, xid1234

Complete Backup solution for Solaris 10 server (BESR equivalent for non Windows OS).
Hi All, I'd like to know if there is a complete backup solution for a Solaris 10 environment that backs up the whole system, like Symantec Backup Exec System Recovery 2010 does on Windows Server (complete disk image backup on the fly / hot backup). At the moment I have realized that using Backup Exec 2010 + the RALUS agent, it can only back up files, and not even open files :-( which makes it useless when disaster strikes. My goal is to be able to restore the server from the backup drive (image-deployment style) when there is a hardware failure. Any ideas and suggestions please? Thanks.

SFHA Solutions 6.2: Symantec Storage plug-in for OEM 12c
The Symantec Storage plug-in for OEM 12c enables you to view and manage Storage Foundation and VCS objects through the Oracle Enterprise Manager 12c (OEM) graphical interface. It works with the Symantec Storage Foundation and High Availability 6.2 product suite. The Symantec Storage plug-in allows you to:
SmartIO: manage Oracle database objects using SmartIO.
Snapshot: create point-in-time copies (Storage Checkpoint, Database FlashSnap, Space-Optimized Snapshot, and FileSnap) of Oracle databases using SFDB features.
Cluster: view cluster-specific information.
You can get the plug-in by downloading the attached file. For more information about installing and using the plug-in, download the attached Application Note. Terms of use for this information are found in Legal Notices.

bpflist returns "no entity was found" for existing backup id
Background: I have an Oracle 10gR2 database that runs on a Solaris 10 server. Backups go through NetBackup 7.1, which runs on a Windows server. When I run the bpimmedia command for my production Oracle database, I see information like this:

bpimmedia -policy ORA_PASPROD -L
Backup-ID Policy Type RL Files C E T PC Expires
Copy Frag KB Type Density FNum Off Host DWO MPX Expires RL MediaID
------------------------------------------------------------------------------------------------
st31bora01_135409264 ORA_PASPRO UBAK 0 1 N N R 1 01:50 12/05/2012
 1 1 1075488 Disk - - - inf-srv17 - N 01:50 12/05/2012 0 @aaaac
 2 1 1075488 Disk - - - inf-srv17 - N 01:50 12/05/2012 0 @aaaae
 3 1 1075488 RMed hcart 329 23495432 inf-srv17 2 N 01:50 12/05/2012 0 APA159
...
st31bora01_135262369 ORA_PASPRO UBAK 1 1 N N R 3 01:48 12/12/2012
 3 1 102432 RMed hcart 78 8086718 inf-srv17 1 N 01:48 12/12/2012 3 APA208

On the first backup ID, I have a copy on the local DataDomain (Media ID @aaaac), a copy on the DataDomain at the disaster recovery site (Media ID @aaaae), and a copy on physical tape (Media ID APA159). For the second backup ID shown, I see only one copy, on physical tape (Media ID APA208).

I need to be able to see the actual file(s) within these backups. I have searched on this issue and found replies that reference the "bpflist" command. I can run the command without specifying a backup ID and get about 900 lines returned. Unfortunately, when I try to specify a backup ID, I consistently get "no entity was found". The following commands were run from D:\Program Files\Veritas\NetBackup\bin\admincmd on my NetBackup 7.1 admin server.

-- Show the command works
bpflist -client st31bora01
FILES 8 4 0 1354093506 2 st31bora01 ORA_PASPROD st31bora01_1354093506 - *NULL* 1 0 unknown unknown 0 0 pasprod 1 28049408 33 130 0 1 0 0 -1 /PASPROD_c-1968089396-20121128-01 33200 oraprod dbaprod 28049408 1354093500 1354093500 1354093500 1 0 0 15 13 0 0 3 1354093500 1 4 Oracle Database Oracle Backup ...
and so on for 900 lines ...

-- Try client and backupid
bpflist -client st31bora01 -backupid st31bora01_135409264
no entity was found

-- Try policy and backupid
bpflist -policy ORA_PASPROD -backupid st31bora01_135409264
no entity was found

-- Try backupid with some dates
bpflist -backupid st31bora01_135409264 -d 01/01/2010 00:00:00 -e 11/28/2012 09:08:00
no entity was found

-- Maybe the dates reference the expiry time; put the end 2 years into the future
bpflist -backupid st31bora01_135409264 -d 01/01/2010 00:00:00 -e 11/28/2014 09:08:00
no entity was found

-- Try adding policy type of Oracle
bpflist -backupid st31bora01_135409264 -d 01/01/2010 00:00:00 -e 11/28/2014 09:08:00 -pt Oracle
no entity was found

-- Policy type of Oracle and client but no dates
bpflist -backupid st31bora01_135409264 -client st31bora01 -pt Oracle
no entity was found

-- Put the dates back in
bpflist -client st31bora01 -backupid st31bora01_135409264 -d 01/01/2010 00:00:00 -e 11/28/2014 09:08:00 -pt Oracle
no entity was found

-- Try just the -d date that exactly matches the date from the bpimmedia command
bpflist -backupid st31bora01_135409264 -client st31bora01 -d 12/05/2012
invalid command parameter

-- "invalid command parameter", weird

So I'm stumped. What does it take to get bpflist to work? Ken
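For anyone hitting the same wall, one detail worth noticing in the output above (an observation, not a confirmed diagnosis): the FILES records carry 10-digit timestamps (st31bora01_1354093506), while the bpimmedia -L Backup-ID column shows only 9 digits (st31bora01_135409264), so the ID being passed to -backupid may be a truncated display value. A small sketch for pulling the full IDs out of unfiltered bpflist output (the field position is taken from the sample FILES line above; verify it against your own NetBackup version):

```shell
# Print the unique backup IDs present in raw `bpflist -client <host>` output.
# In the sample FILES record above, the backup ID is the 9th field.
extract_backup_ids() {
    awk '$1 == "FILES" { print $9 }' | sort -u
}
# e.g. bpflist -client st31bora01 | extract_backup_ids | grep '^st31bora01_13540'
```

Any full ID printed this way can then be fed back to `bpflist -backupid` unmodified.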