Veritas Presentation at the Data Storage Innovation Conference 2015
This year Rajagopal Vaideeswaran and I had the pleasure of representing Veritas at the Data Storage Innovation Conference in Santa Clara. This was the second edition of the conference, and it offered great content and trend insights from a range of storage vendors. As we could see at the conference, the future of storage is about commoditization and software-defined infrastructure.

Software Defined Storage at the Speed of Flash
In another blog entry, Symantec and Intel introduced a Solution Overview for a storage management architecture built on Software Defined Storage and Intel PCIe drives with NVMe support. The proposed solution offered better performance at a fraction of the cost of all-flash arrays.

Thoughts and takeaways from Vision 2017
Wow. The second Veritas Vision conference is over. And what a week it was. Press conferences, technology announcements, great keynotes, surprising demos, valuable breakout sessions, and a list of sponsors that any conference would be lucky to have, to name a few: Microsoft, Google, IBM, and Oracle.

So, what was the takeaway? That Veritas is maniacally focused on helping organizations better utilize the asset that exists in each of their environments known as secondary data. As Veritas CPO Mike Palmer noted on stage, "secondary data is not only the most under-utilized asset in the enterprise, it's also the biggest opportunity."

For more than two decades Veritas has led the market in enterprise backup and recovery, software-defined storage, and eDiscovery/archiving. However, as the world continues to both create and depend on more and more data, a better way to harness the power of that information is clearly needed. The largest data estate in our customers' environments is their secondary data. By eliminating complexity from point products, making storage smarter and more scalable, and giving customers unprecedented visibility into their data, they can protect it better, reduce risk more effectively, and lower the total cost of managing it all, while unlocking the value that lies within to uncover new insights that drive business forward. And as our customers continue to embrace new architectures like cloud, OpenStack, and containers, or new workloads like NoSQL or Big Data, the continued commitment to support them with industry-leading technology, innovation, and engineering prowess was also clear.

I'm proud of what I both saw and helped create this year at Veritas Vision. There was a lot of excitement and optimism regarding Veritas' outlook and vision for the future. theCUBE was onsite hosting a number of interesting conversations, including one with Ian and me on what we aim to solve for customers with 360 data management for the multi-cloud:

The Challenges We Aim To Solve:

As part of a community give-back, we got to make capes for Capes for Heroes (http://capes4heroes.com), which was very special.

And we got some great coverage on our announcements as well:

CRN: Veritas Vision: Veritas Enhances Multi-Cloud Data Management, Strengthens Ties With Microsoft Azure Cloud
TechTarget: Veritas 360 data management embraces cloud, dedupe, NoSQL

It was also great to see the positive comments on social from our AR/PR community:

The simple truth about public cloud: In the end, it is still your responsibility for the app and data, not the provider. #vtasvision — @edwinyuen
Kudos @Lylucas on very smart keynote - strong narrative, clear news, competitive digs clever/tasteful, showed momentum #VTASvision — jbuff
Most clever monologue on #dataprotection ever by Mike Palmer #VTASvision — jbuff
Still a good crowd at day 2 #vtasvision. — @krmarko
Veritas senior execs take questions. Useful perspectives on selling the new strategy: focused on particular biz data problems. #vtasvision — @krmarko
With #Veritas in #LasVegas to resolve the '#Information' #Tech conundrum to drive value and avert risk. #MultiCloud #GDPR #DX #VtasVision — @sabhari_idc
Yes, we're closer to #Hollywood and it had a part in the dramatic #BackToTheFuture theme intro into #data management. #NoFate #VtasVision — @sabhari_idc
The modern #data protection capability matrix demonstrated by #Veritas at #VtasVision. #BaaS #DRaaS #Containers #MultiCloud #BigData #DX — @sabhari_idc
Veritas can reach APAC customers via GDPR zd.net/2fBezbU by @eileenscyu — @ZDNet

Vision was a success. But now it's back to the grind: helping customers solve real challenges with comprehensive solutions, and a vision to help them better utilize their secondary data. Keep a pulse on what's going on with Veritas by following us on Twitter at @veritastechllc.

Alex

Symantec Netbackup 7 licensing
Hi All, I'd like to know if I'm understanding this correctly. I'm planning to get NetBackup 7 Starter Edition. According to b-symc_netbackup_7_core_DS_12995286-2.en-us.pdf (http://eval.symantec.com/mktginfo/enterprise/fact_sheets/b-symc_netbackup_7_core_DS_12995286-2.en-us.pdf), Table 2, the edition comparison, the Starter Edition can have 5 servers (supported clients). Does this mean I can have the following backup items:

1x ESXi 4 server (using vStorage API for inline deduplication - Change Block Tracking)
1x Solaris SPARC + Oracle DB 10gR2
1x Linux
1x Windows Server + SharePoint 2007 (granular restore)
1x Windows Server + Exchange Server 2007 (granular restore)

all of the above with bare metal restore capability, since it is said that server clients, tape drives, and database agents are included? Yes, I understand that deduplication is not included here, since it needs another license. I'm getting confused about which product I should get :-| Would be good if anyone can clarify this, please.

NetBackup Enhancements for Oracle VLDBs (Very Large Databases)
In this day and age, data tends to only increase in size. For our customers with ever-growing Oracle databases (DBs), timely backups and restores are a challenge. We have many existing features within NetBackup to protect Oracle, and now we have added a solution for Oracle Very Large DBs (VLDBs).

Figure 1. Oracle policy option to select multiple MSDP storage units

The designation of "Very Large" is arbitrary, but a widely accepted definition is a database that is too large for data protection to succeed in the desired time window. Oracle DB protection struggles tend to focus on completing backups faster, while the ability to restore within the expected time frame is often ignored, resulting in missed Recovery Time Objectives (RTOs).

This new Oracle policy option allows segmenting the Oracle backup across multiple NetBackup MSDP (Media Server Deduplication Pool) storage units, with the ability to choose which storage units are used (see Figure 1). For a single stream, this results in a backup image that is cataloged as a single image, with the image's fragments tracked in the catalog on each selected storage unit. A single backup ID keeps tracking and reporting streamlined. You can also increase the number of parallel streams: allowing simultaneous writes to multiple disk pools increases the efficiency of the streaming backup.

The number of storage units that gives the best results will vary from one database to the next. Further, take advantage of multiple parallel streams to tune your Oracle backup. Storage units are linked to disk pools, and the most effective use of this option will leverage multiple storage units linked to multiple unique disk pools hosted on different media servers. This solution also works with deployments where the nodes of a single Flex Scale are managed independently and only one storage unit is presented to NetBackup.

There will be affinity for the same file, or piece, of the database backup to be written to the same MSDP storage unit, so do not change this configuration often. Some of the considerations will be:

Number of nodes in a RAC (Real Application Cluster)
Number of instances
Number of parallel streams

As more parallel streams and storage units are configured for the policy, backup times can improve geometrically. This must be aligned with your backup and restore goals and your existing infrastructure to avoid creating performance bottlenecks. To meet the desired goals, you may need to add more MSDP pools rather than keeping a few large pools, each with a single node. Additionally, consider adding more load-balancing media servers to share data movement responsibilities.

This solution can also use Storage Lifecycle Policies (SLPs) as the multiple storage targets, enabling you to maintain your current business continuity and DR (Disaster Recovery) plans for your Oracle data. When selecting multiple storage units, you would select a different SLP for each destination. If the desired SLP is not shown, confirm that it is using supported MSDP storage units. It is key that the SLPs for this use case all be designed and configured with the same goals, including retention levels, and with different source and target storage units. For example, if splitting the Oracle backup across two SLPs, each SLP would use a different backup storage unit and a different secondary operation storage unit.
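One way to sanity-check such a split-SLP setup is from the primary server's command line. The sketch below is illustrative only, assuming two hypothetical SLPs named slp_oracle_a and slp_oracle_b; exact flags and output formats vary by NetBackup version, so verify against the command reference for your release.

```shell
# List storage units, to confirm which MSDP storage unit backs each SLP
/usr/openv/netbackup/bin/admincmd/bpstulist -U

# List MSDP disk pools and their state
/usr/openv/netbackup/bin/admincmd/nbdevquery -listdp -stype PureDisk -U

# Show operations, residences, and retention levels for the two
# hypothetical SLPs the Oracle policy splits its backup across
/usr/openv/netbackup/bin/admincmd/nbstl slp_oracle_a -L
/usr/openv/netbackup/bin/admincmd/nbstl slp_oracle_b -L
```

What you want to see: each SLP pointing at a different MSDP storage unit for its backup operation (and a different one for its secondary operation), with matching retention levels across both.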
In the case of replications (AIR, or Auto Image Replication), the replication relationship between each on-prem MSDP and its target MSDP needs to be under the same target primary server (see Figure 2). It is possible to replicate many-to-one, but this would remove the benefit of segmenting the image across multiple disk pools. If the replication targets of only a portion of the database went to a different NetBackup domain, or were not retained for the same period, the image would not be complete and a restore would not be possible.

Figure 2. SLP configuration requirement for multiple MSDP storage units with replication

When the need for a restore arises, NetBackup takes advantage of multiple sources to read and stream back to the destination, with each disk pool reading a piece of the database image simultaneously. This results in a faster restore time for a large amount of data. The disk pool's Maximum I/O streams setting will need to be adjusted according to the peak performance of the disks and the potential number of restore and backup streams. This setting can be changed dynamically, without a restart of services, to meet changing needs.

Consider, also, the impact of load-balancing media servers in such a configuration. If all media servers have credentials to all storage servers, they can potentially perform the requested read action during a replication or restore operation. In circumstances where some media servers are already busy doing other jobs, such as backups, NetBackup will choose the next least-busy media server to perform the requested action. For most situations, it is best to give only the media servers needed for these repetitive actions access to the disk pools in use. Plan disk pool deployment to maximize throughput from the Oracle clients to multiple media servers and their NetBackup deduplication volumes.

Take advantage of this new parallelism to improve throughput for both backup and recovery of your Very Large Oracle databases and meet your strict RTOs.
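As a closing illustration of the tuning knobs above, here is a hedged shell sketch covering two of them: raising a disk pool's Maximum I/O streams limit, and allocating parallel RMAN channels so the Oracle side actually drives multiple streams. The pool name dp_msdp_a, the stream and channel counts, and the policy name oracle_vldb are all hypothetical, and the SBT library path varies by platform; confirm option names against the documentation for your NetBackup version before use.

```shell
# Raise the Maximum I/O streams limit on a hypothetical MSDP disk pool;
# the change takes effect dynamically, without a service restart
/usr/openv/netbackup/bin/admincmd/nbdevconfig -changedp \
    -stype PureDisk -dp dp_msdp_a -maxiostreams 64

# Drive parallel streams from the Oracle side: allocate multiple RMAN
# channels through the NetBackup SBT library (paths and names illustrative)
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE 'SBT_TAPE'
    PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64';
  ALLOCATE CHANNEL ch2 DEVICE TYPE 'SBT_TAPE'
    PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64';
  SEND 'NB_ORA_POLICY=oracle_vldb';
  BACKUP DATABASE FILESPERSET 1;
}
EOF
```

More channels generally means more simultaneous streams landing on the selected storage units, which is exactly where the multi-MSDP split pays off, provided the Maximum I/O streams setting and the media server count can absorb them.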
Netbackup Questions for level 1 and level 2

Hi All, kindly share NetBackup administrator interview questions for L1 and L2 engineers, covering NetBackup 6.5 and 7.0. If anyone has a link, kindly share. Also, please share links to all the documentation for NetBackup 7.0. Thanks and Regards, Sagar