Not Even Superman Can Prevent Cloud Outages
Up in the sky, look: It’s a bird. It’s a plane. It’s Superman! Wait… what’s he doing? Is he saving the world from an evil foe? Is he fortifying the Cloud to prevent another seriously damaging business outage? Not even Superman can do that. Read on to learn more…
upgrade SFHA
Hi All, To configure DR with VVR we have purchased a new licence for InfoScale 7.1. Can we upgrade from SFHA 6.1 to Veritas InfoScale Enterprise 7.1? If not, what is the best method to preserve data when installing the new product? Thanks in advance.
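For reference, upgrades from recent SFHA 6.x releases to InfoScale Enterprise 7.1 are generally driven by the Common Product Installer on the 7.1 media, which preserves existing disk groups and volumes. The sketch below is only an outline with example host names; confirm that your exact 6.1 version and operating system are in the supported upgrade paths in the Veritas InfoScale 7.1 Installation Guide before proceeding.

# From the mounted 7.1 media, run a pre-upgrade check (host names are examples)
./installer -precheck node1 node2

# Start the menu-driven installer and choose the upgrade option when prompted
./installer node1 node2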
What volume name does VVR use for replication?
I'm about to implement volume replication with InfoScale 7.0 on Windows. Currently, InfoScale is only installed on the new DR server and isn't on the production server yet. When I look at the VEA console, select my disk group and click on the Volumes tab, I see all the volumes I have set up. But there is a Volume column and a Volume Name column. For example, volume MailArchive01 has the internal volume name Volume2. For various reasons, the volumes are being created in advance on our DR server, and I understand that VVR will replicate volume-to-volume. But will it be replicating MailArchive01 to MailArchive01, or Volume2 to Volume2? The internal volume names seem to be assigned arbitrarily by the software, so when I install in production there is no guarantee that they will all map to the same internal names. For example, it could be MailArchive01:Volume2 on DR and MailArchive01:Volume3 on Prod. I see that there is an option to "change internal volume name" in the VEA console, but I don't know if this is necessary.
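If you want to compare the two name columns outside the VEA console while planning this, a hedged sketch is below. It assumes the vxprint utility is available on your InfoScale for Windows installation and uses a placeholder disk group name; the exact output columns should be verified against your installed version's command-line reference.

# List the volume records in the disk group (disk group name is an example)
vxprint -g MailArchiveDG -v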
Storage Management for Production Ready Docker
Dear Docker User,

Developers and system administrators around the planet eagerly look forward to connecting and sharing knowledge on what is needed to efficiently build, ship and run applications. The recent edition of DockerCon EU 2015 in Barcelona was the place to do just that, and it also saw Docker announce Docker Universal Control Plane and demonstrate Docker Swarm scaling to 50,000 containers in half a second. Two-thirds of the companies that evaluate Docker adopt it, as per the article and based on my interactions with the Docker community. Most hands-on Docker users believe storage is a key challenge to be solved for stateful applications in production-ready Docker.

(Technology Exhibition)

Kubernetes is a widely used container management platform and is deployed by most of the customers I spoke to. However, Docker users would look at Swarm as an alternative, considering they would not need to learn the kubectl CLI. The demo that showed Docker Swarm scaling to 50,000 containers on 1,000 machines was impressive. It is a testimony to the scaling potential of Docker and offered a tip-of-the-iceberg view of the future of clustering containers.

Virtuozzo, a leading player, showcased its solution for the container migration use case in the Black Belt track. The talk highlighted the complexities of container migration and how those challenges are being solved today, which was very intriguing from a technical standpoint. It also opens new opportunities for storage management to evolve.

Developers spend a majority of their time waiting for tests to run and tracking down and reproducing bugs. One interesting use case discussed was speeding up integration tests by caching database state as a commit and rolling back to it, rather than re-creating the database from scratch every time the tests run. If a developer finds a bug that only manifests when the database is in a certain state, it is difficult to recreate that state. So the use case would be: how do you save the database state for later debugging? It is like creating bookmarks for a development or production database.

One of the important topics at the conference was how to orchestrate persistent storage services for Docker. Veritas recently launched its InfoScale plugin, along with a whitepaper, to guarantee persistence for Docker applications and to help with snapshotting or migrating data volumes anywhere in the datacenter. Please peruse the article to understand the value proposition. This plugin, on Docker 1.8+, will make life easier for developer and operations admins running distributed applications with stateful databases on persistent storage. The latest version of Docker handles storage via volume drivers, and one of the goals for the feature is support for third-party storage hardware with custom behaviors in hyper-converged environments. Better storage integration with Ceph, GlusterFS, ZFS, Veritas InfoScale and Flocker were some of the requests coming from Docker users.

(Meetup with Solomon Hykes, CTO, Docker Inc)

This brings us to an interesting transition point on what the next set of use cases would be for Docker running in production, from the storage perspective, using agile development methodologies. The vision is to have a REST API to invoke advanced storage management features such as scalability, quality of service, snapshots, migration and disaster recovery directly from any container management platform.
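As an illustration of the volume-driver model described above, here is a minimal sketch of how a stateful container consumes storage through a third-party volume plugin on a Docker 1.9-era engine. The driver name "veritas" and the volume and image names are placeholders, not the actual InfoScale plugin syntax; consult the plugin's own documentation for its real driver name and options.

# Create a named volume backed by a third-party volume plugin (driver and names are placeholders)
docker volume create --driver veritas --name maildata

# Run a stateful container against that volume; the data outlives the container
docker run -d --name mail-archive -v maildata:/var/lib/mail myorg/mail-archive:latest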
It will also involve enabling persistent storage for a wide variety of heterogeneous storage, with the customization needed for production environments, from Veritas, the software-defined storage company. In a datacenter with many containers running, it is key to be able to easily identify the storage set associated with each container on a specific host. From the manageability standpoint, I envisage that comprehensive integration with Docker Universal Control Plane and Veritas InfoScale Operations Manager would be valuable in making any relevant technology on Docker widely usable.

In summary, the architects who are considering deploying Docker in production want super-easy, highly resilient storage management and 100% continuous availability, which means zero downtime, and they want applications to have the intelligence to self-heal.

With regards,
Rajagopal Vaideeswaran
Product Owner, Information Availability - Storage
vvr replication not ok .....need help to fix it
Hello, replication is not happening, even though the RVG is enabled/ACTIVE and the rlink is CONNECT/ACTIVE. Below are the details from the primary:

-> vxprint -Pl
Disk group: ossdg
Rlink: to_sec_site_ossrvg
info: timeout=500 packet_size=1452 rid=0.113591
      latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=ossrvg remote_host=sec_site-ossrvg IP_addr=10.52.214.68 port=4145
       remote_dg=ossdg remote_dg_dgid=1444153050.96.secmas1o remote_rvg_version=30
       remote_rlink=to_prisite_ossrvg remote_rlink_rid=0.1263
       local_host=prisite-ossrvg IP_addr=10.52.214.67 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected asynchronous dcm_logging

-> vradmin -g ossdg repstatus ossrvg
Replicated Data Set: ossrvg
Primary:
  Host name:          prisite-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  RVG state:          enabled for I/O
  Data volumes:       20
  VSets:              0
  SRL name:           oss_srl_vol
  SRL size:           300.00 G
  Total secondaries:  1
Secondary:
  Host name:          sec_site-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  Data status:        consistent, behind
  Replication status: logging to DCM (needs dcm resynchronization)
  Current mode:       asynchronous
  Logging to:         DCM (contains 61101728 Kbytes) (SRL protection logging)
  Timestamp Information: N/A

-> vxrvg -g ossdg cplist ossrvg
Name    MBytes  % Log   Started/Completed
----    ------  ------  -----------------
point1  <Checkpoint overflowed>

-> vxrlink -g ossdg status to_sec_site_ossrvg
DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61101728 Kbytes (19%) of the Data Volume(s).

-> vxrlink -g ossdg stats to_sec_site_ossrvg
   Messages        Errors          Flow Control
   --------        ------          ------------
#  Blocks  RT(msec)  Timeout  Stream  Memory  Delays  NW Bytes  NW Delay  Timeout
1  0       0         0        0       0       0       100000    1         10

primas1o{root} #: vxprint -PV
Disk group: ossdg
TY NAME                ASSOC   KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
rl to_sec_site_ossrvg  ossrvg  CONNECT  -       -       ACTIVE  -       -
rv ossrvg              -       ENABLED  -       -       ACTIVE  -       -

primas1o{root} #: vxrlink -g ossdg -i 5 status to_sec_site_ossrvg
Mon Nov 30 17:24:01 2015
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443008 Kbytes (19%) of the Data Volume(s).
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443008 Kbytes (19%) of the Data Volume(s).
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443040 Kbytes (19%) of the Data Volume(s).
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443232 Kbytes (19%) of the Data Volume(s).
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443232 Kbytes (19%) of the Data Volume(s).
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink to_sec_site_ossrvg. DCM contains 61443264 Kbytes (19%) of the Data Volume(s).
^C
primas1o{root} #:

===================================== Secondary =====================================

-> vxprint -Pl
Disk group: ossdg
Rlink: to_prisite_ossrvg
info: timeout=500 packet_size=1452 rid=0.1263
      latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=ossrvg remote_host=prisite-ossrvg IP_addr=10.52.214.67 port=4145
       remote_dg=ossdg remote_dg_dgid=1327928106.32.secmas1o remote_rvg_version=30
       remote_rlink=to_sec_site_ossrvg remote_rlink_rid=0.113591
       local_host=sec_site-ossrvg IP_addr=10.52.214.68 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected

-> vradmin -g ossdg repstatus ossrvg
Replicated Data Set: ossrvg
Primary:
  Host name:          prisite-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  RVG state:          enabled for I/O
  Data volumes:       20
  VSets:              0
  SRL name:           oss_srl_vol
  SRL size:           300.00 G
  Total secondaries:  1
Secondary:
  Host name:          sec_site-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  Data status:        consistent, behind
  Replication status: logging to DCM (needs dcm resynchronization)
  Current mode:       asynchronous
  Logging to:         DCM (contains 61442560 Kbytes) (SRL protection logging)
  Timestamp Information: N/A

-> vxrvg -g ossdg cplist ossrvg
The cplist command can only be used on a primary rvg

-> vxrlink -g ossdg status to_prisite_ossrvg
The status command can only be used on a primary rlink

-> vxrlink -g ossdg stats to_prisite_ossrvg
   Messages          Errors          Flow Control
   --------          ------          ------------
#          Blocks  RT(msec)  Timeout  Stream  Memory  Delays  NW Bytes  NW Delay  Timeout
334089805  0       0         1927960  0       0       0       100000    1         10

Also, I ran the command below to check rlink bandwidth usage, and it shows 0% utilised:

primas1o{root} #: vrstat -R
Mon Nov 30 17:43:49 2015
Replicated Data Set ossrvg:
Data Status:
  sec_site-ossrvg: DCM contains 61543520 Kbytes.
Network Statistics:
   Messages          Errors          Flow Control
   --------          ------          ------------
           #  Blocks  RT(msec)  Timeout  Stream  Memory  Delays  NW Bytes  NW Delay  Timeout
primas1o   1  0       0         0        0       0       0       100000    1         10

Mon Nov 30 17:43:59 2015
Replicated Data Set ossrvg:
Data Status:
  sec_site-ossrvg: DCM contains 61543552 Kbytes.
Network Statistics:
   Messages          Errors          Flow Control
   --------          ------          ------------
           #  Blocks  RT(msec)  Timeout  Stream  Memory  Delays  NW Bytes  NW Delay  Timeout
primas1o   0  0       0         0        0       0       0       100000    1         10

Bandwidth Utilization 0.00 Kbps.

I am wondering: if the rlink is in CONNECT/ACTIVE state, why is it not replicating?

Regards,
S
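The repstatus output above already points at the likely cause: the rlink is connected, but the checkpoint has overflowed and the SRL protection has switched the RVG to DCM logging, so VVR is waiting for an explicit resynchronization before it resumes sending data. One common recovery path, using the ossdg/ossrvg names from the output above, is sketched here; confirm the steps against the VVR Administrator's Guide for your release before running them.

# From the primary, start DCM resynchronization; normal replication resumes once the DCM drains
vradmin -g ossdg resync ossrvg

# Watch the DCM drain (the Kbytes figure should decrease over time)
vxrlink -g ossdg -i 5 status to_sec_site_ossrvg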
How to re-create RVG
I have mistakenly deleted 2 disks that were used as the cache volume for the secondary RVG, and the data on the secondary became inaccessible. Because of that I went ahead and deleted the disk group, which succeeded. Now I am trying to create the RVG group again. For that I should delete the RVG on both the main and DR sites, which seems impossible, and even dissociating the SRL volume from the primary RVG seems impossible. Any hint, please? BR,
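One common way to tear down the replicated data set from the primary, so that the RVGs can be re-created cleanly, is sketched below. It assumes the RDS is still manageable from the primary and uses placeholder names (ossdg, ossrvg, sec_host); the exact sequence should be confirmed against the VVR Administrator's Guide for your release.

# Stop replication to the secondary
vradmin -g ossdg stoprep ossrvg sec_host

# Remove the secondary RVG from the RDS, then remove the primary RVG itself
vradmin -g ossdg delsec ossrvg sec_host
vradmin -g ossdg delpri ossrvg

If the secondary disk group no longer exists, delsec may fail; in that case the stale rlink on the primary may need to be detached and removed manually (vxrlink det, then vxedit rm) before the primary RVG can be deleted, which again is worth verifying against the documentation for your version.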
When will SRL be played for initial sync using backup/restore
Understood all the points in https://www-secure.symantec.com/connect/forums/clustering-one-physical-and-one-virtual-node-gco but the resync using backup/restore is still confusing.

"When using checkpoints, you take a backup of the data on the Primary and physically ship the backup media to the Secondary location, and restore the backup on the Secondary. When you start the backup, mark the starting point by using the checkstart operation on the Primary. When you end the backup, mark the ending point by using the checkend operation on the Primary. While the backup and restore are going on, updates are written to the Replicator Log volume. To bring the Secondary data up-to-date, restore the block-level backup. After the restore is complete, start replication to the Secondary with checkpoint, using the same checkpoint name that you had specified for the checkstart operation on the Primary. The Secondary can be brought up-to-date only if the updates are still present in the Replicator Log volume. Using checkpoints is a multi-step process and therefore needs to be done very carefully."

When will the Replicator Log be played? Also, once the logs have been committed, will the pair notify us that those logs have been played?
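Broadly, the Replicator Log is played back as soon as replication to the Secondary is started with the checkpoint: VVR then sends every update written since the checkstart mark, provided the SRL has not overflowed. There is no separate notification when replay finishes; the usual confirmation is the replication status showing the Secondary as consistent and up-to-date. A minimal sketch of the checkpoint workflow, with placeholder names (ossdg, ossrvg, sec_host, ckpt1), is below; verify it against the VVR Administrator's Guide for your version.

# On the Primary, mark the start of the block-level backup
vxrvg -g ossdg -c ckpt1 checkstart ossrvg

# ... take the backup, ship the media, and restore it on the Secondary ...

# On the Primary, mark the end of the backup
vxrvg -g ossdg checkend ossrvg

# Start replication from that checkpoint; the Replicator Log is replayed from ckpt1 onward
vradmin -g ossdg -c ckpt1 startrep ossrvg sec_host

# Monitor until the Secondary reports consistent, up-to-date
vradmin -g ossdg repstatus ossrvg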
need to fix vvr replication
Earlier, node1 was the primary and node2 was the secondary. Node1 crashed and node2 became primary. Now we have brought node1 up, and below is the status of the rlink. Please provide a fix.

root@node1:# vradmin -g ossdg -l repstatus ossrvg
Replicated Data Set: ossrvg
Primary:
  Host name:          punjab_core-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  RVG state:          enabled for I/O
  Data volumes:       24
  VSets:              0
  SRL name:           oss_srl_vol
  SRL size:           400.00 G
  Total secondaries:  1
Config Errors:
  delhi_core-ossrvg: Primary-Primary configuration

root@node1:# vxprint -PV
Disk group: ossdg
TY NAME                  ASSOC   KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
rl to_delhi_core_ossrvg  ossrvg  ENABLED  -       -       PAUSE   -       -
rv ossrvg                -       ENABLED  -       -       ACTIVE  -       -

root@node1:/ericsson/hagcs/etc# vxprint -Pl
Disk group: ossdg
Rlink: to_delhi_core_ossrvg
info: timeout=500 packet_size=1452 rid=0.1329
      latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=PAUSE synchronous=off latencyprot=off srlprot=override
assoc: rvg=ossrvg remote_host=delhi_core-ossrvg IP_addr=10.161.21.197 port=4145
       remote_dg=ossdg remote_dg_dgid=1374151282.15.node1 remote_rvg_version=30
       remote_rlink=to_punjab_core_ossrvg remote_rlink_rid=0.7142
       local_host=punjab_core-ossrvg IP_addr=10.161.54.197 port=4145
protocol: UDP/IP
flags: write enabled attached primary_paused consistent disconnected asynchronous
root@node1:/ericsson/hagcs/etc#

=========================================

root@node2:/ericsson/hagcs/etc# vradmin -g ossdg -l repstatus ossrvg
Replicated Data Set: ossrvg
Primary:
  Host name:          delhi_core-ossrvg
  RVG name:           ossrvg
  DG name:            ossdg
  RVG state:          enabled for I/O
  Data volumes:       24
  VSets:              0
  SRL name:           oss_srl_vol
  SRL size:           400.00 G
  Total secondaries:  1
Config Errors:
  punjab_core-ossrvg: Primary-Primary configuration

root@node2:/ericsson/hagcs/etc# vxprint -PV
Disk group: ossdg
TY NAME                   ASSOC   KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
rl to_punjab_core_ossrvg  ossrvg  ENABLED  -       -       ACTIVE  -       -
rv ossrvg                 -       ENABLED  -       -       ACTIVE  -       -

root@node2:/ericsson/hagcs/etc# vxprint -Pl
Disk group: ossdg
Rlink: to_punjab_core_ossrvg
info: timeout=500 packet_size=1452 rid=0.7142
      latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=ossrvg remote_host=punjab_core-ossrvg IP_addr=10.161.54.197 port=4145
       remote_dg=ossdg remote_dg_dgid=1384451046.29.node2 remote_rvg_version=30
       remote_rlink=to_delhi_core_ossrvg remote_rlink_rid=0.1329
       local_host=delhi_core-ossrvg IP_addr=10.161.21.197 port=4145
protocol: UDP/IP
checkpoint: point1
flags: write enabled attached consistent disconnected asynchronous dcm_logging resync_started
root@node2:/ericsson/hagcs/etc#

Please provide me a fix.
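The "Primary-Primary configuration" error means both nodes currently believe they are the primary: node2 took over while node1 was down, and node1 came back up still carrying its old primary role. One common resolution, assuming node2 (delhi_core) is to remain the primary, is to demote the original primary and then resynchronize it with fast failback. The commands below are a sketch using the names from the output above; because they overwrite node1's data from node2, verify the procedure against the VVR Administrator's Guide before running it.

# On node1 (the old primary): convert its RVG to the secondary role
vxrvg -g ossdg makesecondary ossrvg

# On node2 (the acting primary): resynchronize node1 using fast failback
vradmin -g ossdg fbsync ossrvg

# Confirm the config error is gone and replication is catching up
vradmin -g ossdg -l repstatus ossrvg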
Exploring the Solid-State Array Landscape with VERITAS
Anyone paying attention to storage trends in the datacenter will be acutely aware of the increasing presence of all-flash arrays (AFAs). As storage and IT administrators become more comfortable with the unique characteristics of flash compared to traditional spinning disk, AFAs are being targeted at more applications and workloads.