After mirroring a plex, the ssh service is not running
Hi all, I have some production servers that are running an SFHA cluster. I need to mirror a volume to new storage using:

# vxassist -g DGname mirror VOLname alloc="vxdisk1 vxdisk2 . . . . vxdisk13"

After 190 minutes the ssh service could no longer be accessed. I aborted the process because I was afraid it would affect the other service groups and make the server panic and reboot. Does anyone have a plan to resolve this, for example by mirroring at the subdisk level?
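One possible approach (a sketch only, assuming the vxassist -b background option and the vxtask utility are available in this VxVM release; the task tag "mirVOL" and the disk group/volume names are placeholders): start the plex attach as a tagged background task so its progress can be watched, and pause the copy instead of aborting it if the host starts to struggle.

Start the attach in the background (-b) and give it a task tag (-t):
# vxassist -b -g DGname -t mirVOL mirror VOLname alloc="vxdisk1 vxdisk2 . . . . vxdisk13"

Watch the progress of the copy:
# vxtask -l list
# vxtask monitor mirVOL

Pause the copy if other I/O (or ssh logins) starts to suffer, and resume it off-peak:
# vxtask pause mirVOL
# vxtask resume mirVOL

If ssh is hanging because the attach is saturating the storage paths, pausing the task is usually gentler than aborting it, since a paused attach can be resumed later rather than restarted from scratch.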
DMP default IOPOLICY is different for different kinds of storage

[root@hostname]/> vxdmpadm getattr enclosure EMC0 iopolicy
ENCLR_NAME       DEFAULT        CURRENT
============================================
EMC0             MinimumQ       Adaptive

[root@hostname]/> vxdmpadm getattr enclosure HDS9500-ALUA0 iopolicy
ENCLR_NAME       DEFAULT        CURRENT
============================================
HDS9500-ALUA0    Round-Robin    Single-Active

[root@hostname]/>

When I look at the DMP iopolicy for different storage arrays, I see that the default I/O policy is different. How is this default value set, and is there any documentation on it? Also, are there any recommendations from the storage vendor depending on the array's host usage and throughput?
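A sketch of how to inspect and override the policy, assuming a current SF/VxVM release (the enclosure names are taken from the output above; the chosen policy values are examples, not vendor recommendations). The default generally follows the array type claimed by the ASL/APM, which is why an active/active EMC enclosure and an ALUA HDS enclosure show different defaults, and the current value can be set per enclosure:

Show the array type each enclosure was claimed as:
# vxdmpadm listenclosure all

Change the I/O policy for one enclosure (documented values include adaptive, balanced, minimumq, priority, round-robin and singleactive):
# vxdmpadm setattr enclosure EMC0 iopolicy=round-robin

Verify the change:
# vxdmpadm getattr enclosure EMC0 iopolicy

Check per-path I/O statistics before and after to judge the effect:
# vxdmpadm iostat start
# vxdmpadm iostat show all

The setattr change applies per enclosure and takes effect immediately; whether round-robin, minimumq or adaptive is appropriate depends on the array's active/active behaviour, so the array vendor's host connectivity guide is the place to confirm the recommendation.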
Locked out: dmp_rel_exclusive_lock on Solaris-9 E6900 hardware

I have lock contention on my (E6900) Solaris 9 server that brought the server to its knees. I found that "dmp_rel_exclusive_lock" in VxVM was what was consuming all of the server's CPU resources. Since I did not know the right way to tackle this issue, I rebooted the server, and the lock contention was cleared. By the way, I used lockstat to capture the system locks. Has anyone encountered such a locking problem in VxVM 5.0 MP1? Any thoughts... Thanks, RR
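For reference, a sketch of the kind of lockstat capture that is useful to collect before rebooting, assuming the standard Solaris 9 lockstat (the durations and counts below are arbitrary examples):

Record lock contention events for 10 seconds and show the top 20 of each type:
# lockstat -C -D 20 sleep 10

Profile where the kernel is spending CPU time, to confirm dmp_rel_exclusive_lock is at the top:
# lockstat -kIW -D 20 sleep 10

Correlating the lockstat output with mpstat (watch the smtx and srw columns for mutex and reader/writer spins) makes it easier to show Support how much CPU the contention was costing before the reboot cleared it.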