I had been successfully running FSS with (thin) 8 TB disk drives on SFCFSHA 6.1 and 6.2.1.
I am trying to reproduce the same kind of setup with InfoScale 7.1, and it seems to have issues with 8 TB drives.
Here's the full setup description:
2 x RHEL6.8 hosts with 16 GB RAM.
4 LSI virtual adapters, each with 15 drives.
c0* and c1* have 2 TB drives.
c2* and c3* have 8 TB drives.
Both the 2 TB and 8 TB drives are 'exported' and the cluster is stable.
Here's what I noticed: creating an FSS DG works on the 2 TB drives but not on the 8 TB drives (it used to work on 6.1 and 6.2.1):
[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_2T_00
[root@vcs18 ~]# vxdg list FSS00dg
flags: shared cds
alignment: 8192 (bytes)
cluster-actv-modes: vcs18=sw vcs19=sw
copies: nconfig=default nlog=default
config: seqno=0.1027 permlen=51360 free=51357 templen=2 loglen=4096
config disk ssd_2T_00 copy 1 len=51360 state=clean online
log disk ssd_2T_00 copy 1 len=4096
On the 8 TB drives, it fails with:
[root@vcs18 ~]# vxdg destroy FSS00dg
[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_8T_00
VxVM vxdg ERROR V-5-1-585 Disk group FSS00dg: cannot create: Record not in disk group
One thing I noticed is that the 8 TB drives, even though exported, do -not- show up on the remote machine:
[root@vcs18 ~]# vxdisk list|grep _00
ssd_2T_00 auto:cdsdisk - - online exported
ssd_2T_00_1 auto:cdsdisk - - online remote
ssd_8T_00 auto:cdsdisk - - online exported
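A quick way to spot that asymmetry is to diff the exported disks against their remote counterparts. The snippet below is a minimal sketch that parses the `vxdisk list` output quoted above, assuming (as in that listing) that a disk exported from the other node shows up locally with a `_1` suffix and the `remote` state:

```shell
# Sample lines from the 'vxdisk list | grep _00' session above.
vxdisk_out='ssd_2T_00    auto:cdsdisk  -  -  online exported
ssd_2T_00_1  auto:cdsdisk  -  -  online remote
ssd_8T_00    auto:cdsdisk  -  -  online exported'

# Flag disks that are exported locally but have no matching remote entry.
# (On a live node you would feed real 'vxdisk list' output instead.)
missing=$(printf '%s\n' "$vxdisk_out" | awk '
  $NF == "exported" { exported[$1] = 1 }
  $NF == "remote"   { sub(/_1$/, "", $1); remote[$1] = 1 }
  END { for (d in exported) if (!(d in remote)) print d }')
echo "$missing"   # → ssd_8T_00
```

With the listing above, only ssd_8T_00 is reported, matching what I see: the 8 TB drive is exported but never materializes as a remote disk on the peer.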
One other thing to note is that the 'connectivity' field seems a bit messed up on the 8 TB drives:
[root@vcs18 ~]# vxdisk list ssd_2T_00|grep conn
[root@vcs18 ~]# vxdisk list ssd_2T_00_1|grep conn
[root@vcs18 ~]# vxdisk list ssd_8T_00|grep conn
connectivity: vcs18 vcs19
That's (IMHO) an error, since those 'virtual' drives are local to each node and the SCSI buses aren't shared;
vcs18 and vcs19 are two fully independent VMware machines.
This looks like a bug to me, but since I no longer work for a company with a Veritas software support contract, I cannot report the issue.
Thanks for reading,
IS 7.1 is not supported on RHEL6.8. I had a working cluster on 6.7 recently; the customer upgraded/patched it and the agents stopped working. We then tried 6.2.1, which is "supported" (an LLT patch is available), but then a bunch of other things stopped working.
Do you have more information on the agents that failed? Is there a BZ open with Red Hat about the regression?
I admit most of my 7.1 and 6.2.1 setups on RHEL6.8 -DO- work great. :)
As I said before, it's only large disks (I don't know the exact threshold, but I tried both 2 TB and 8 TB disks) that do not
work under InfoScale 7.1 on RHEL6.8 for a shared FSS DG.
If I stay with SFCFSHA 6.2.1, 8 TB disks work fine on RHEL6.8.
Sounds like an issue in IS 7.1.
I might try the RHEL6.7 kernel if you think that may help.
For the record, the same FSS setup with 2 TB disks works perfectly under IS 7.1 on RHEL6.8.
It's only large disks (I don't know -how- large, as I only tried 2 TB and 8 TB disks) that fail to allow creation
of a shared FSS DG on them.
DG creation works fine if I don't use a shared DG (no '-s').
It was the coordination point and Oracle agents. Support told me that RHEL6.8 isn't supported with 7.1. We then went down to 6.2.1 and it was still a disaster. PowerPath was also found to be unsupported on 6.8, so in the end we just reinstalled 6.7.
That's all I got :)
This just came out: https://sort.veritas.com/patch/detail/11652
It seems IS 7.1 is now supported/supportable on RHEL6.8.
PowerPath on Linux was often a disaster in itself; weren't the features you needed supported by DMP?
Nice, thanks for letting me know. The client insisted on using PP :o; they also insist on running their service groups frozen. Need I say more?