11-14-2011 10:45 AM
Hi Guys,
I am facing a CHS geometry issue while migrating data from one LUN to another under a Veritas disk group.
The output below shows the difference in geometry after initializing the disk. The OS sees the disk without any issues, but Veritas reports a different size.
The backend LUNs are from a Hitachi USP-V (new) and an AMS (old).
The B60Aa_apps_dg0X disks are the newly added ones.
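For context, CHS translation alone can explain small size differences: a geometry of 255 heads and 63 sectors/track rounds usable capacity down to whole cylinders. A minimal sketch with an illustrative sector count (not taken from either array's tooling):

```shell
# Illustrative only: show how CHS rounding trims sectors from a LUN.
sectors_per_cyl=$(( 255 * 63 ))   # one cylinder = 16065 sectors
lun_sectors=104755200             # hypothetical raw LUN size in 512-byte sectors
cyls=$(( lun_sectors / sectors_per_cyl ))
usable=$(( cyls * sectors_per_cyl ))
echo "cylinders=$cyls usable=$usable lost=$(( lun_sectors - usable ))"
# -> cylinders=6520 usable=104743800 lost=11400
```

Anything below one cylinder's worth of sectors is simply lost to the translation, which is why two LUNs carved to the same raw size can end up with slightly different usable lengths.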
11-14-2011 07:06 PM
The newly added disks are correctly configured, using the whole space of the LU.
CHS geometry depends on the physical disk structure and is out of VxVM's control.
The only issue I can find is that the capacity of the newly added disks is slightly smaller than that of the older disks, although you meant to make the LUs the same size.
11-14-2011 09:25 PM
Were the old disks perhaps initialized with an older SF version?
The private region has been increased in recent SF versions to allow for more diskgroup components.
Please post output of one of each <old-disk> and <new-disk>:
vxdisk list <old-disk>
vxdisk list <new-disk>
11-15-2011 12:10 AM
Dear Yasuhisa,
Thanks for replying !!
Yes, that's exactly the problem: the capacities differ even though we allocated the same number of blocks on the LU side. The problem is that if we enlarge the LUN we want to migrate to, and we later need to fall back, we won't be able to, since the source LUN would be smaller.
Do you see any way we can sort this situation ?
Regards
Rahul
11-15-2011 12:13 AM
Dear Marianne,
Thanks for the reply.
I will try to get this information ASAP since I don't have access to the server right now. In the meantime, do you have any suggestions?
Regards
Rahul
11-15-2011 01:51 AM
Dear Marianne,
Here is the info you required
old disk
# vxdisk list ams_wms0_100
Device: ams_wms0_100
devicetag: ams_wms0_100
type: auto
hostid: D01TODB60Aa
disk: name=DISK_F0064 id=1269689604.55.D01TODB60Aa
group: name=B60Aa_apps_dg id=1269692264.124.D01TODB60Aa
info: format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags: online ready private autoconfig autoimport imported
pubpaths: block=/dev/vx/dmp/ams_wms0_100s3 char=/dev/vx/rdmp/ams_wms0_100s3
guid: -
udid: HITACHI%5FDF600F%5F77014088%5F0064
site: -
version: 3.1
iosize: min=512 (bytes) max=1024 (blocks)
public: slice=3 offset=65792 len=104689408 disk_offset=0
private: slice=3 offset=256 len=65536 disk_offset=0
update: time=1321293381 seqno=0.37
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=51360
logs: count=1 len=4096
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 enabled
config priv 000256-051423[051168]: copy=01 offset=000192 enabled
log priv 051424-055519[004096]: copy=01 offset=000000 disabled
lockrgn priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths: 2
sdbr state=enabled type=primary
sdai state=enabled type=secondary
new disk
[root@D01TODB60Aa ~]# vxdisk list hitachi_usp-v0_0c1a
Device: hitachi_usp-v0_0c1a
devicetag: hitachi_usp-v0_0c1a
type: auto
hostid: D01TODB60Aa
disk: name=B60Aa_apps_dg01 id=1321288154.81.D01TODB60Aa
group: name=B60Aa_apps_dg id=1269692264.124.D01TODB60Aa
info: format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags: online ready private autoconfig autoimport imported thinrclm
pubpaths: block=/dev/vx/dmp/hitachi_usp-v0_0c1as3 char=/dev/vx/rdmp/hitachi_usp-v0_0c1as3
guid: {ca77779a-0edd-11e1-8abf-bcfa7a713f5e}
udid: HITACHI%5FOPEN-V%5F13236%5F0C1A
site: -
version: 3.1
iosize: min=512 (bytes) max=1024 (blocks)
public: slice=3 offset=65792 len=104661936 disk_offset=0
private: slice=3 offset=256 len=65536 disk_offset=0
update: time=1321293381 seqno=0.23
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=51360
logs: count=1 len=4096
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 enabled
config priv 000256-051423[051168]: copy=01 offset=000192 enabled
log priv 051424-055519[004096]: copy=01 offset=000000 enabled
lockrgn priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths: 2
sdaj state=enabled
sda state=enabled
# rpm -qa | grep VRTSvxvm
VRTSvxvm-5.1.100.000-SP1_RHEL5
# rpm -qa | grep VRTS
VRTSlvmconv-5.1.100.000-SP1_RHEL5
VRTSvxvm-5.1.100.000-SP1_RHEL5
VRTSodm-5.1.100.000-SP1_GA_RHEL5
VRTSperl-5.10.0.7-RHEL5.3
VRTSatServer-5.0.32.0-0
VRTSaslapm-5.1.100.000-SP1_RHEL5
VRTSvxfs-5.1.100.000-SP1_GA_RHEL5
VRTSvlic-3.02.51.010-0
VRTSob-3.4.289-0
VRTSfssdk-5.1.100.000-SP1_GA_RHEL5
VRTSicsco-1.3.28.0-0
VRTSpbx-1.3.28.0-0
VRTSspt-5.5.000.005-GA
VRTSdbed-5.1.100.000-SP1_RHEL5
VRTSatClient-5.0.32.0-0
VRTSsfmh-3.1.429.0-0
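The shortfall can be quantified directly from the two vxdisk listings above, using their public-region lengths:

```shell
old_pub=104689408   # ams_wms0_100: public len in 512-byte sectors
new_pub=104661936   # hitachi_usp-v0_0c1a: public len in 512-byte sectors
diff=$(( old_pub - new_pub ))
echo "$diff sectors = $(( diff * 512 / 1024 / 1024 )) MiB short on the new LUN"
# -> 27472 sectors = 13 MiB short on the new LUN
```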
11-16-2011 04:43 AM
Does the tool that created the LUN show the usable length of the new LUN in sectors? And does this match the label on the disk?
cheers
tony
11-16-2011 10:17 PM
Using the "x extra functionality (experts only)" menu in fdisk, you can manipulate CHS at the OS level.
But this is a risky operation. I recommend the following options instead.
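The expert-mode flow being described looks roughly like this interactive session (a sketch only; /dev/sdX is a placeholder, and writing a new label can destroy data):

```
# fdisk /dev/sdX
Command (m for help): x          <- enter expert mode
Expert command (m for help): h   <- change number of heads
Expert command (m for help): s   <- change sectors/track
Expert command (m for help): c   <- change number of cylinders
Expert command (m for help): r   <- return to main menu
Command (m for help): w          <- write table (or q to quit without saving)
```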
11-21-2011 03:33 AM
Hi Guys,
Just to let you know what we did to solve this problem:
we changed the private region length from 32MB to 18MB to gain some more space in the public region, and things worked fine.
@Tony: The tool shows the LUN in blocks, and it is exactly the same size as the one we want to migrate from the old frame. At the OS level it is fine, but after Veritas initialization the usable length differs from what the OS reports.
@Yasuhisa: That's what I think we will have to do to avoid changing the private region length for bigger disk groups.
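For reference, the re-initialization step can be sketched with vxdisksetup's privlen attribute. The sector conversion below is runnable; the vxdisksetup line itself is shown as a comment because it needs the actual VxVM host, and the exact attributes should be checked against vxdisksetup(1M) for your version:

```shell
# 18MB private region expressed in 512-byte sectors
privlen=$(( 18 * 1024 * 1024 / 512 ))
echo "privlen=$privlen"
# -> privlen=36864

# Re-initialize the new LUN with the smaller private region (VxVM host only):
#   /etc/vx/bin/vxdisksetup -i hitachi_usp-v0_0c1a format=cdsdisk privlen=$privlen
```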