Does VxVM work with a "disk snapshot which is a block device"

sunillp
Level 3

Hi,

We have a dg with a single physical disk in "cdsdisk" format. I created a snapshot device (a block device), SNAP_BLOCK_DEV, for this physical disk. The snapshot device contains the same data as the physical disk (there is a one-to-one mapping).

Now, I want to present this snapshot device to VxVM, i.e. I want VxVM to work with the thin snapshot device (a block device). Is there a way? I tried the following approach:

- Added SNAP_BLOCK_DEV as a "foreign device". After this I see SNAP_BLOCK_DEV in "vxdisk list", but it shows up as a "simple" disk in the "invalid" state. I want SNAP_BLOCK_DEV to be seen by VxVM as a "cdsdisk" in the "online valid" state, with my data still intact. Is there a way?

- Is there a way to tell VxVM to look up/scan for devices in a specific directory (other than the foreign-device mechanism)? Does VxVM scan only physical devices, or can it also scan and work with block devices?

Thanks,
Sunil
1 ACCEPTED SOLUTION

Marianne
Moderator
Partner    VIP    Accredited Certified

Please go to this URL:

http://www.symantec.com/business/support/index?page=landing&key=15107

On the right-hand side of the screen you will find "Contacting Support".

There are various options - email, log a call online, etc.

Under Contact Technical Support you can select your country - a phone number will be displayed.

 

Please give us feedback. I have not been able to find any information about your snapshot method - not sure if it's supported.


21 REPLIES

Gaurav_S
Moderator
   VIP    Certified
I somewhat doubt that Veritas can see the private region of your snap device... What does prtvtoc on the device show?

Even if it shows as a simple disk, can you see the private region on that device?

What does "vxdisk list <snap>" show?

Only once I see this can I comment further..


Gaurav

Marianne
Moderator
Partner    VIP    Accredited Certified

Please share more information about your hardware and snapshot method, as well as your VxVM and O/S versions.

The VxVM Admin Guide describes how hardware snapshots can be imported (as from 5.0):

p. 243:

Handling cloned disks with duplicated identifiers
A disk may be copied by creating a hardware snapshot (such as an EMC BCV™ or Hitachi ShadowCopy™) or clone, by using dd or a similar command to replicate the disk, or by building a new LUN from the space that was previously used by a deleted LUN. To avoid the duplicate disk ID condition, the default action of VxVM is to prevent such duplicated disks from being imported.

Advanced disk arrays provide hardware tools that you can use to create clones of existing disks outside the control of VxVM. For example, these disks may have been created as hardware snapshots or mirrors of existing disks in a disk group.
As a result, the VxVM private region is also duplicated on the cloned disk. When the disk group containing the original disk is subsequently imported, VxVM detects multiple disks that have the same disk identifier that is defined in the private region. In releases prior to 5.0, if VxVM could not determine which disk was the original, it would not import such disks into the disk group. The duplicated disks would have to be re-initialized before they could be imported.

From release 5.0, a unique disk identifier (UDID) is added to the disk’s private region when the disk is initialized or when the disk is imported into a disk group (if this identifier does not already exist). Whenever a disk is brought online, the current UDID value that is known to the Device Discovery Layer (DDL) is compared with the UDID that is set in the disk’s private region. If the UDID values do not match, the udid_mismatch flag is set on the disk. This flag can be viewed with the vxdisk list command. This allows a LUN snapshot to be imported on the same host as the original LUN. It also allows multiple snapshots of the same LUN to be simultaneously imported on a single server, which can be useful for off-host backup and processing.

A new set of vxdisk and vxdg operations are provided to handle such disks; either by writing the DDL value of the UDID to a disk’s private region, or by tagging a disk and specifying that it is a cloned disk to the vxdg import operation.

...........
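For reference, the "new set of vxdisk and vxdg operations" the excerpt mentions looks roughly like the sketch below (based on the 5.x admin guide; the disk-group and disk names are placeholders, exact options can vary by release, and none of this has been confirmed against Sunil's virtual snapshot device):

```shell
# Disks whose DDL UDID differs from the UDID stored in the private
# region are flagged by vxdisk:
vxdisk -o alldgs list | grep udid_mismatch

# Import a disk group from its cloned/snapshot copies, regenerating
# the dg id and UDIDs so they can coexist with the originals:
vxdg -o useclonedev=on,updateid import mydg

# Or write the current DDL UDID into one disk's private region:
vxdisk updateudid disk_name
```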

sunillp
Level 3

Hi Gaurav/Marianne,

Please note my snap device is a block device, not a physical disk, so it does not support SCSI commands, only block-level commands. Also, the real physical devices are excluded from VxVM, so VxVM does not see them. Only the cloned/snapshot device is shown to VxVM, so I think there should not be an issue like udid_mismatch. We are using VxVM 5.1 on Solaris 10 SPARC (Solaris 10 10/09 s10s_u8wos_08a SPARC).

 

Gaurav, earlier prtvtoc for the physical disk and our cloned snapshot device did not show the same output, but we changed/updated our driver and now prtvtoc shows the same output for both.

Physical disk:

-bash-3.00# prtvtoc /dev/rdsk/c2t50060E80104ABD40d5s2
* /dev/rdsk/c2t50060E80104ABD40d5s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     768 sectors/track
*      50 tracks/cylinder
*   38400 sectors/cylinder
*      54 cylinders
*      52 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   1996800   1996799
       7     15    01          0   1996800   1996799
 

Snapshot device:

-bash-3.00# prtvtoc /dev/vs/rdsk/vsnap3261in
* /dev/vs/rdsk/vsnap3261in (volume "vsnap32") partition map
*
* Dimensions:
*     512 bytes/sector
*     768 sectors/track
*      50 tracks/cylinder
*   38400 sectors/cylinder
*      54 cylinders
*      52 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   1996800   1996799
       7     15    01          0   1996800   1996799
 

The output of "/etc/vx/bin/vxprtvtoc" is also the same for both the physical disk and the snapshot device:

#THE PARTITIONING OF /dev/vs/rdsk/vsnap3261in IS AS FOLLOWS :
#SLICE     TAG  FLAGS    START     SIZE
 0         0x0  0x000        0        0
 1         0x0  0x000        0        0
 2         0x5  0x201        0  1996800
 3         0x0  0x000        0        0
 4         0x0  0x000        0        0
 5         0x0  0x000        0        0
 6         0x0  0x000        0        0
 7         0xf  0x201        0  1996800
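Since the prtvtoc headers embed the (differing) device paths, a quick way to confirm the two partition maps really match is to strip the '*' comment lines before comparing. A minimal sketch, with the rows from the two prtvtoc outputs above inlined as sample data:

```shell
# Compare two prtvtoc outputs ignoring '*' comment lines (which carry
# the differing device paths). The rows below are copied from the
# physical-disk and snapshot-device outputs above.
strip_comments() { grep -v '^\*'; }

phys=$(strip_comments <<'EOF'
* /dev/rdsk/c2t50060E80104ABD40d5s2 partition map
       2      5    01          0   1996800   1996799
       7     15    01          0   1996800   1996799
EOF
)

snap=$(strip_comments <<'EOF'
* /dev/vs/rdsk/vsnap3261in (volume "vsnap32") partition map
       2      5    01          0   1996800   1996799
       7     15    01          0   1996800   1996799
EOF
)

[ "$phys" = "$snap" ] && echo "partition maps match"
```

On live devices the same comparison can be made by piping `prtvtoc <dev>` through the filter instead of the heredocs.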
 

However, vxdisk list for the snapshot device does not show the private and public regions:

-bash-3.00# vxdisk list

vsnap3261in  simple          -            -            online invalid

vsnap3261in was added as a foreign device using the following command:

# vxddladm addforeign blockpath=/dev/vs/dsk/vsnap3261in charpath=/dev/vs/rdsk/vsnap3261in

-bash-3.00# vxdisk list vsnap3261in
Device:    vsnap3261in
devicetag: vsnap3261in
type:      simple
flags:     online ready private foreign invalid
pubpaths:  block=/dev/vs/dsk/vsnap3261in char=/dev/vs/rdsk/vsnap3261in
guid:      -
udid:      -
site:      -
 

vxdisk list for physical disk:

# vxdisk list c2t50060E80104ABD40d5s2
Device:    df8000_43
devicetag: df8000_43
type:      auto
clusterid: solaris-cluster
disk:      name= id=1282292789.20.solaris162.qa.us
group:     name=stipedg id=1284484457.43.solaris162.qa.us
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport
pubpaths:  block=/dev/vx/dmp/df8000_43s2 char=/dev/vx/rdmp/df8000_43s2
guid:      {a13ff454-ac34-11df-818f-0003ba3d3349}
udid:      HITACHI%5FDF600F%5F83040988%5F002B
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=1931008 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1284484460 seqno=0.55
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c2t50060E80104ABD40d5s2 state=enabled
c2t50060E80104ABD42d5s2 state=enabled
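As a side note on the cdsdisk layout shown above: the public region offset is exactly the private region offset plus its length, i.e. the public region starts right after the private region. A trivial check of that arithmetic:

```shell
# Values (in sectors) copied from the vxdisk list output above:
#   private: slice=2 offset=256 len=65536
#   public:  slice=2 offset=65792
priv_offset=256
priv_len=65536
pub_offset=65792

[ $((priv_offset + priv_len)) -eq "$pub_offset" ] \
  && echo "public region starts immediately after the private region"
```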

 

Please tell us how to make VxVM see the private region of our snapshot device. Do we need to support additional ioctls in our snapshot driver? Could using an ASL for our snapshot device (foreign device) help? This is very important for us and your help is urgently needed. Can we communicate over gtalk? My gtalk id is "sunillp"; please ping me whenever you are online.

sunillp
Level 3

Hi, any updates?

Gaurav_S
Moderator
   VIP    Certified

I somehow missed this post completely....  :(

 

A few things are bothering me:

-- For a cdsdisk, pubslice & privslice should be slice 2:

format=cdsdisk,privoffset=256,pubslice=2,privslice=2

but prtvtoc shows allocated space on slice 7 as well, for both the physical disk and the snapshot device... so I would like to know where this physical disk is coming from...

-- The "simple" disk type is an old format, common in the 3.x versions. In that format one particular slice was used instead of the whole disk; that is what VxVM is recognizing the snapshot device as. But I would have thought the physical device should be recognized the same way (going by prtvtoc)... not sure how it is getting sensed as a cdsdisk...

 

-- Did you ever run "vxdctl enable" to rescan the disks?

-- I presume this won't work, but can you try seeing if you can dump the private region contents:

# /etc/vx/diag.d/vxprivutil dumpconfig /dev/vs/rdsk/vsnap3261in  (s2 or s7)

 

If we look at the VxVM 5.1 Admin Guide:

 

After adding entries for the foreign devices, use either the vxdisk scandisks or
the vxdctl enable command to discover the devices as simple disks. These disks
then behave in the same way as autoconfigured disks. 

 

so I understand from the above that the device would in any case be detected as a simple device only...

Again as per admin guide:

 

■ The I/O Fencing and Cluster File System features are not supported for foreign
devices.

If a suitable ASL is available and installed for an array, these limitations are
removed.


See “Third-party driver coexistence” on page 90.

 

So installing an ASL would be good..

Also curious to know: did you at any time set a tpdmode (native or pseudo) using the vxddladm command?

 

Let me know..

I understand you need a quick answer; however, the issue seems a little different...

 

Gaurav

sunillp
Level 3

Hi Gaurav,

Yes, we did try both "vxdctl enable" and "vxdisk scandisks". Only after this were we able to see our "vsnap device" in the "vxdisk list" output as "simple" and "online invalid".

# /etc/vx/diag.d/vxprivutil dumpconfig /dev/vs/rdsk/vsnap3261in

does work and shows the correct/desired output. I am attaching the output below (this is for a different snapshot). My questions are:

- Can VxVM work with a block device that does not support SCSI commands, supports only block-level commands, is a snapshot of a physical disk, and has all the data that is on the physical disk? I remember that VxVM, or rather VxDMP/DDL, issues some SCSI commands to the disk: it gets the disk UDID from SCSI inquiry page 0x83, the vendor name/product id from page 0x00, and the product/LUN serial number from page 0x80. In our case these commands will not work. Is this an issue?

- For an ASL, again I guess we need to support SCSI commands, i.e. respond to the SCSI inquiry commands sent by DDL (VxVM or VxDMP), which we do not do today. Since we don't want to work with DMP/DDL and just want VxVM to recognise our snapshot device, is it possible to get this working without supporting SCSI commands?
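For what it's worth, those inquiry pages can be probed from the OS with the sg3_utils package (assuming it is installed; against a device that rejects SCSI commands these calls simply fail, which illustrates the problem):

```shell
sg_inq /dev/sdf              # standard inquiry: vendor and product id
sg_inq --page=0x80 /dev/sdf  # VPD page 0x80: unit serial number
sg_inq --page=0x83 /dev/sdf  # VPD page 0x83: device identification (DDL's UDID source)
```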

 

[root@imits074 ~]# /etc/vx/diag.d/vxprivutil dumpconfig /dev/vs/cli6in
#Config copy 01

#Header nblocks=205440 blksize=128 hdrsize=512
#flags=0x100 (CLEAN)
#version: 4/15
#dgname: sharedg  dgid: 1284617401.17.imit2s002
#config: tid=0.1057 nstpool=0 nrvg=0 nrlink=0 ncache=0 nvol=2 nplex=2 nsd=4 ndm=2 nda=0 nexp=0
#pending: tid=0.1057 nstpool=0 nrvg=0 nrlink=0 ncache=0 nvol=2 nplex=2 nsd=4 ndm=2 nda=0 nexp=0
#
#Block    4: flag=0    ref=12   offset=0    frag_size=104
#Block    5: flag=0    ref=7    offset=0    frag_size=87
#Block    6: flag=0    ref=12   offset=104  frag_size=4
#Block    8: flag=0    ref=13   offset=0    frag_size=104
#Block    9: flag=0    ref=13   offset=104  frag_size=4
#Block   11: flag=0    ref=16   offset=0    frag_size=104
#Block   12: flag=0    ref=16   offset=104  frag_size=12
#Block   14: flag=0    ref=19   offset=0    frag_size=72
#Block   16: flag=0    ref=20   offset=0    frag_size=104
#Block   17: flag=0    ref=20   offset=104  frag_size=12
#Block   19: flag=0    ref=21   offset=0    frag_size=72
#Block   24: flag=0    ref=17   offset=0    frag_size=71
#Block   26: flag=0    ref=18   offset=0    frag_size=73
#Block   44: flag=0    ref=39   offset=0    frag_size=71
#Block   46: flag=0    ref=40   offset=0    frag_size=73
#
#Record    7: type=0x7f115 flags=0    gen_flags=0x4  size=87
#Blocks: 5
dg   sharedg
  comment="
  putil0="
  putil1="
  putil2="
  dgid=1284617401.17.imit2s002
  rid=0.1025
  update_tid=0.1028
  nconfig=default
  nlog=default
  base_minor=56000
  version=150
  cds=on
  alignment=16
  last_platform=LINUX
  maxdev=32767
#Record   12: type=0x1b3114 flags=0    gen_flags=0x4  size=108
#Blocks: 4 6
dm   sde
  comment="
  putil0="
  putil1="
  putil2="
  diskid=1284617392.13.imit2s002
  last_diskid=1284617392.13.imit2s002
  last_da_name=sdh
  rid=0.1026
  guid={0a92a21a-c159-11df-9e92-fd91ea1b233e}
  allocator_reserved=off
  removed=off
  detached=off
  spare=off
  failing=off
  missing=off
  update_tid=0.1056
  last_da_dev=2160
  last_disk_offset=65792
  ssbid=0.0
  last_mediatype=hdd
#Record   13: type=0x1b3114 flags=0    gen_flags=0x4  size=108
#Blocks: 8 9
dm   sdf
  comment="
  putil0="
  putil1="
  putil2="
  diskid=1284617397.15.imit2s002
  last_diskid=1284617397.15.imit2s002
  last_da_name=sdi
  rid=0.1027
  guid={0a92ab7a-c159-11df-ab65-cc2cb73fa471}
  allocator_reserved=off
  removed=off
  detached=off
  spare=off
  failing=off
  missing=off
  update_tid=0.1056
  last_da_dev=2176
  last_disk_offset=65792
  ssbid=0.0
  last_mediatype=hdd
#Record   16: type=0x5108121 flags=0    gen_flags=0x4  size=116
#Blocks: 11 12
vol  vol01
  use_type=fsgen
  fstype="
  comment="
  putil0="
  putil1="
  putil2="
  state="ACTIVE
  writeback=on
  writecopy=off
  specify_writecopy=off
  pl_num=1
  start_opts="
  read_pol=SELECT
  minor=56000
  user=root
  group=root
  mode=0600
  log_type=REGION
  len=10485760
  log_len=0
  update_tid=0.1057
  rid=0.1029
  detach_tid=0.0
  active=off
  forceminor=off
  badlog=off
  recover_checkpoint=16
  sd_num=0
  sdnum=0
  kdetach=off
  storage=off
  readonly=off
  layered=off
  apprecover=off
  recover_seqno=0
  recov_id=0
  primary_datavol=
  vvr_tag=0
  morph=off
  guid={437e40b8-c15c-11df-b912-c6d52045e0ae}
  inst_invalid=off
  incomplete=off
  instant=off
  restore=off
  snap_after_restore=off
  iscachevol=off
  oldlog=off
  nostart=off
  norecov=off
  logmap_align=0
  logmap_len=0
  inst_src_guid={00000000-0000-0000-0000-000000000000}
  cascaded=off
#Record   17: type=0x20123 flags=0    gen_flags=0x4  size=71
#Blocks: 24
sd   sde-01
  comment="
  putil0="
  putil1="
  putil2="
  dm_offset=0
  pl_offset=0
  len=5242880
  update_tid=0.1037
  rid=0.1034
  guid={437e54b8-c15c-11df-9e82-bf73fd0b57fd}
  plex_rid=0.1032
  dm_rid=0.1026
  minor=0
  detach_tid=0.0
  column=0
  mkdevice=off
  subvolume=off
  subcache=off
  stale=off
  kdetach=off
  relocate=off
  sd_name=
  uber_name=
  tentmv_src=off
  tentmv_tgt=off
  tentmv_pnd=off
  reclaim_pnd=off
  reclaim_done=off
#Record   18: type=0x20923 flags=0    gen_flags=0x4  size=73
#Blocks: 26
sd   sdf-01
  comment="
  putil0="
  putil1="
  putil2="
  dm_offset=0
  pl_offset=0
  len=5242880
  update_tid=0.1037
  rid=0.1036
  guid={4382c9f8-c15c-11df-8e08-0d7a5f2a733b}
  plex_rid=0.1032
  dm_rid=0.1027
  minor=0
  detach_tid=0.0
  column=1
  mkdevice=off
  subvolume=off
  subcache=off
  stale=off
  kdetach=off
  relocate=off
  sd_name=
  uber_name=
  tentmv_src=off
  tentmv_tgt=off
  tentmv_pnd=off
  reclaim_pnd=off
  reclaim_done=off
#Record   19: type=0x5022 flags=0    gen_flags=0x4  size=72
#Blocks: 14
plex vol01-01
  comment="
  putil0="
  putil1="
  putil2="
  layout=STRIPE
  st_width=128
  sd_num=2
  state="ACTIVE
  log_sd=
  update_tid=0.1057
  rid=0.1032
  vol_rid=0.1029
  detach_tid=0.0
  log=off
  noerror=off
  kdetach=off
  stale=off
  ncolumn=2
  raidlog=off
  guid={437e4c16-c15c-11df-af45-12b9dc21b0fb}
  mapguid={00000000-0000-0000-0000-000000000000}
#Record   20: type=0x5108121 flags=0    gen_flags=0x4  size=116
#Blocks: 16 17
vol  vol02
  use_type=fsgen
  fstype="
  comment="
  putil0="
  putil1="
  putil2="
  state="ACTIVE
  writeback=on
  writecopy=off
  specify_writecopy=off
  pl_num=1
  start_opts="
  read_pol=SELECT
  minor=56001
  user=root
  group=root
  mode=0600
  log_type=REGION
  len=10337792
  log_len=0
  update_tid=0.1057
  rid=0.1039
  detach_tid=0.0
  active=off
  forceminor=off
  badlog=off
  recover_checkpoint=16
  sd_num=0
  sdnum=0
  kdetach=off
  storage=off
  readonly=off
  layered=off
  apprecover=off
  recover_seqno=0
  recov_id=0
  primary_datavol=
  vvr_tag=0
  morph=off
  guid={4f30ed34-c15c-11df-8d11-64e42d11c61f}
  inst_invalid=off
  incomplete=off
  instant=off
  restore=off
  snap_after_restore=off
  iscachevol=off
  oldlog=off
  nostart=off
  norecov=off
  logmap_align=0
  logmap_len=0
  inst_src_guid={00000000-0000-0000-0000-000000000000}
  cascaded=off
#Record   21: type=0x5022 flags=0    gen_flags=0x4  size=72
#Blocks: 19
plex vol02-01
  comment="
  putil0="
  putil1="
  putil2="
  layout=STRIPE
  st_width=128
  sd_num=2
  state="ACTIVE
  log_sd=
  update_tid=0.1057
  rid=0.1042
  vol_rid=0.1039
  detach_tid=0.0
  log=off
  noerror=off
  kdetach=off
  stale=off
  ncolumn=2
  raidlog=off
  guid={4f30fa90-c15c-11df-a3fa-817f2e0ece2d}
  mapguid={00000000-0000-0000-0000-000000000000}
#Record   39: type=0x20123 flags=0    gen_flags=0x4  size=71
#Blocks: 44
sd   sde-02
  comment="
  putil0="
  putil1="
  putil2="
  dm_offset=5242880
  pl_offset=0
  len=5168896
  update_tid=0.1047
  rid=0.1044
  guid={4f31074c-c15c-11df-aa14-4e8910270113}
  plex_rid=0.1042
  dm_rid=0.1026
  minor=0
  detach_tid=0.0
  column=0
  mkdevice=off
  subvolume=off
  subcache=off
  stale=off
  kdetach=off
  relocate=off
  sd_name=
  uber_name=
  tentmv_src=off
  tentmv_tgt=off
  tentmv_pnd=off
  reclaim_pnd=off
  reclaim_done=off
#Record   40: type=0x20923 flags=0    gen_flags=0x4  size=73
#Blocks: 46
sd   sdf-02
  comment="
  putil0="
  putil1="
  putil2="
  dm_offset=5242880
  pl_offset=0
  len=5168896
  update_tid=0.1047
  rid=0.1046
  guid={4f31137c-c15c-11df-953c-0586370e646d}
  plex_rid=0.1042
  dm_rid=0.1027
  minor=0
  detach_tid=0.0
  column=1
  mkdevice=off
  subvolume=off
  subcache=off
  stale=off
  kdetach=off
  relocate=off
  sd_name=
  uber_name=
  tentmv_src=off
  tentmv_tgt=off
  tentmv_pnd=off
  reclaim_pnd=off
  reclaim_done=off
 

 

sunillp
Level 3

/etc/vx/diag.d/vxprivutil scan on the <vsnap-device> shows the following:

[root@imits074 ~]# /etc/vx/diag.d/vxprivutil scan /dev/vs/cli6in
diskid:  1284617397.15.imit2s002
group:   name=sharedg id=1284617401.17.imit2s002
flags:   shared autoimport cds
hostid:  linuxvcs
version: 3.1
iosize:  512
public:  slice=3 offset=65792 len=10411776
private: slice=3 offset=256 len=65536
update:  time: 1284545291  seqno: 0.13
headers: 0 240
configs: count=1 len=51360
logs:    count=1 len=4096
 

sunillp
Level 3

The vxdisk list output for the corresponding physical disk is:

[root@imit2s002 ~]# vxdisk list sdf
Device:    sdf
devicetag: sdf
type:      auto
clusterid: linuxvcs
disk:      name=sdf id=1284617397.15.imit2s002
group:     name=sharedg id=1284617401.17.imit2s002
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/sdf3 char=/dev/vx/rdmp/sdf3
guid:      {08438ea2-c159-11df-b8d8-4c5bf12692f8}
udid:      Promise%5FVTrak%20E310f%5FDISKS%5F22AB0001554373BF
site:      -
version:   3.1
iosize:    min=512 (bytes) max=1024 (blocks)
public:    slice=3 offset=65792 len=10411776 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1284545291 seqno=0.13
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   1
sdf     state=enabled
 

sunillp
Level 3

We are facing this problem on Linux as well. fdisk -l output for the snapshot device:

[root@imits074 ~]# fdisk -l /dev/vs/cli6in

Disk /dev/vs/cli6in (Sun disk label): 128 heads, 32 sectors, 2558 cylinders
Units = cylinders of 4096 * 512 bytes

         Device Flag    Start       End    Blocks   Id  System
/dev/vs/cli6in3  u          0      2558   5238784    5  Whole disk
/dev/vs/cli6in8  u          0      2558   5238784    f  Unknown
 

[root@imit2s002 ~]# fdisk -l /dev/sdf

Disk /dev/sdf (Sun disk label): 128 heads, 32 sectors, 2558 cylinders
Units = cylinders of 4096 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sdf3  u          0      2558   5238784    5  Whole disk
/dev/sdf8  u          0      2558   5238784    f  Unknown
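The sizes reported by fdisk are internally consistent: 2558 cylinders at 4096 sectors/cylinder and 512 bytes/sector, expressed in 1K blocks, gives exactly the 5238784 shown for both devices:

```shell
# Geometry from the fdisk outputs above:
cylinders=2558
sectors_per_cyl=4096
bytes_per_sector=512

# fdisk's "Blocks" column is in 1K units.
blocks_1k=$(( cylinders * sectors_per_cyl * bytes_per_sector / 1024 ))
echo "$blocks_1k"   # 5238784
```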
 

sunillp
Level 3

After adding the snapshot device (/dev/vs/cli6in) as a foreign device to VxVM:

[root@imits074 ~]# vxddladm addforeign path=/dev/vs/cli6in

[root@imits074 ~]# vxdisk scandisks
[root@imits074 ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
cli6in       simple          -            -            online invalid

[root@imits074 ~]# vxdisk list cli6in
Device:    cli6in
devicetag: cli6in
type:      simple
flags:     online ready private foreign invalid
pubpaths:  block=/dev/vs/cli6in char=/dev/vs/cli6in
guid:      -
udid:      ATA%5FST3808110AS%5FOTHER%5FDISKS%5Fimits074%5F%2Fdev%2Fsda
site:      -

 

sunillp
Level 3

> Also curious to know: did you at any time set a tpdmode (native or pseudo) using the vxddladm command?

No.

sunillp
Level 3

More output:

[root@imits074 ~]# /etc/vx/diag.d/vxprivutil list /dev/vs/cli6in
diskid:  1284617397.15.imit2s002
group:   name=sharedg id=1284617401.17.imit2s002
flags:   shared autoimport cds
hostid:  linuxvcs
version: 3.1
iosize:  512
public:  slice=3 offset=65792 len=10411776
private: slice=3 offset=256 len=65536
update:  time: 1284545291  seqno: 0.13
headers: 0 240
configs: count=1 len=51360
logs:    count=1 len=4096
tocblks: 0
tocs:    16/65520
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
 tagid    priv 065488-065503[000016]: tag=udid_scsi3=Promise%5FVTrak%20E310f%5FDISKS%5F22AB0001554373BF
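The udid tag above is percent-encoded (%5F is '_', %20 is a space). A one-liner to decode it (using python3 purely for the unquoting), confirming that the foreign device's private region carries the UDID of the original Promise LUN:

```shell
# UDID string copied from the vxprivutil output above:
udid='Promise%5FVTrak%20E310f%5FDISKS%5F22AB0001554373BF'

# Percent-decode it:
decoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' "$udid")
echo "$decoded"   # Promise_VTrak E310f_DISKS_22AB0001554373BF
```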
 

Gaurav_S
Moderator
   VIP    Certified

Sunil,

I think it is better if a Symantec employee answers this..

Looking at all your inputs above, my understanding is that you will need SCSI command support... but I would suggest you open a Symantec support case & get their official answer...

I was just thinking: the most common TPD drivers supported are for EMC (EMC PowerPath devices)... but there, too, SCSI commands are in play (the ioctls are managed by PowerPath)...

 

Gaurav

Gaurav_S
Moderator
   VIP    Certified

Hi Sunil,

 

How did you go with this? Did you manage to log a case with Symantec?

sunillp
Level 3

I think I did, from whatever I found online, but I am not sure that was the right way, as I haven't received any answer to my query. However, there was a mail from Symantec confirming they received mine. I thought you were a Symantec employee and could help me with this.

Gaurav_S
Moderator
   VIP    Certified

For a Symantec employee, you will see a "Symantec employee" tag in front of the user name, the same way I have a "Trusted advisor" tag... It will be far quicker to get an answer if you follow the Symantec support channel & get a case opened with tech support.

If you still can't do it, let me know & I will try to speak to the forum admins & see what can be done..

 

Gaurav

sunillp
Level 3

Hi Gaurav,

 

Can you please speak to the forum admins, or give me a link or phone number?

Marianne
Moderator
Partner    VIP    Accredited Certified

Please go to this URL:

http://www.symantec.com/business/support/index?page=landing&key=15107

On the right-hand side of the screen you will find "Contacting Support".

There are various options - email, log a call online, etc.

Under Contact Technical Support you can select your country - a phone number will be displayed.

 

Please give us feedback. I have not been able to find any information about your snapshot method - not sure if it's supported.


Gaurav_S
Moderator
   VIP    Certified

Agree with Marianne.... open up a support case to get a quick answer.... let us know if you face any difficulty there..

 

Gaurav