VxVM 5.0 - siteread policy

mlukes
Level 3
Dear experts,

I did some tests with an extended distance cluster where the data were mirrored at the VxVM level. I used the default read policy (siteread), so I expected VxVM to read only from the local plex (disks), but both plexes were being utilized.
I tried setting the siteconsistent flag on and off for the DG as well as for the volume, but the results were always the same.
Could anyone advise me what is wrong with my configuration and/or with my expectations?
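For reference, I toggled siteconsistent with commands along these lines (dgtest and test1 are my disk group and volume; the exact syntax may differ slightly between releases, so treat this as a sketch rather than a transcript):
dc2srv1:/ > vxdg -g dgtest set siteconsistent=on
dc2srv1:/ > vxvol -g dgtest set siteconsistent=on test1
dc2srv1:/ > vxdg -g dgtest set siteconsistent=off
dc2srv1:/ > vxvol -g dgtest set siteconsistent=off test1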

Thanks and regards,
Marek Lukes
7 REPLIES

ScottK
Level 5
Employee
Your expectations seem reasonable.
As for the configuration, can you provide some detail on it?
#vxdctl list | grep siteid
#vxdisk -g yourdiskgroup listtag
#vxprint -g yourdiskgroup
#vxprint -Z -g yourdiskgroup

I'm wondering if site consistency might not have been set; some of the site commands are less verbose than I would like when returning status.
Can you also comment on the nature of the test?
Did the test simply verify that no reads were served from the remote plex, or was it measuring performance or latency?


mlukes
Level 3
Hi Scott,

First, I would like to thank you for your answer. Below are the details you asked for:
dc2srv1:/ > vxdctl list | grep siteid
siteid: dc2
dc2srv1:/ > vxdisk -g dgtest listtag
DEVICE NAME VALUE
c3t0d0 site dc2
c3t0d1 site dc2
c3t0d2 site dc2
c3t0d3 site dc2
c5t0d0 site dc1
c5t0d1 site dc1
c5t0d2 site dc1
c5t0d3 site dc1
dc2srv1:/sbin/init.d > vxprint -g dgtest
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dgtest dgtest - - - - - -

SR dc1 - ACTIVE - - - - -
SR dc2 - ACTIVE - - - - -

dm dc1disk001 c5t0d0 - 8345984 - - - -
dm dc1disk002 c5t0d1 - 8345984 - - - -
dm dc1disk003 c5t0d2 - 8345984 - - - -
dm dc1disk004 c5t0d3 - 8345984 - - - -
dm dc2disk001 c3t0d0 - 8345984 - - - -
dm dc2disk002 c3t0d1 - 8345984 - - - -
dm dc2disk003 c3t0d2 - 8345984 - - - -
dm dc2disk004 c3t0d3 - 8345984 - - - -

v test1 fsgen ENABLED 25030656 - ACTIVE - -
pl test1-01 test1 ENABLED 25030656 - ACTIVE - -
sd dc1disk001-01 test1-01 ENABLED 8345984 0 - - -
sd dc1disk002-01 test1-01 ENABLED 8345984 8345984 - - -
sd dc1disk003-01 test1-01 ENABLED 8338688 16691968 - - -
pl test1-02 test1 ENABLED 25030656 - ACTIVE - -
sd dc2disk001-01 test1-02 ENABLED 8345984 0 - - -
sd dc2disk002-01 test1-02 ENABLED 8345984 8345984 - - -
sd dc2disk003-01 test1-02 ENABLED 8338688 16691968 - - -
dc2srv1:/root > vxprint -Z -g dgtest
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
SR dc1 - ACTIVE - - - - -
SR dc2 - ACTIVE - - - - -

Test description: I used vxbench to generate load on a file system on the mirrored volume and monitored the I/O per disk with vxstat (output below; the columns are read/write operations, read/write blocks, and average read/write time in ms):
Mon Jun 1 13:36:56 2009
dm dc1disk001 3668 0 57792 0 1.3 0.0
dm dc1disk002 4181 0 65864 0 1.3 0.0
dm dc1disk003 3919 0 61752 0 1.4 0.0
dm dc1disk004 0 0 0 0 0.0 0.0
dm dc2disk001 3667 0 57776 0 1.4 0.0
dm dc2disk002 4180 0 65848 0 1.2 0.0
dm dc2disk003 3916 0 61704 0 1.4 0.0
dm dc2disk004 0 0 0 0 0.0 0.0
Mon Jun 1 13:37:01 2009
dm dc1disk001 4393 0 69208 0 1.1 0.0
dm dc1disk002 4302 0 67772 0 1.2 0.0
dm dc1disk003 4809 0 75740 0 1.3 0.0
dm dc1disk004 0 0 0 0 0.0 0.0
dm dc2disk001 4394 0 69224 0 1.1 0.0
dm dc2disk002 4300 0 67740 0 1.1 0.0
dm dc2disk003 4811 0 75772 0 1.1 0.0
dm dc2disk004 0 0 0 0 0.0 0.0
Mon Jun 1 13:37:06 2009
dm dc1disk001 4421 0 69640 0 1.2 0.0
dm dc1disk002 4180 0 65860 0 1.1 0.0
dm dc1disk003 4222 0 66512 0 1.2 0.0
dm dc1disk004 0 0 0 0 0.0 0.0
dm dc2disk001 4420 0 69624 0 1.2 0.0
dm dc2disk002 4184 0 65916 0 1.3 0.0
dm dc2disk003 4221 0 66496 0 1.3 0.0
dm dc2disk004 0 0 0 0 0.0 0.0

One interesting thing I have just realized is that the average read times are almost equal for the local and the remote array, unlike writes, which took about 0.5 ms on the local array and 1.8 ms on the remote array.
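For completeness, the statistics above were gathered with something along these lines (the 5-second interval is read off the timestamps; the option names should be double-checked against your vxstat man page):
dc2srv1:/ > vxstat -g dgtest -d -i 5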

Regards,
Marek

ScottK
Level 5
Employee
Marek, I am going to ask a couple of other people to take a look at this.
In terms of the difference you measured: anecdotally, I hear that the latency difference is often quite small, but also that it can vary quite a bit depending on the distance and the network.
- Scott

Ashish_Yajnik
Level 3
Hello Marek,

I am asking an engineering expert to take a look at your data to see why reads are not happening from the local plex only.

thanks,
Ashish

Rajesh_Chepuri
Level 2
Employee
Hello Marek,

From the vxstat output, it appears that the read policy is round-robin. Just to get confirmation on this, can you provide the output
of the following command?
"/usr/sbin/vxprint -g dgtest -F%read_pol test1"

The details below suggest a possible reason why a volume in a site-configured disk group can end up with the "round" read policy.

A default read policy is assigned to a volume when it is created. If sites are already configured in the disk group when the volume is created, the new volume gets "siteread" as its default read policy. But if sites are not configured at the time of volume creation, a mirrored volume gets "round" as the default read policy. So if sites are configured later, the read policy has to be changed explicitly to "siteread" using the command below:

"/usr/sbin/vxvol -g dgtest rdpol siteread test1"

Thanks
Rajesh

mlukes
Level 3
Hello Rajesh,

thanks for your answer.

Unfortunately, I had to destroy my test environment, so I'm not able to repeat the same test right now. But I will be able to repeat the test soon in a new clustered environment, and I will provide its results as well. Nevertheless, I took notes during the test, so I have the detailed configuration of dgtest saved; please see below. As far as I understand the "vxprint -htg dgtest" output, the SITEREAD policy should be enabled for volume test1. Am I right?
dc1srv1:/root > vxprint -htg dgtest
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg dgtest default default 23000 1243517776.58.dc1srv1

sr dc1 ACTIVE
sr dc2 ACTIVE

dm dc1disk001 c5t0d0 auto 32768 8345984 -
dm dc1disk002 c5t0d1 auto 32768 8345984 -
dm dc1disk003 c5t0d2 auto 32768 8345984 -
dm dc1disk004 c5t0d3 auto 32768 8345984 -
dm dc2disk001 c3t0d0 auto 32768 8345984 -
dm dc2disk002 c3t0d1 auto 32768 8345984 -
dm dc2disk003 c3t0d2 auto 32768 8345984 -
dm dc2disk004 c3t0d3 auto 32768 8345984 -

v test1 - ENABLED SYNC 307200 SITEREAD - fsgen
pl test1-01 test1 ENABLED ACTIVE 307200 STRIPE 3/4096 RW
sd dc1disk001-01 test1-01 dc1disk001 0 102400 0/0 c5t0d0 ENA
sd dc1disk002-01 test1-01 dc1disk002 0 102400 1/0 c5t0d1 ENA
sd dc1disk003-01 test1-01 dc1disk003 0 102400 2/0 c5t0d2 ENA
pl test1-02 test1 ENABLED ACTIVE 307200 STRIPE 3/4096 RW
sd dc2disk001-01 test1-02 dc2disk001 0 102400 0/0 c3t0d0 ENA
sd dc2disk002-01 test1-02 dc2disk002 0 102400 1/0 c3t0d1 ENA
sd dc2disk003-01 test1-02 dc2disk003 0 102400 2/0 c3t0d2 ENA

Best regards,
Marek

Rajesh_Chepuri
Level 2
Employee
Hello Marek,

It appears that the notes you pasted recently differ from the initial configuration you had provided.
The volume "test1" that you initially provided has size 25030656, is a mirror-concat volume and is ACTIVE.
The volume "test1" that you pasted from your notes has size 307200, is a mirror-stripe volume and is in SYNC.

Do you have the "vxprint -ht" output for the initial configuration? Please make sure it is the one
corresponding to the vxstat output that you pasted above. Also, please consider recreating the setup if that is possible for you.
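If you do recreate it, a minimal site-aware mirrored volume can be built roughly as follows (the disk, site and size values are only placeholders taken from your earlier output, and the exact syntax, in particular the mirror=site allocation attribute, should be double-checked against the 5.0 Remote Mirror documentation):
"/usr/sbin/vxdctl set site=dc2"                  (run on each host with its own site name)
"/usr/sbin/vxdisk settag c3t0d0 site=dc2"        (repeat for every disk of each site)
"/usr/sbin/vxdg -g dgtest addsite dc1"
"/usr/sbin/vxdg -g dgtest addsite dc2"
"/usr/sbin/vxdg -g dgtest set siteconsistent=on"
"/usr/sbin/vxassist -g dgtest make test1 12g layout=mirror nmirror=2 mirror=site"
With the sites registered before the volume is created, test1 should come up with SITEREAD as its default read policy.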

Thanks
Rajesh