05-07-2015 03:21 PM
The server is under VxVM control.
We have to create a new filesystem, wind_index1, in the ora_datadg diskgroup.
Task to do: on server sydney, please add a new mount point /wind_index1 of 91 GB.
Free space available in disk group ora_datadg is 109 GB.
Below is the existing wind_index filesystem in ora_datadg; we have to create a new one as wind_index1.
df -h |grep -i wind_index
/dev/vx/dsk/ora_datadg/wind_index 91G 90G 1.4G 99% /wind_index
vxdisk list
emcpower86s2 auto:simple oradg02 oradg online thin
emcpower87s2 auto:simple oradg01 oradg online thin
emcpower88s2 auto:simple ora_datadg01 ora_datadg online thin
emcpower89s2 auto:simple ora_datadg02 ora_datadg online thin
emcpower90s2 auto:simple ora_datadg07 ora_datadg online thin
emcpower91s2 auto:simple ora_datadg05 ora_datadg online thin
emcpower92s2 auto:simple ora_datadg03 ora_datadg online thin
emcpower93s2 auto:simple ora_datadg04 ora_datadg online thin
emcpower94s2 auto:simple ora_datadg08 ora_datadg online thin
emcpower95s2 auto:simple ora_datadg06 ora_datadg online thin
emcpower96s2 auto:simple ora_archdg01 ora_archdg online thin
vxprint output:
v oracle - ENABLED ACTIVE 20971520 SELECT - fsgen
pl oracle-01 oracle ENABLED ACTIVE 20971520 CONCAT - RW
sd oradg01-01 oracle-01 oradg01 0 20971520 0 emcpower87 ENA
v wind_oracle - ENABLED ACTIVE 62914560 SELECT - fsgen
pl wind_oracle-01 wind_oracle ENABLED ACTIVE 62914560 CONCAT - RW
sd oradg02-01 wind_oracle-01 oradg02 0 62914560 0 emcpower86 ENA
v wind_oradmp - ENABLED ACTIVE 41943040 SELECT - fsgen
pl wind_oradmp-01 wind_oradmp ENABLED ACTIVE 41943040 CONCAT - RW
sd oradg01-02 wind_oradmp-01 oradg01 20971520 41943040 0 emcpower87 ENA
sd oradg01-03 wind_temp-02 oradg01 62914560 20971520 0 emcpower87 ENA
Disk group: ora_archdg
v wind_oraarchive - ENABLED ACTIVE 211812352 SELECT - fsgen
pl wind_oraarchive-01 wind_oraarchive ENABLED ACTIVE 211812352 CONCAT - RW
sd ora_archdg01-01 wind_oraarchive-01 ora_archdg01 0 129941504 0 emcpower96 ENA
sd ora_archdg01-07 wind_oraarchive-01 ora_archdg01 141402112 81870848 129941504 emcpower96 ENA
v wind_oractl1 - ENABLED ACTIVE 1024000 SELECT - fsgen
pl wind_oractl1-01 wind_oractl1 ENABLED ACTIVE 1024000 CONCAT - RW
sd ora_archdg01-02 wind_oractl1-01 ora_archdg01 129941504 1024000 0 emcpower96 ENA
v wind_oractl2 - ENABLED ACTIVE 1024000 SELECT - fsgen
pl wind_oractl2-01 wind_oractl2 ENABLED ACTIVE 1024000 CONCAT - RW
sd ora_archdg01-03 wind_oractl2-01 ora_archdg01 130965504 1024000 0 emcpower96 ENA
v wind_oractl3 - ENABLED ACTIVE 1024000 SELECT - fsgen
pl wind_oractl3-01 wind_oractl3 ENABLED ACTIVE 1024000 CONCAT - RW
sd ora_archdg01-04 wind_oractl3-01 ora_archdg01 131989504 1024000 0 emcpower96 ENA
sd ora_archdg01-05 wind_redo1-01 ora_archdg01 133013504 4194304 0 emcpower96 ENA
sd ora_archdg01-06 wind_redo2-01 ora_archdg01 137207808 4194304 0 emcpower96 ENA
Disk group: ora_datadg
sd ora_datadg01-01 wind_blobs1-01 ora_datadg01 0 213307632 0 emcpower88 ENA
sd ora_datadg03-01 wind_blobs2-01 ora_datadg03 0 213307632 0 emcpower92 ENA
sd ora_datadg02-01 wind_index-01 ora_datadg02 0 190840832 0 emcpower89 ENA
sd ora_datadg02-02 wind_others-01 ora_datadg02 190840832 20971520 0 emcpower89 ENA
sd ora_datadg05-01 wind_sys-01 ora_datadg05 0 10485760 0 emcpower91 ENA
sd ora_datadg06-01 wind_tools-01 ora_datadg06 0 20971520 0 emcpower95 ENA
sd ora_datadg07-01 wind_undo-01 ora_datadg07 0 20971520 0 emcpower90 ENA
sd ora_datadg04-01 wind_user1-01 ora_datadg04 0 146800640 0 emcpower93 ENA
sd ora_datadg05-02 wind_user2-01 ora_datadg05 10485760 146800640 0 emcpower91 ENA
sd ora_datadg06-02 wind_user3-01 ora_datadg06 20971520 146800640 0 emcpower95 ENA
sd ora_datadg07-02 wind_user4-01 ora_datadg07 20971520 146800640 0 emcpower90 ENA
sd ora_datadg08-01 wind_user5-01 ora_datadg08 0 146800640 0 emcpower94 ENA
sd ora_datadg08-02 wind_wtdocs-01 ora_datadg08 146800640 41943040 0 emcpower94 ENA
sd ora_datadg04-02 wind_wtpart-01 ora_datadg04 146800640 62914560 0 emcpower93 ENA
sd ora_datadg05-03 wind_wttrans-01 ora_datadg05 157286400 10485760 0 emcpower91 ENA
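The LENGTH column in the vxprint output above is in 512-byte sectors (2097152 sectors per GiB), which gives a quick sanity check against df; for example, the wind_index-01 subdisk's 190840832 sectors works out to the 91G shown in the df output:

```shell
# Convert a vxprint LENGTH value (512-byte sectors) to GiB.
# 190840832 is the wind_index-01 subdisk length from the vxprint output above.
sectors=190840832
echo "$((sectors / 2097152)) GiB"   # prints: 91 GiB
```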
Please suggest steps to perform the activity; kindly let me know if any more info is required.
Thanks
05-07-2015 08:22 PM
Hi,
Try:
1. vxassist -g ora_datadg make wind_index1 91g
2. mkfs -F vxfs /dev/vx/rdsk/ora_datadg/wind_index1
(if it's AIX, use mkfs -V; if it's Linux, use mkfs -t)
3. mkdir /wind_index1
4. mount -F vxfs /dev/vx/dsk/ora_datadg/wind_index1 /wind_index1
(if it's AIX, use -V; if it's Linux, use -t instead of -F)
If it needs to be added to VCS, more actions are required.
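The AIX/Linux flag differences in steps 2 and 4 can be captured in a small sketch. This is an assumption based on the per-OS notes above (Solaris -F, AIX -V, Linux -t), using the device path from step 2:

```shell
#!/bin/sh
# Pick the filesystem-type flag for mkfs/mount by OS (a sketch;
# the OS-to-flag mapping follows the per-OS notes in this post).
case "$(uname -s)" in
  AIX)   FSFLAG="-V" ;;   # AIX uses -V
  Linux) FSFLAG="-t" ;;   # Linux uses -t
  *)     FSFLAG="-F" ;;   # Solaris (SunOS) and the like use -F
esac
echo "mkfs $FSFLAG vxfs /dev/vx/rdsk/ora_datadg/wind_index1"
```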
Regards
05-08-2015 07:14 AM
When I checked the vfstab configuration, these filesystems are not listed there.
But when I checked main.cf, it shows entries like this:
Mount mnt_wind_blobs2 (
MountPoint = "/wind_blobs2"
BlockDevice = "/dev/vx/dsk/ora_datadg/wind_blobs2"
FSType = vxfs
FsckOpt = "-y"
)
Mount mnt_wind_index (
MountPoint = "/wind_index"
BlockDevice = "/dev/vx/dsk/ora_datadg/wind_index"
FSType = vxfs
FsckOpt = "-y"
)
Mount mnt_wind_oraarchive (
MountPoint = "/wind_oraarchive"
BlockDevice = "/dev/vx/dsk/ora_archdg/wind_oraarchive"
FSType = vxfs
FsckOpt = "-y"
)
============================
Disks are auto and not shared
root@lyle# vxdg list
NAME STATE ID
oradg enabled 1145631633.68.lupien
appdg enabled 1135802559.341.lupien
ora_archdg enabled 1145635551.78.lupien
ora_datadg enabled 1145636901.92.lupien
vaultdg enabled 1135801061.265.lupien
In this condition, how do we proceed?
05-08-2015 08:15 AM
Do the steps as above:
1. vxassist -g ora_datadg make wind_index1 91g
2. mkfs -F vxfs /dev/vx/rdsk/ora_datadg/wind_index1
(if it's AIX, use mkfs -V; if it's Linux, use mkfs -t)
3. mkdir /wind_index1
Then create resource in VCS in Java GUI (you can copy and paste an existing resource and change attributes) or from CLI:
hares -add mnt_wind_index1 Mount sg_name
hares -modify mnt_wind_index1 MountPoint /wind_index1
hares -modify mnt_wind_index1 BlockDevice /dev/vx/dsk/ora_datadg/wind_index1
hares -modify mnt_wind_index1 FSType vxfs
hares -modify mnt_wind_index1 FsckOpt %-y
hares -modify mnt_wind_index1 Critical 0
hares -modify mnt_wind_index1 Enabled 1
Then online the resource:
hares -online mnt_wind_index1 -sys system
Confirm it is online:
hares -state mnt_wind_index1
Then make it critical (it was made non-critical earlier in case there were any typos in the resource):
hares -modify mnt_wind_index1 Critical 1
Then link to the diskgroup resource:
hares -link mnt_wind_index1 dg_res_name
Mike
05-08-2015 01:10 PM
I checked and found that the disks of this diskgroup are not located on shared storage.
root@lyle# vxdisk list |grep -i ora
emcpower86s2 auto:simple oradg02 oradg online thin
emcpower87s2 auto:simple oradg01 oradg online thin
emcpower88s2 auto:simple ora_datadg01 ora_datadg online thin
emcpower89s2 auto:simple ora_datadg02 ora_datadg online thin
emcpower90s2 auto:simple ora_datadg07 ora_datadg online thin
emcpower91s2 auto:simple ora_datadg05 ora_datadg online thin
emcpower92s2 auto:simple ora_datadg03 ora_datadg online thin
emcpower93s2 auto:simple ora_datadg04 ora_datadg online thin
emcpower94s2 auto:simple ora_datadg08 ora_datadg online thin
emcpower95s2 auto:simple ora_datadg06 ora_datadg online thin
emcpower96s2 auto:simple ora_archdg01 ora_archdg online thin
============================================
root@lyle# vxdg list ora_datadg
Group: ora_datadg
dgid: 1145636901.92.lupien
import-id: 1024.167
flags:
version: 110
alignment: 8192 (bytes)
ssb: on
autotagging: off
detach-policy: global
dg-fail-policy: invalid
copies: nconfig=default nlog=default
config: seqno=0.2551 permlen=48346 free=48306 templen=19 loglen=7325
config disk emcpower88s2 copy 1 len=48346 state=clean online
config disk emcpower89s2 copy 1 len=48346 state=clean online
config disk emcpower90s2 copy 1 len=48346 state=clean online
config disk emcpower91s2 copy 1 len=48346 disabled
config disk emcpower92s2 copy 1 len=48346 disabled
config disk emcpower93s2 copy 1 len=48346 disabled
config disk emcpower94s2 copy 1 len=48346 state=clean online
config disk emcpower95s2 copy 1 len=48346 state=clean online
log disk emcpower88s2 copy 1 len=7325 disabled
log disk emcpower89s2 copy 1 len=7325
log disk emcpower90s2 copy 1 len=7325
log disk emcpower91s2 copy 1 len=7325
log disk emcpower92s2 copy 1 len=7325
log disk emcpower93s2 copy 1 len=7325 disabled
log disk emcpower94s2 copy 1 len=7325
log disk emcpower95s2 copy 1 len=7325 disabled
It seems that the diskgroup is not shared.
Shall I proceed with the steps suggested above even if the diskgroup is not shared?
Please assist.
05-08-2015 01:13 PM
Which resources will be the parent and the child for the link in this condition?
05-09-2015 04:00 AM
Please reply to the above query as soon as possible, as I have to perform this activity.
05-09-2015 07:58 PM
I humbly request a suggestion on this; I have not received a reply for two days.
Thanks.
05-10-2015 08:15 PM
1. It is normal for these filesystems to be in main.cf rather than in vfstab.
2. For a failover filesystem, the dg should only be imported on one server; the dg shouldn't be in a shared state.
05-11-2015 05:39 AM
When you need urgent assistance over a weekend, it is best to log a Support call with Symantec.
All of us trying to assist here (including Symantec employees) are doing so in our own time.
When you have disk storage in a 'normal' failover cluster (not CVM/CFS) you need to list disks in a diskgroup with this command on both nodes:
vxdisk -o alldgs list
(you can pipe the output to |grep ora_datadg)
With above output, you will see that the disks will be seen by both nodes, but only imported on one node.
The node where the diskgroup and SG are deported/offline will show the diskgroup in brackets in the vxdisk output.
So, in a failover cluster, shared disks are seen by both nodes, but can only be imported (online) on one node.
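As a sketch of what to look for, a deported diskgroup shows up in parentheses in the `vxdisk -o alldgs list` output. The sample lines below are assumptions modeled on the disk listings earlier in this thread, not real output from the server:

```shell
#!/bin/sh
# Classify a `vxdisk -o alldgs list` line: a diskgroup name in
# parentheses means the dg is seen but deported on this node.
# Sample lines are hypothetical, modeled on the listings above.
check() {
  case "$1" in
    *"("*")"*) echo "deported on this node" ;;
    *)         echo "imported on this node" ;;
  esac
}
check "emcpower88s2 auto:simple - (ora_datadg) online thin"
check "emcpower88s2 auto:simple ora_datadg01 ora_datadg online thin"
```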
05-14-2015 03:16 AM
Terminology is a bit confusing with regard to "shared diskgroup", as it can have 2 meanings (see https://www-secure.symantec.com/connect/articles/diskgroup-types-unix-and-windows), so I will give you the 3 possibilities:
Your diskgroup, ora_datadg, is configured in VCS, so unless this service group has only one system in its SystemList, it must fail over to the other node in the cluster, so it is of type 2 above.
The new filesystem you have created is in the same diskgroup, ora_datadg, so this diskgroup is shared in the sense that it can be shared between the nodes and mounted on one node or the other, but it is not CVM-shared, which it does not need to be.
For resource links, I would guess the new mount will have the same parents as the other resources, which I would guess is the Oracle resource.
Mike