
new filesystem creation

The server is under VxVM control.

We have to create a new filesystem, wind_index1, in the ora_datadg diskgroup.

Task to do: on server sydney, please add a new mount point /wind_index1 of 91 GB.

 

Free space available in disk group ora_datadg is 109 GB.

Below is the existing wind_index filesystem in ora_datadg; we have to create a new one, wind_index1:

 df -h |grep -i wind_index
/dev/vx/dsk/ora_datadg/wind_index    91G    90G   1.4G    99%    /wind_index

 

 

vxdisk list

emcpower86s2 auto:simple     oradg02      oradg        online thin
emcpower87s2 auto:simple     oradg01      oradg        online thin
emcpower88s2 auto:simple     ora_datadg01  ora_datadg   online thin
emcpower89s2 auto:simple     ora_datadg02  ora_datadg   online thin
emcpower90s2 auto:simple     ora_datadg07  ora_datadg   online thin
emcpower91s2 auto:simple     ora_datadg05  ora_datadg   online thin
emcpower92s2 auto:simple     ora_datadg03  ora_datadg   online thin
emcpower93s2 auto:simple     ora_datadg04  ora_datadg   online thin
emcpower94s2 auto:simple     ora_datadg08  ora_datadg   online thin
emcpower95s2 auto:simple     ora_datadg06  ora_datadg   online thin
emcpower96s2 auto:simple     ora_archdg01  ora_archdg   online thin

vxprint output -

 

v  oracle       -            ENABLED  ACTIVE   20971520 SELECT    -        fsgen
pl oracle-01    oracle       ENABLED  ACTIVE   20971520 CONCAT    -        RW
sd oradg01-01   oracle-01    oradg01  0        20971520 0         emcpower87 ENA
v  wind_oracle  -            ENABLED  ACTIVE   62914560 SELECT    -        fsgen
pl wind_oracle-01 wind_oracle ENABLED ACTIVE   62914560 CONCAT    -        RW
sd oradg02-01   wind_oracle-01 oradg02 0       62914560 0         emcpower86 ENA
v  wind_oradmp  -            ENABLED  ACTIVE   41943040 SELECT    -        fsgen
pl wind_oradmp-01 wind_oradmp ENABLED ACTIVE   41943040 CONCAT    -        RW
sd oradg01-02   wind_oradmp-01 oradg01 20971520 41943040 0        emcpower87 ENA
sd oradg01-03   wind_temp-02 oradg01  62914560 20971520 0         emcpower87 ENA
Disk group: ora_archdg
v  wind_oraarchive -         ENABLED  ACTIVE   211812352 SELECT   -        fsgen
pl wind_oraarchive-01 wind_oraarchive ENABLED ACTIVE 211812352 CONCAT -    RW
sd ora_archdg01-01 wind_oraarchive-01 ora_archdg01 0 129941504 0  emcpower96 ENA
sd ora_archdg01-07 wind_oraarchive-01 ora_archdg01 141402112 81870848 129941504 emcpower96 ENA
v  wind_oractl1 -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl wind_oractl1-01 wind_oractl1 ENABLED ACTIVE 1024000  CONCAT    -        RW
sd ora_archdg01-02 wind_oractl1-01 ora_archdg01 129941504 1024000 0 emcpower96 ENA
v  wind_oractl2 -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl wind_oractl2-01 wind_oractl2 ENABLED ACTIVE 1024000  CONCAT    -        RW
sd ora_archdg01-03 wind_oractl2-01 ora_archdg01 130965504 1024000 0 emcpower96 ENA
v  wind_oractl3 -            ENABLED  ACTIVE   1024000  SELECT    -        fsgen
pl wind_oractl3-01 wind_oractl3 ENABLED ACTIVE 1024000  CONCAT    -        RW
sd ora_archdg01-04 wind_oractl3-01 ora_archdg01 131989504 1024000 0 emcpower96 ENA
sd ora_archdg01-05 wind_redo1-01 ora_archdg01 133013504 4194304 0 emcpower96 ENA
sd ora_archdg01-06 wind_redo2-01 ora_archdg01 137207808 4194304 0 emcpower96 ENA
Disk group: ora_datadg
sd ora_datadg01-01 wind_blobs1-01 ora_datadg01 0 213307632 0      emcpower88 ENA
sd ora_datadg03-01 wind_blobs2-01 ora_datadg03 0 213307632 0      emcpower92 ENA
sd ora_datadg02-01 wind_index-01 ora_datadg02 0 190840832 0       emcpower89 ENA
sd ora_datadg02-02 wind_others-01 ora_datadg02 190840832 20971520 0 emcpower89 ENA
sd ora_datadg05-01 wind_sys-01 ora_datadg05 0  10485760 0         emcpower91 ENA
sd ora_datadg06-01 wind_tools-01 ora_datadg06 0 20971520 0        emcpower95 ENA
sd ora_datadg07-01 wind_undo-01 ora_datadg07 0 20971520 0         emcpower90 ENA
sd ora_datadg04-01 wind_user1-01 ora_datadg04 0 146800640 0       emcpower93 ENA
sd ora_datadg05-02 wind_user2-01 ora_datadg05 10485760 146800640 0 emcpower91 ENA
sd ora_datadg06-02 wind_user3-01 ora_datadg06 20971520 146800640 0 emcpower95 ENA
sd ora_datadg07-02 wind_user4-01 ora_datadg07 20971520 146800640 0 emcpower90 ENA
sd ora_datadg08-01 wind_user5-01 ora_datadg08 0 146800640 0       emcpower94 ENA
sd ora_datadg08-02 wind_wtdocs-01 ora_datadg08 146800640 41943040 0 emcpower94 ENA
sd ora_datadg04-02 wind_wtpart-01 ora_datadg04 146800640 62914560 0 emcpower93 ENA
sd ora_datadg05-03 wind_wttrans-01 ora_datadg05 157286400 10485760 0 emcpower91 ENA
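As a sanity check on the vxprint output above: VxVM lengths are in 512-byte sectors, so the existing wind_index-01 subdisk length can be converted to GiB in the shell (2097152 sectors per GiB). This confirms the 91 GB size seen in df:

```shell
# vxprint lengths are 512-byte sectors; 1 GiB = 1024*1024*2 = 2097152 sectors.
sectors=190840832              # length of wind_index-01 from vxprint above
gib=$(( sectors / 2097152 ))   # integer division; exact here
echo "${gib} GiB"              # prints: 91 GiB
```

The same arithmetic in reverse gives the length vxassist will allocate for the new 91 GB volume.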

 

 

Please suggest steps to perform this activity, and kindly let me know if any more info is required.

 

Thanks

 

 

 

Accepted Solution!

Terminology is a bit confusing

Terminology is a bit confusing in regard to "shared diskgroup", as it can have two meanings (see https://www-secure.symantec.com/connect/articles/diskgroup-types-unix-and-windows), so I will give you the three possibilities:

  1. Local diskgroup that can ONLY be seen by one host
  2. Shared diskgroup which can be seen by all nodes in a cluster, but is only imported on one node at a time
  3. CVM shared diskgroup which is imported on all nodes in the cluster at the same time

Your diskgroup, ora_datadg, is configured in VCS. Unless this service group has only one system in its SystemList, it must be able to fail over to the other node in the cluster, so it is of type 2 above.

The new filesystem you have created is in the same diskgroup, ora_datadg, so this diskgroup is shared in the sense that it can be mounted on one node or the other, but it is not CVM-shared, which it does not need to be.

For resource links, I would guess the new mount will have the same parents as the other Mount resources, which is probably the Oracle resource.

Mike


10 Replies

Hi,   Try: 1.  vxassist -g

Hi, 

 Try:

 

1.  vxassist -g ora_datadg  make  wind_index1 91g

2. mkfs -F vxfs  /dev/vx/rdsk/ora_datadg/wind_index1 

  If it's AIX, use mkfs -V vxfs; if it's Linux, use mkfs -t vxfs.

 

3. mkdir /wind_index1

4. mount -F vxfs /dev/vx/dsk/ora_datadg/wind_index1 /wind_index1

  If it's AIX, use -V; if it's Linux, use -t instead of -F.
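The four steps above can be sketched as one dry-run script, assuming a Solaris host (the -F syntax). It only prints each command for review; drop the echo prefixes to run them for real. The diskgroup, volume, size, and mount-point values are taken from this thread:

```shell
#!/bin/sh
# Dry-run sketch of the create-filesystem sequence (Solaris syntax assumed;
# swap -F for -V on AIX or -t on Linux). Prints commands instead of running them.
DG=ora_datadg          # target diskgroup (~109 GB free per the thread)
VOL=wind_index1        # new volume name
SIZE=91g               # requested size
MNT=/wind_index1       # new mount point

echo vxassist -g "$DG" make "$VOL" "$SIZE"
echo mkfs -F vxfs "/dev/vx/rdsk/$DG/$VOL"
echo mkdir "$MNT"
echo mount -F vxfs "/dev/vx/dsk/$DG/$VOL" "$MNT"
```

Reviewing the printed commands before executing them is a cheap guard against typos in the diskgroup or volume name.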

 

 

If you need to add it to VCS, more actions are needed.

 

Regards

 

When I checked the vfstab

When I checked the vfstab configuration, these filesystems are not listed there.

But when I checked main.cf, it shows entries like these:

Mount mnt_wind_blobs2 (
                MountPoint = "/wind_blobs2"
                BlockDevice = "/dev/vx/dsk/ora_datadg/wind_blobs2"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Mount mnt_wind_index (
                MountPoint = "/wind_index"
                BlockDevice = "/dev/vx/dsk/ora_datadg/wind_index"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Mount mnt_wind_oraarchive (
                MountPoint = "/wind_oraarchive"
                BlockDevice = "/dev/vx/dsk/ora_archdg/wind_oraarchive"
                FSType = vxfs
                FsckOpt = "-y"
                )

 

============================

The disks are auto and not shared.

 

root@lyle# vxdg list
NAME         STATE           ID
oradg        enabled              1145631633.68.lupien
appdg        enabled              1135802559.341.lupien
ora_archdg   enabled              1145635551.78.lupien
ora_datadg   enabled              1145636901.92.lupien
vaultdg      enabled              1135801061.265.lupien

 

 

How do I proceed in this situation?

Do steps as above: 1.

Do steps as above:

1.  vxassist -g ora_datadg  make  wind_index1 91g

2. mkfs -F vxfs  /dev/vx/rdsk/ora_datadg/wind_index1 

  If it's AIX, use mkfs -V; if it's Linux, use mkfs -t.

3. mkdir /wind_index1

Then create resource in VCS in Java GUI (you can copy and paste an existing resource and change attributes) or from CLI:

hares -add mnt_wind_index1 Mount sg_name
hares -modify mnt_wind_index1 MountPoint /wind_index1
hares -modify mnt_wind_index1 BlockDevice /dev/vx/dsk/ora_datadg/wind_index1
hares -modify mnt_wind_index1 FSType vxfs
hares -modify mnt_wind_index1 FsckOpt %-y
hares -modify mnt_wind_index1 Critical 0
hares -modify mnt_wind_index1 Enabled 1
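If the commands above succeed, the new resource should appear in main.cf alongside the existing Mount entries, something like the following (values assumed from this thread; Critical stays 0 until it is flipped later):

```
        Mount mnt_wind_index1 (
                MountPoint = "/wind_index1"
                BlockDevice = "/dev/vx/dsk/ora_datadg/wind_index1"
                FSType = vxfs
                FsckOpt = "-y"
                Critical = 0
                )
```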

 

Then online the resource:
 hares -online mnt_wind_index1  -sys system

 

Confirm it is online:

 hares -state mnt_wind_index1 

 

Then make it critical (it was made non-critical earlier in case there were any typos in the resource):

hares -modify mnt_wind_index1  Critical 1

 

Then link it to the diskgroup resource:

hares -link mnt_wind_index1  dg_res_name

 

Mike

I checked and found

I checked and found that the disks of this diskgroup are not located on shared storage.

 

root@lyle# vxdisk list |grep -i ora
emcpower86s2 auto:simple     oradg02      oradg        online thin
emcpower87s2 auto:simple     oradg01      oradg        online thin
emcpower88s2 auto:simple     ora_datadg01  ora_datadg   online thin
emcpower89s2 auto:simple     ora_datadg02  ora_datadg   online thin
emcpower90s2 auto:simple     ora_datadg07  ora_datadg   online thin
emcpower91s2 auto:simple     ora_datadg05  ora_datadg   online thin
emcpower92s2 auto:simple     ora_datadg03  ora_datadg   online thin
emcpower93s2 auto:simple     ora_datadg04  ora_datadg   online thin
emcpower94s2 auto:simple     ora_datadg08  ora_datadg   online thin
emcpower95s2 auto:simple     ora_datadg06  ora_datadg   online thin
emcpower96s2 auto:simple     ora_archdg01  ora_archdg   online thin

 

============================================

 

root@lyle# vxdg list ora_datadg
Group:     ora_datadg
dgid:      1145636901.92.lupien
import-id: 1024.167
flags:
version:   110
alignment: 8192 (bytes)
ssb:            on
autotagging:    off
detach-policy: global
dg-fail-policy: invalid
copies:    nconfig=default nlog=default
config:    seqno=0.2551 permlen=48346 free=48306 templen=19 loglen=7325
config disk emcpower88s2 copy 1 len=48346 state=clean online
config disk emcpower89s2 copy 1 len=48346 state=clean online
config disk emcpower90s2 copy 1 len=48346 state=clean online
config disk emcpower91s2 copy 1 len=48346 disabled
config disk emcpower92s2 copy 1 len=48346 disabled
config disk emcpower93s2 copy 1 len=48346 disabled
config disk emcpower94s2 copy 1 len=48346 state=clean online
config disk emcpower95s2 copy 1 len=48346 state=clean online
log disk emcpower88s2 copy 1 len=7325 disabled
log disk emcpower89s2 copy 1 len=7325
log disk emcpower90s2 copy 1 len=7325
log disk emcpower91s2 copy 1 len=7325
log disk emcpower92s2 copy 1 len=7325
log disk emcpower93s2 copy 1 len=7325 disabled
log disk emcpower94s2 copy 1 len=7325
log disk emcpower95s2 copy 1 len=7325 disabled

It seems that the diskgroup is not shared.

Shall I proceed with the above suggested steps even if the diskgroup is not shared?

Please assist.

 

Who will be the parent and

Who will be the parent and child resources to link in this situation?

Please reply to the above

Please reply to the above query as soon as possible, as I have to perform this activity.

I humbly request you to

I humbly request your suggestions on this. I have not received a reply for two days.

thanks. 

1. It's normal not in vfstab,

1. It's normal that it is not in vfstab but in main.cf.

 

2. For a failover filesystem, the DG should only be imported on one server; the DG shouldn't be in shared state.

 

 

When you need urgent

When you need urgent assistance over a weekend, it is best to log a Support call with Symantec.

All of us trying to assist here (including Symantec employees) are doing so in our own time.

When you have disk storage in a 'normal' failover cluster (not CVM/CFS) you need to list disks in a diskgroup with this command on both nodes:

vxdisk -o alldgs list 
(you can pipe the output to |grep ora_datadg)

With above output, you will see that the disks will be seen by both nodes, but only imported on one node. 
The node where the diskgroup and SG is deported/offline, will show the diskgroup in brackets in vxdisk output.

So, in a failover cluster, shared disks are seen by both nodes, but can only be imported (online) on one node.
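For illustration only (hypothetical output, not captured from this system), the same disk would look like this on the two nodes in `vxdisk -o alldgs list` output. On the deported node the disk-media name is blank and the diskgroup name appears in brackets:

```
# Node where ora_datadg is imported:
emcpower88s2 auto:simple     ora_datadg01  ora_datadg    online thin

# Node where ora_datadg is deported:
emcpower88s2 auto:simple     -             (ora_datadg)  online thin
```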
