09-05-2013 07:41 PM
Hello,
I'm trying to install SF 5.1SP1RP2 on a Solaris 10 non-global zone. It fails saying:
System verification did not complete successfully
The following errors were discovered on the systems:
myzone is running in a local zone. installsfrac only supports systems running in the global zone.
I just need VxVM & VxFS, not RAC, but am using this package because it was already downloaded by an SA. Is this not supported? Which version do I need for VxFS and VxVM within a non-global zone?
I've tried manually doing a pkgadd, same result.
Thanks
Prasad.
09-05-2013 10:24 PM
In general, you can install packages directly inside a non-global zone. Alternatively, if you install a package in the global zone, it is automatically installed into the local zones when the package property SUNW_PKG_ALLZONES is set to true.
VxVM, however, never gets installed inside a non-global zone, as it is not required there. You need to create the volume in the global zone and export it to the local zone.
VxFS does not get installed inside a non-global zone with 5.1SP1RP2; with SF 6.0 or later, VxFS gets installed inside running zones automatically.
Please paste the output/logs from the pkgadd command if you see an issue installing in the global zone.
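The "create in the global zone, export to the local zone" flow can be sketched as follows (the disk group mydg, volume zonevol, and zone myzone are hypothetical names; run everything from the global zone):

```shell
# GLOBAL zone: create a 10 GB volume in an existing VxVM disk group
vxassist -g mydg make zonevol 10g

# GLOBAL zone: put a VxFS file system on the volume
mkfs -F vxfs /dev/vx/rdsk/mydg/zonevol

# GLOBAL zone: mount it under the zone's root so it is visible inside
# the zone (assumes the zone root is /zones/myzone)
mkdir -p /zones/myzone/root/data
mount -F vxfs /dev/vx/dsk/mydg/zonevol /zones/myzone/root/data
```

Inside the zone, df -h on /data then shows a vxfs file system even though all VxVM/VxFS software and administration stay in the global zone.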
Regards,
Venkat.
09-06-2013 12:55 AM
If you have the SFRAC media, it should contain installer scripts for several products, not just SF Oracle RAC. You need to run the installer for plain SF (or the generic installer) rather than installsfrac.
As Venkat says,
VxVM & VxFS only run in the global zone, so you need to run the installer in the global zone.
After installation, you need to create diskgroups and volumes in the global zone. Then you normally mount from the global zone into the local zone, unless you need raw disk access, in which case you add the device (/dev/vx/dsk/dg_name/vol_name) to the zone configuration.
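For the raw-device case, the export can be sketched with zonecfg (disk group mydg, volume datavol, and zone myzone are hypothetical names; run from the global zone):

```shell
# GLOBAL zone: export the volume's block and raw device nodes to the zone
zonecfg -z myzone <<'EOF'
add device
set match=/dev/vx/dsk/mydg/datavol
end
add device
set match=/dev/vx/rdsk/mydg/datavol
end
commit
EOF
```

The devices appear under /dev/vx inside the zone after the zone is rebooted.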
Mike
09-06-2013 05:30 AM
Thanks Venkat and Mike.
What I am trying to achieve is a zone that mimics what we have on a physical box. The physical box has a VxFS filesystem (required for the application), so I am trying to install VxFS/VxVM inside the zone. The global has a lot of other zones that I don't want to interfere with, and hence I am not keen on installing SF on the global.
So, given the above requirement, it looks like SF 5.1SP1RP2 will not work for me, and I need to go to SF 6. Is this correct?
Thanks very much,
Prasad
09-06-2013 05:59 AM
I am not aware of 6.0 allowing you to install SF in a non-global zone. Looking at the 6.0 Veritas Storage Foundation and High Availability Solutions Virtualization Guide, I can't see anything that says you can install VRTSvxfs in a non-global zone, and in fact it says:
Administration commands are not supported in non-global zone
All administrative tasks, such as resizing a volume, adding a volume to a volume set, and file system reorganization, are supported only in the global zone. Consequently, administrative commands, such as fsadm, fsvoladm, and vxassist, and administrative ioctls are not supported in the non-global zone by both VxFS and VxVM.
09-06-2013 06:27 AM
Understood Mike. So is there a solution to what I'm trying to achieve?
From within a zone, I need to be able to see a vxfs filesystem (when I do a df -h).
09-07-2013 12:20 PM
Mike,
What I meant was that with SF 6.0 the VRTSvxfs package gets installed inside the zone, where you can mount the VxFS file system directly inside the zone.
For example,
local-zone# mount -F vxfs /dev/vx/dsk/mydg/myvol /mnt1
local-zone# mount |grep mnt1
The above commands will show the VxFS file system mounted inside the zone.
You can find more information in the SFHA Virtualization Guide:
sfhas_virtualization_601_sol.pdf
Please note that VCS does not support mounting VxFS directly inside a zone with 6.0/6.0.1; we are enabling this in the next release (6.1). But you can configure these mount points as part of the zone configuration (/etc/zones/zone_name.xml), where zone boot takes care of mounting the VxFS filesystems.
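Configuring such a mount point so that zone boot handles it can be sketched with zonecfg (disk group mydg, volume myvol, and zone myzone are hypothetical names; the zone config XML is then maintained by zonecfg itself):

```shell
# GLOBAL zone: add a vxfs fs resource to the zone configuration so the
# file system is mounted at /mnt1 inside the zone at zone boot
zonecfg -z myzone <<'EOF'
add fs
set dir=/mnt1
set special=/dev/vx/dsk/mydg/myvol
set raw=/dev/vx/rdsk/mydg/myvol
set type=vxfs
end
commit
EOF
```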
SFRAC is supported inside zones with 6.0.1; please refer to the document below.
Regards,
Venkat
09-14-2013 05:46 AM
Thanks Venkat and Mike.
So, following your advice, installing VxVM/VxFS in the global (LDOM) and exporting to the zone worked. Actually CFS on the globals, between two LDOMs, exported to two zones on the two LDOMs. No issues there.
The only issue is upon a reboot. When either or both of the globals (LDOMs) are rebooted, the zone is brought up first and CFS starts later. Sometimes we find no issues; sometimes CFS complains that the mount point is busy (apparently held by the zone that booted up earlier).
Is there a patch that fixes it? We could most likely put sleeps in the zone startup scripts, but would like to avoid kludges.
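If VCS manages both the zone and the CFS mounts, one common way to avoid this race (rather than sleeps) is to disable the zone's autoboot and make the VCS Zone resource depend on the CFSMount resource, so the mount is online before the zone boots. A sketch, assuming hypothetical resource names zone_res and cfsmount_res in the same service group:

```shell
# GLOBAL zone: let VCS (not Solaris) boot the zone
zonecfg -z myzone 'set autoboot=false'

# open the VCS configuration for writing
haconf -makerw

# make the Zone resource (parent) start only after the
# CFSMount resource (child) is online
hares -link zone_res cfsmount_res

# save and close the configuration
haconf -dump -makero
```

This makes the boot ordering explicit instead of relying on timing.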
Thanks,
Prasad.