We've been optimizing our kickstart script for unattended Oracle Linux provisioning and picked up some interesting tidbits along the way...
When rolling out Linux to new Dell PowerEdge 2950 boxes that boot from (DMX-3) SAN, the anaconda installer has issues with pre-existing LVM volume groups and physical volume signatures. The zerombr option alone is not enough: apparently some traces remain on the disk/LUN, and anaconda fails with a cryptic error message.
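For reference, the straightforward disk-clearing directives below were not sufficient on their own once a LUN had been used before (the exact clearpart options shown here are an assumption on our part; adjust to your layout):

zerombr
clearpart --all --initlabel --drives=sda,sdb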
The solution is to use the kickstart %pre section to clear the partitions yourself! However, a simple parted run won't do. Here is what we ended up putting in our kickstart file to effectively strip /dev/sda and /dev/sdb (our local disks on the PERC controller and our boot LUN on the QLogic QLE2460, respectively) of all partitions and LVM housekeeping:
%pre
# forcefully remove all primary partitions from sda
# (-s keeps parted from prompting during the unattended install)
parted -s /dev/sda rm 1
parted -s /dev/sda rm 2
parted -s /dev/sda rm 3
parted -s /dev/sda rm 4
# forcefully remove all primary partitions from sdb
parted -s /dev/sdb rm 1
parted -s /dev/sdb rm 2
parted -s /dev/sdb rm 3
parted -s /dev/sdb rm 4
# clear any LVM PV markers or signatures
lvm pvremove -ff /dev/sda1
lvm pvremove -ff /dev/sda2
lvm pvremove -ff /dev/sda3
lvm pvremove -ff /dev/sda4
lvm pvremove -ff /dev/sdb1
lvm pvremove -ff /dev/sdb2
lvm pvremove -ff /dev/sdb3
lvm pvremove -ff /dev/sdb4
A disk can only have four primary partitions, so removing partitions 1 through 4 is all you need.
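If you prefer a loop over the explicit list, a compact equivalent would look like this (just a sketch; it interleaves the pvremove calls with the parted calls, which should make no difference here):

for disk in sda sdb; do
    for part in 1 2 3 4; do
        # drop the partition, then clear any LVM PV label it carried
        parted -s /dev/$disk rm $part
        lvm pvremove -ff /dev/${disk}${part}
    done
done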
Note: you could also zero out the first 64 KB of all involved boot devices (and even data LUNs) with dd, e.g. dd if=/dev/zero of=/dev/sdX bs=512 count=128. However, that way you may write over important data without knowing it. We believe that in such a case an error should happen, so we can investigate what is going on. Perhaps the SAN people moved a LUN to us by accident, or we mistyped a LUN id somewhere and are using someone else's disk...
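For completeness, here is what that zeroing approach would look like in %pre (a sketch only; as explained above, we deliberately avoid it):

# DANGER: wipes the MBR, partition table and any LVM label on each disk
for disk in sda sdb; do
    dd if=/dev/zero of=/dev/$disk bs=512 count=128
done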
Comments
--------------------------------
clearpart --all --drives=sda,sdb
--------------------------------
However, I believe there were issues with signatures being left on the partitions. While the installer didn't mind too much (and it would work for a basic Linux install), Oracle ASM did see the leftovers, decided the disks were not clean, and refused to use them. So we forcefully remove everything explicitly...
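For what it's worth, if you want to check whether a device still carries an LVM label before handing it to ASM, the LVM2 label (the string LABELONE) normally sits in one of the first four 512-byte sectors, so a quick check like this works:

# prints LABELONE if an LVM PV label is still present on the partition
dd if=/dev/sda1 bs=512 count=4 2>/dev/null | strings | grep LABELONE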
But I'm giving it a try anyway! :) thanks again!