Posts

Showing posts from July, 2017

Cheat sheet for Hardware RAID health check - Megaraid, Adaptec, 3wareraid and HPraid.

Hi Folks, in another post, "raid type", I've mentioned how to find whether a server is running Software or Hardware RAID. Here, we can go ahead and check the health of the Hardware RAID array on the server. The first thing we need to detect is the controller on which the RAID is configured; you can easily get that from the post mentioned earlier, but I'm providing it here again for your convenience.

1. Log in to the server as the root user.
2. Execute the following command: /sbin/lspci -vv | grep -i raid
3. This will show the RAID controller running on the server.

The different types of controller I've worked with include:
a. Megaraid, which uses megacli
b. Adaptec, which uses arcconf
c. 3ware raid, which uses tw_cli
d. HPraid, which uses hpacucli

How to check the Megaraid array health status? Megaraid is usually installed in /opt/MegaRAID, and you can execute the following command to verify the health of the array …
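For quick reference, here is a minimal cheat-sheet sketch of the health-check commands for each controller type. The MegaCli64 install path, the Adaptec controller number (1) and the 3ware controller ID (c0) are assumptions based on common defaults, so adjust them to match your server.

----------------------------------------------------------
# Detect the controller first (run as root)
/sbin/lspci -vv | grep -i raid

# Megaraid: logical drive state and physical drive firmware state
# (the install path below is the usual default; adjust if yours differs)
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll | grep -i "firmware state"

# Adaptec: logical device status (controller "1" is assumed)
arcconf getconfig 1 ld

# 3ware raid: unit and drive status (controller "c0" is assumed; "tw_cli show" lists controllers)
tw_cli /c0 show

# HPraid: all arrays and logical drives with their status
hpacucli ctrl all show config
----------------------------------------------------------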

Logical volume vmxxxx_img is used by another device - Error on LVM removal

Hi Folks, I've faced an error while trying to remove an LVM volume from a server. The exact error on executing the lvremove command is "Logical volume vmxxxx_img is used by another device". Feel free to follow the steps below to remove the LVM volume from the server. Note: Please remember to replace <id> with your VMID in the section below.
----------------------------------------------------------
dmsetup ls
dmsetup info -c xen-vm<id>_img
dmsetup remove xen-vm<id>_img
lvremove -f /dev/xen/vm<id>_img
----------------------------------------------------------
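The same steps with comments, as a sketch only: the volume group name "xen" and the device-mapper entry xen-vm<id>_img come from the commands above, while the grep filter and the final lvs check are illustrative additions.

----------------------------------------------------------
# Confirm the stale device-mapper entry that keeps the LV busy
dmsetup ls | grep "vm<id>_img"

# Check the open count and details of the mapping
dmsetup info -c xen-vm<id>_img

# Remove the device-mapper entry, then the logical volume itself
dmsetup remove xen-vm<id>_img
lvremove -f /dev/xen/vm<id>_img

# Verify the logical volume is gone from the "xen" volume group
lvs xen
----------------------------------------------------------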

File system corrupted on the OpenVZ container.

It happens rarely, but when it does, it is hard to handle. OpenVZ VMs are usually stable, but I've encountered situations where an OpenVZ VM was showing file system errors and refusing to start. You can follow the steps below to recover the VM (a commented version follows the list).

1. vzctl stop ctid
2. ploop mount /vz/private/ctid/root.hdd/DiskDescriptor.xml
3. Now do a df -h and you will get the ploop device for the VM.
4. fdisk -l /dev/ploop12345
5. e2fsck /dev/ploop12345p1
6. ploop umount -d /dev/ploop12345
7. vzctl start ctid
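Put together as one commented sequence; this is a sketch only, where ctid stands for the real container ID and /dev/ploop12345 is just an example device name that must be replaced with the one reported on your server.

----------------------------------------------------------
# Stop the container before touching its disk image
vzctl stop ctid

# Attach the container's ploop image
ploop mount /vz/private/ctid/root.hdd/DiskDescriptor.xml

# Find the ploop device the image was attached to (shown as /dev/ploop12345 here)
df -h
fdisk -l /dev/ploop12345

# Check and repair the filesystem on the first partition of the ploop device
e2fsck /dev/ploop12345p1

# Detach the image and start the container again
ploop umount -d /dev/ploop12345
vzctl start ctid
----------------------------------------------------------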