
Change IO Scheduler for SSD and HDD server with Software RAID

Hi Guys,

It has been a while...

I recently ran into an IO problem on one of the RAID 1 Linux servers: the IO scheduler was still the default cfq. The scheduler is set per block device, so it applies whether the drive is part of a software RAID or not.

If you are using SSD drives, there are no mechanical parts and therefore no seek latency to optimize for, so the recommended IO scheduler is noop.

If the SSD is the block device /dev/sda (adjust the device name to match your system), then the command below will do it.

----------------------------------------------------------
echo noop > /sys/block/sda/queue/scheduler
----------------------------------------------------------
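To see which schedulers the kernel offers for a disk and which one is currently active, you can read the same sysfs file; the active scheduler is shown in brackets (sda here is just the example device from above).

----------------------------------------------------------
cat /sys/block/sda/queue/scheduler
# typical output: [noop] deadline cfq  -- the bracketed entry is the active scheduler
----------------------------------------------------------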

For spinning HDD drives, the scheduler usually recommended when you hit poor IO performance is `deadline`.

----------------------------------------------------------
echo deadline > /sys/block/sdb/queue/scheduler
----------------------------------------------------------
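Keep in mind that an echo into sysfs only lasts until the next reboot. One way to make the choice persistent is a udev rule keyed on the rotational flag; this is a sketch, the rule file name is my own choice and the sd[a-z] match is an assumption you should adapt to your device naming.

----------------------------------------------------------
# /etc/udev/rules.d/60-io-scheduler.rules (file name is arbitrary)
# non-rotational devices (SSDs) get noop, rotational ones (HDDs) get deadline
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="deadline"
----------------------------------------------------------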

In case of software RAID, just make sure that the drive whose scheduler you are changing is actually a member of the RAID array.

You can see the drives inside the software RAID using the command

----------------------------
cat /proc/mdstat
----------------------------
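If you want to set the scheduler on every member of an array in one go, a small loop over the array's slaves directory works. This is a rough sketch only; md0 and the noop choice are assumptions you should adjust to your array and drive type.

----------------------------------------------------------
# set the scheduler on the parent disk of every md0 member (sketch, adjust md0/noop)
for slave in /sys/block/md0/slaves/*; do
    member=$(basename "$slave")                 # e.g. sda1
    disk=$(lsblk -no pkname "/dev/$member")     # parent disk, e.g. sda (empty for whole-disk members)
    echo noop > "/sys/block/${disk:-$member}/queue/scheduler"
done
----------------------------------------------------------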

Popular posts from this blog

Qmail cheat sheet

http://supportlobby.com/blog/category/technical/qmail-technical/

========================
1) To check the mail queue in Plesk from the command line, you can use the command:

 /var/qmail/bin/qmail-qstat
========================
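For reference, qmail-qstat just prints two counters; the numbers below are only illustrative.

=========================
messages in queue: 142
messages in queue but not yet preprocessed: 0
=========================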

=========================
2) You can examine the queue with qmail-qread.

/var/qmail/bin/qmail-qread
=========================

=========================
3) From the qmail-qread output you get the message id. In the above example, let us assume one of the ids is 524514. Now you can find the files holding the email in /var/qmail/queue with the find command.

# find /var/qmail/queue -iname 524514

/var/qmail/queue/remote/22/524514

/var/qmail/queue/mess/22/524514 (mail headers)

/var/qmail/queue/info/22/524514

4) From the mail header you get the IP address

vi /var/qmail/queue/mess/22/524514


Or

Shortcut for the cool guys
---------------------------------

find /var/qmail/queue -iname queue_id | grep mess | xargs less

=========================
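If all you want from the message is the originating IP, a quick grep over the Received headers of the mess file saves opening it in vi; the message id here is the example one from above.

=========================
grep -i "^Received:" /var/qmail/queue/mess/22/524514 | head
=========================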

=========================
4) If you wish …

Logical volume vmxxxx_img is used by another device - Error on LVM removal

Hi Folks,

I've faced an error while trying to remove an LVM logical volume from a server.

The exact error on executing the lvremove command was "Logical volume vmxxxx_img is used by another device".

You can follow the steps below to remove the logical volume from the server.

Note: Please remember to replace <id> with your VMID in the below section.

----------------------------------------------------------
dmsetup ls
dmsetup info -c xen-vm<id>_img
dmsetup remove xen-vm<id>_img
lvremove -f /dev/xen/vm<id>_img
----------------------------------------------------------
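Before forcing the removal, it can help to confirm what is still holding the device open. This check is a sketch of my own and not part of the original steps; the <id> placeholder is the same as above.

----------------------------------------------------------
dmsetup info -c xen-vm<id>_img       # the "Open" column shows how many holders the device has
lsof /dev/mapper/xen-vm<id>_img      # list any process that still has the device open
----------------------------------------------------------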

File system corrupted on the OpenVZ container.

It happens rarely, but when it does, it is hard to handle. OpenVZ VMs are usually stable, but I've run into situations where an OpenVZ VM was showing file system errors and refusing to start.

You can follow the steps below to recover the VM.


1. vzctl stop ctid
2. ploop mount /vz/private/ctid/root.hdd/DiskDescriptor.xml
3. Note the ploop device for the VM from the output of the mount command (a df -h may also show it), e.g. /dev/ploop12345.
4. fdisk -l /dev/ploop12345
5. e2fsck /dev/ploop12345p1
6. ploop umount -d /dev/ploop12345
7. vzctl start ctid
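If the filesystem is badly corrupted, step 5 can prompt you for every single fix. A hedged variant that forces a full, non-interactive check is below; the device name is the example one from step 4.

----------------------------------------------------------
e2fsck -fy /dev/ploop12345p1     # -f: force a full check, -y: answer yes to all repair prompts
echo "e2fsck exit status: $?"    # 0 = clean, 1 = errors were corrected
----------------------------------------------------------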