Change the IO Scheduler on SSD and HDD Servers with Software RAID

Hi Guys,

It has been a while...

I recently faced an IO problem on one of the RAID 1 Linux servers, where the scheduler was still the default cfq. The scheduler is set per block device, so it makes no difference whether the drive is part of a software RAID or not.
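
You can check which scheduler a device is currently using by reading its queue file in sysfs; the active scheduler is shown in square brackets. For example, for a device named hda:

----------------------------------------------------------
cat /sys/block/hda/queue/scheduler
----------------------------------------------------------

The output lists every scheduler the kernel offers, with the active one in brackets, e.g. noop deadline [cfq].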

If you are using SSD drives, there are no mechanical parts to seek, so the request reordering that cfq does buys you nothing; change the IO scheduler to noop instead.

If the SSD shows up as the device /dev/hda, the command below will do it.

----------------------------------------------------------
echo noop > /sys/block/hda/queue/scheduler
----------------------------------------------------------
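
Note that the redirection has to happen as root. If you are working through sudo, the shell opens the sysfs file before privileges are raised, so pipe through tee instead:

----------------------------------------------------------
echo noop | sudo tee /sys/block/hda/queue/scheduler
----------------------------------------------------------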

For a spinning HDD, the scheduler usually recommended when you run into poor IO performance is `deadline`.

----------------------------------------------------------
echo deadline > /sys/block/hdb/queue/scheduler
----------------------------------------------------------
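
Keep in mind that a scheduler set through sysfs only lasts until the next reboot. One common way to make it persistent, assuming a GRUB-based setup, is to pass elevator=deadline on the kernel line (the kernel line below is only an illustration; use your own):

----------------------------------------------------------
# /boot/grub/grub.conf - append elevator= to the kernel line
kernel /vmlinuz-2.6.18 ro root=/dev/md0 elevator=deadline
----------------------------------------------------------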

In the case of software RAID, just make sure that the drive whose scheduler you are changing is actually inside the RAID; the scheduler is set on the member disks (hda, hdb and so on), not on the md device itself.

You can see which drives are inside the software RAID using the command

----------------------------
cat /proc/mdstat
----------------------------
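
If you want to push the change to every member of an array in one shot, here is a minimal sketch; it assumes the array is md0 and that its members are whole disks rather than partitions:

----------------------------------------------------------
# Set deadline on every member disk of md0
for dev in /sys/block/md0/slaves/*; do
    echo deadline > /sys/block/$(basename $dev)/queue/scheduler
done
----------------------------------------------------------

The slaves directory in sysfs lists the member devices of an md array, which saves you from parsing /proc/mdstat by hand.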
