
Massive I/O workload causes file system to be remounted read-only in VMware guest

I have been getting some weird errors in some Linux VMs running on VMware ESX 3.0.x (and perhaps ESX 3.5, as we're migrating to that). The symptoms are always the same: users or some maintenance task start reporting strange errors, and when you check the VM's console you discover that the file system has suddenly been remounted read-only.

My dear friend MrVanes discovered there is a known issue with the VMware LSI driver mptscsi in VMware guests running Debian.
"...one could get SCSI timeouts when there is massive workload on the host system. Some kernel versions these will get the file systems remounted read-only, which probably makes sense for real hardware, but doesn't make sense for emulated hardware. Instead it should just wait a bit longer."

A savvy user called TuxyTurvy has thoroughly investigated this issue. He was running Red Hat RHEL 4 and 5 on Dell 1850 servers with an el-cheapo Clariion AX150i SAN. When I/O load gets high under VMware, SCSI timeouts and busy conditions are far more likely to occur than normal because of storage contention.

TuxyTurvy has a patch for RHEL 4U5 and 5. VMware would support him because of the HCL, and Red Hat said they would go with whatever VMware and LSI decided. I'm not sure whether patches are available by now (his post is from October 2006) or whether VMware has since updated the LSI driver shipped with VMware Tools, but I'll sure look into this and stress test some VMs! The post also has useful comments below it that show the problem is not isolated.
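Until a fixed driver turns up, a workaround in line with the "just wait a bit longer" idea is to give the virtual SCSI disks a longer command timeout, so a burst of storage contention doesn't escalate into an I/O error. A rough sketch, assuming the disk is sda (repeat for every disk, and make it persistent with a boot script or udev rule):

# Current SCSI command timeout in seconds (often 30 or 60 by default)
cat /sys/block/sda/device/timeout
# Raise it so brief contention doesn't turn into an I/O error
echo 180 > /sys/block/sda/device/timeout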

Update:
Check your logs for messages like these:
"Nov 28 04:43:20 webhost kernel: [157267.731622] mptscsih: ioc0: task abort: SUCCESS (sc=f6840000)
Nov 28 04:43:20 webhost kernel: [157267.731600] mptscsih: ioc0: attempting task abort! (sc=f6840000)
Nov 28 04:43:20 webhost kernel: [157267.731622] mptscsih: ioc0: task abort: SUCCESS (sc=f6840000)
"
