How to make VMs suck less

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Hey, I don't know. That's why I started this thread.

OK, seriously, how do people get half-decent disk I/O in a VM? I see published benchmarks where someone claims to get 99.99% of bare metal performance in their dedicated hypervisor (ESXi) VM.

In real life, I think 40% of bare metal is as good as it gets, and I'm finding really dismal results with customers' Windows VMs - the caching makes it look great until you exceed the cache size ...

I'm not confident in the results from MS DiskSpd driven through the CrystalDiskMark GUI. Test sizes of 8-32GB are a must, or you never get past the cache.
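
If you want to take the Crystal defaults out of the picture, DiskSpd can also be run directly; something like this (the path and sizes are placeholders - double-check the switches against your DiskSpd build):

Code:
rem 32GB test file, 60-second run, 64KB random IO, 30% writes, 4 threads x 32 outstanding IOs
rem -Sh turns off software caching and hardware write caching so the cache can't flatter the numbers
diskspd.exe -c32G -d60 -r -b64K -w30 -t4 -o32 -Sh -L D:\testfile.dat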

I notice that if I use 7-Zip (even with just 1 CPU core) to compress something highly compressible (>4:1) on the Fastest setting, performance absolutely collapses on many VMs, whereas even on a 5-year-old spinning-disk PC with VMware Workstation I see 30MB/s. That's up to 5 times faster than what I'm seeing on a Windows VM on brand-new hardware. :(
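
For the record, that 7-Zip test amounts to something like this (paths are placeholders; -mx1 is the Fastest preset and -mmt1 pins it to one thread so it stays storage-bound rather than CPU-bound):

Code:
rem add a large, highly compressible file to an archive on the fastest setting with a single thread
7z.exe a -mx1 -mmt1 D:\test.7z D:\large_compressible_file.dat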

This whole thing started when I realized that extended database operations on a brand new customer server were at least 4 times slower than on my 5-year-old desktop.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Storage performance management is indeed tricky to master under ESXi. If you want to get into the details, can you post a little more about the storage hardware configuration of the system?

- Is this a direct-attached setup via SATA/SAS, or is the storage via Fibre Channel? Is the storage used as NFS or iSCSI?
- Is the storage shared among other ESXi servers to allow for vMotion?
- What version of ESXi are you using? What VMFS version is formatted on the datastore, and what block size are you using on it?
- Have you tried using a raw device mapping to see if there is a performance difference?
- Are the block sizes aligned for the underlying storage? Is the underlying storage advanced format or 512b?
- How many VMs use this LUN/datastore?
- Do you have any esxtop output from a performance run that you can share?
- Are you using thin provisioning of the VMDK on the VM, or thick (eager or non)?
- How many snapshots (if any) are taken of this VM? Snapshots are notorious for causing painful IO issues.
- Is this ESXi server part of a vCenter? If so, are you able to extract the performance logs for the storage system? Can you increase the polling rate to get better resolution of the performance data?
- What's the average latency of the disk and the average latency of the datastore? What are the min and max latency values?
- Is there any memory contention on the ESXi server, such as compressed memory or swapped memory? Is the VM showing any signs of invoking the balloon driver?
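
If you can ever get shell access to the host, esxtop is the quickest way to pull those latency numbers; a rough sketch (verify the flags on your ESXi version):

Code:
# interactive: press d (adapters), u (devices) or v (per-VM disks) and watch the DAVG/KAVG/GAVG latency columns
esxtop
# batch mode: 60 samples at 5-second intervals, dumped to CSV for later analysis
esxtop -b -d 5 -n 60 > esxtop-run.csv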

I'm not yet convinced to say ESXi sucks, but it is a complicated environment to troubleshoot, given that the product is purposely designed to add density to a system. If anything, the painful part of all this is the complexity and the lack of clear detail pointing the system administrator in a proper direction to service the issue. I'm happy to try and help, but if you just wanted to vent about this I certainly understand. My goal isn't to convince you in any way that ESXi does not suck, but I can certainly appreciate the frustrations you're likely going through. I've spent many days over the years dealing with performance issues under ESXi, and the majority of them do come back to storage-related issues and/or memory contention problems. Both of those issues are usually the result of a user not understanding how to properly provision a VM in a complicated environment.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Both of those issues are usually the result of a user not understanding how to properly provision a VM in a complicated environment.

This is what I often see: multiple VMs contending for a single, often under-spec'd storage pool. When we trialed ESXi @ work, we used the Phoronix Test Suite and timed real-world tasks, and did not see a noticeable change in storage performance. Storage performance is very important to us, as we often bottleneck there, even with fast storage. In our environment we still spec the same storage for VMs as for bare metal - each system has a directly attached array using RAID 0, RAID 1, or RAID 10 depending on the system's needs. This provides increased density and improved separation between systems, but ignores the common sales pitch of better hardware utilization using existing hardware.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I often ran into the problem that the storage array provided to me consisted of many 4TB drives. We would often not run single drives for obvious reasons, so instead we would put them into some kind of RAID 6 or RAID 10, which yields a large total capacity. When users see a LUN/datastore that's 16TB+ in total capacity they think, wow, I can fit dozens of VMs into that amount of space. This may be true, but the performance was typically awful. We were almost always better off requesting a storage array that had lots of 600GB 10K RPM drives instead of the 4TB ones, so that we could spread the VMs out across the disks.
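
As a rough back-of-envelope comparison (typical per-drive figures, not measurements): a 7.2K nearline drive manages on the order of 80 random IOPS and a 10K SAS drive roughly 130, so the spindle count dominates:

Code:
 6 x 4TB   7.2K RPM in RAID 6  : ~ 6 x 80  =   ~480 random read IOPS (writes take a further parity penalty)
24 x 600GB 10K RPM  in RAID 10 : ~24 x 130 = ~3,120 random read IOPS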

Much to your point, I provisioned LUNs from our storage arrays just as they would be for bare metal. I used to plan this all out in spreadsheets, accounting for storage controllers, storage bays, bus utilization, and then spindles. I would map the LUNs and dedicate them to specific projects to help balance out the load on EMC VNX arrays. Not all VMs were done this way, but mainly the ones that were sensitive to performance.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Yup, we specified that a client use dedicated 15K SAS or SSD storage for each server. We set up the servers and noticed performance was abysmal during setup. They had ended up providing VMs connected to a SAN that used 7.2K RPM drives (with the drives potentially shared with multiple systems). They said it was fine because they were SAS drives - we refused to run the systems in production until the performance was resolved.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Storage performance management is indeed tricky to master under ESXi. If you want to get into the details, can you post a little more about the storage hardware configuration of the system? [...] Both of those issues are usually the result of a user not understanding how to properly provision a VM in a complicated environment.

While I understand all these issues on paper, the target platforms are hosted by customers, so all we can see is the VM. Groups of customers are configured and supported by certain third-party firms, so we can have a dialogue with them, but we need to be able to quantify what we are unhappy about. Obviously they don't know, or they would not have set up the VMs the way they have.

I don't yet know which ones use ESXi - I was looking for the existence of VMware Tools as a clue. How can you reliably tell if the VM is running the balloon driver?

I'm pretty sure some are running Hyper-V, and these concern me even more because it sounds more like a backyard solution. These seem to have aggressive storage caching that doesn't help much at all on my tests (really, only the writeback caching helps).

The sites run a database up to about 10GB in size. As configured, the database engine mostly relies on Windows to cache the data. A typical VM configuration is 2 cores with 4GB RAM, running Server 2012. Many of the sites are small and probably only have two physical servers (one for failover). I'd like to think our application was the main consumer, but at least one also runs a second database engine in a separate VM. It's not impossible that they're also trying to run Exchange, but I hope not.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Yup, we specified that a client use dedicated 15K SAS or SSD storage for each server. We set up the servers and noticed performance was abysmal during setup. They had ended up providing VMs connected to a SAN that used 7.2K RPM drives (with the drives potentially shared with multiple systems). They said it was fine because they were SAS drives - we refused to run the systems in production until the performance was resolved.

How did you notice performance was abysmal? I need to be able to present empirical results to force people into action.

As I said, I haven't found DiskSpd as configured by Crystal to be very useful so far. Running 7-Zip uses most of one virtual CPU core (if configured that way) while exercising both reads and writes to storage. I'm seeing a 5:1 range in performance with this, which is more the sort of indicator I'm expecting.
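
To turn that into a number I can put in front of the third parties, wrapping the run in PowerShell's Measure-Command gives a repeatable MB/s figure (a sketch only - the paths are placeholders):

Code:
# time a single-threaded, fastest-setting compression of a fixed test file and report MB/s
$src = "D:\bench\sample_4GB.dat"   # placeholder test file
$t = Measure-Command { & "C:\Program Files\7-Zip\7z.exe" a -mx1 -mmt1 D:\bench\out.7z $src }
"{0:N1} MB/s" -f ((Get-Item $src).Length / 1MB / $t.TotalSeconds)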

We have a far more elaborate benchmark that runs on the database, but it's intended as an extreme test for bare metal and would bring any of these systems to their knees.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
The sites run a database up to about 10GB in size. As configured, the database engine mostly relies on Windows to cache the data. A typical VM configuration is 2 cores with 4GB RAM, running Server 2012. Many of the sites are small and probably only have two physical servers (one for failover). I'd like to think our application was the main consumer, but at least one also runs a second database engine in a separate VM. It's not impossible that they're also trying to run Exchange, but I hope not.
On a 2-CPU, 4GB RAM VM? Some people are overly optimistic.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
It's OK if you want to isolate a system with a simple low-resource task. I have a few VMs configured like that, which serve small teams of employees. People don't complain about performance issues.

But running Exchange on this is madness.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Hey, I don't know if they do. I was just trying to think of what else they might be running if they can only afford 2 cores and 4GB for us. :(
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I don't yet know which ones use ESXi - I was looking for the existence of VMware Tools as a clue. How can you reliably tell if the VM is running the balloon driver?

The VMware Tools can be a clear sign, but in their absence you should be able to open msinfo32.exe on Windows and identify the System Manufacturer and System Model. They should identify as VMware, Inc. and VMware 7,1 (or your version). On a Linux system you should be able to query dmidecode. If it's not VMware, I would expect some equivalent naming from Hyper-V.

Code:
sudo dmidecode | grep Manufacturer | grep -i vmware
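
The equivalent check from Windows, without opening msinfo32, is a one-liner (Hyper-V guests normally report Microsoft Corporation / Virtual Machine here, but verify on your build):

Code:
# SMBIOS manufacturer/model as seen by the guest
Get-CimInstance Win32_ComputerSystem | Select-Object Manufacturer, Model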

In order for the balloon driver to be active, the VMware Tools have to be installed. If the tools are not installed, it's unlikely your VM is being pressured (inflated) to release memory by way of the balloon driver. I mention it because it's just one of those things I investigate as part of researching the dozens of potential reasons for performance problems in a VM environment.
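
If the tools are installed, there are a couple of ways to see balloon activity from inside the guest; treat the exact counter and command names below as from memory and verify them on your version:

Code:
# Windows guest: VMware Tools adds perfmon counters under "VM Memory"
Get-Counter '\VM Memory\Memory Ballooned in MB'
# Linux guest: ask the tools daemon directly (reports ballooned memory in MB)
vmware-toolbox-cmd stat balloon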
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
How did you notice performance was abysmal? I need to be able to present empirical results to force people into action.

The first sign was that a file system resize (we deployed an image and then grew the file system to fill the storage), an operation that typically takes ~2 minutes on bare metal, took perhaps 10-20x longer than normal. That pointed to poor storage performance. The second sign was that a database import took many times longer than normal, which could have indicated poor CPU performance or poor storage.

I later used dd to isolate the issue to storage, and was also able to use it to verify whether the issue had been resolved.

Write test:
Code:
time dd if=/dev/zero of=/test.bin bs=1M count=10000 conv=fdatasync

Read test:
Code:
time dd if=/dev/sda4 of=/dev/null bs=1M count=10000

Using tools like dstat or iostat during the dd tests allowed me to verify and quantify storage performance metrics like IOPS & MBps.
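
For reference, this is roughly what I run alongside the dd tests (column names vary a little between sysstat versions):

Code:
# extended stats in MB, refreshed every 2 seconds: r/s and w/s give IOPS, rMB/s and wMB/s give throughput, await gives latency
iostat -xm 2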
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
There's always the good old Iometer benchmark that you can run on the VM itself. If the IOPS are abysmal, then you know the system is overworked or poorly optimized.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
The VMware Tools can be a clear sign, but in their absence you should be able to open msinfo32.exe on Windows and identify the System Manufacturer and System Model. They should identify as VMware, Inc. and VMware 7,1 (or your version). On a Linux system you should be able to query dmidecode. If it's not VMware, I would expect some equivalent naming from Hyper-V.

Thanks, the one I was citing in particular was indeed Hyper-V. I've thrashed the virtualized disk subsystem with DiskSpd (Microsoft's version of Iometer) and noted that the array is obviously RAID 5 (read throughput is 3 times write throughput), but performance is actually not too bad when the host is lightly loaded (Sunday). One of the problems with this one is that Windows file caching is consuming almost all available RAM, an issue possibly triggered by repeatedly backing up the database during the day. It still really sucks at 7-Zip though, which is something that, so far, I cannot explain.
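
In case it helps anyone quantify that, the file cache and available memory can be sampled from inside the guest; a sketch using counter names as I remember them:

Code:
# system file cache size and remaining available memory, sampled every 5 seconds for a minute
Get-Counter '\Memory\Cache Bytes','\Memory\Available MBytes' -SampleInterval 5 -MaxSamples 12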
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
My theory on the 7-Zip result is that some form of CPU throttling or reservation is being applied by the hypervisor. Both VMware ESXi and Hyper-V support throttling the CPU through similar weighted methods of CPU time sharing. If you're hosting these VMs at a 3rd-party provider, it's possible they've done this to keep any one customer from bringing the system to a grinding halt. I don't know how you can verify this from your VM's perspective. You may just have to ask the provider whether they're implementing any kind of weighted priority for CPU throttling and what priority your VM gets. This, in conjunction with a busy storage IO system, could make for a slower system like you're seeing. If your VM had enough RAM, I'd say try creating a RAM disk to remove the underlying storage as a variable and then compare your tests that way, but I don't think with 4GB you could do this without causing more problems than you're solving.
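
If the provider is willing to run one command on the Hyper-V host, the per-VM CPU limit/reserve/weight settings are visible there; a sketch with a placeholder VM name:

Code:
# Maximum and Reserve are percentages of the allocated CPU; RelativeWeight is the share weighting between VMs
Get-VMProcessor -VMName "CustomerVM" | Select-Object Count, Maximum, Reserve, RelativeWeight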
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,357
Location
Gold Coast Hinterland, Australia
One way to make VMs suck less... support USB pass-through...

Note: Hyper-V (even on Server 2019) doesn't support USB pass-through; instead you need to rely on a USB-over-IP solution... :mad:
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,357
Location
Gold Coast Hinterland, Australia
Both ESXi and KVM/QEMU support USB pass-through (as do VMware Workstation and VirtualBox on the desktop), but unfortunately there are times when you don't get to choose the hypervisor being used... (e.g. the customer uses Hyper-V on all their VM servers)...

It took me by surprise, since most (if not all) other hypervisors support USB passthrough (not to mention PCIe passthrough), yet MS's solution doesn't.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
IIRC Hyper-V Enhanced Session Mode + VMConnect is the "correct" way to share a USB device with a Hyper-V guest, and PCIe passthrough involves some PowerShell BS to disable the device on the host so it's available for a guest to manage.
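
For reference, the Discrete Device Assignment dance looks roughly like this (a sketch - the friendly name and VM name are placeholders, and DDA has its own hardware and guest-OS prerequisites):

Code:
# find the PCIe device and its location path, detach it from the host, then hand it to the guest
$dev = Get-PnpDevice -FriendlyName "*My PCIe Device*"
$loc = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -VMName "SomeGuest" -LocationPath $loc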
 