Small but Full-fat VMWare install.

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
It looks like I may have won funding to go all the way to "Fault Tolerance" in my VMWare configuration at work. Due to recent streamlining in our servers and applications, all the current servers can run on a single hex-core i7 with some SSDs if necessary. I think this means I should be able to get away with three servers:

ESXi #1
ESXi #2
NAS/iSCSI host

I was also reading about people running the iSCSI host as a VM on one of the machines. Which brings me to my first question:

How can it be fault tolerant if there is only one storage machine? Surely this single point of failure negates much of the tolerance? Is failure of the storage system so much less likely that this is a common scenario? Can the iSCSI host also have fail-over redundancy somehow?

Based on my understanding of the VMWare licensing, fault tolerance requires vSphere Enterprise @ $3594/processor (license plus one year support) and an instance of vCenter. As I am only running two ESXi machines, vCenter Foundation should work (supports up to 3 servers) @ $2140 (license plus one year support). So that is $9328 in licenses, which I can handle. Anyone familiar enough to back this up?
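For what it's worth, the arithmetic checks out, assuming one licensed socket per ESXi host (the hex-core i7 is a single-socket part). A quick sanity check:

```python
# Quick sanity check of the license math above.
# Assumes one CPU socket per ESXi host (a single hex-core i7 each),
# so two vSphere Enterprise licenses plus one vCenter Foundation.
VSPHERE_ENTERPRISE_PER_CPU = 3594   # license + 1 year support, per the quote above
VCENTER_FOUNDATION = 2140           # license + 1 year support
LICENSED_SOCKETS = 2                # one per ESXi host

total = LICENSED_SOCKETS * VSPHERE_ENTERPRISE_PER_CPU + VCENTER_FOUNDATION
print(f"Total license cost: ${total}")   # -> Total license cost: $9328
```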
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,795
Location
I am omnipresent
Plenty of businesses just have one NAS and think that means they have a backup. We had a thread about it earlier in the year. The only thing I can think is that if you're managing your volumes so that you have something like a RAID10 or RAID51 setup + spare disks and a spare PSU, you're probably going to be OK for most purposes.

Also from what I read, performance for something like FreeNAS as a guest OS usually isn't very good, though I'll admit I haven't tried it yet.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
That is my biggest concern, really. At the moment I have fantastic performance with direct-attached SSDs on each ESXi server. I know I won't get that level of performance with even the same disks attached across a network, but I don't know how big a hit it will be. I also don't know the performance of putting a bunch of 250GB Intel SSDs into a ZFS pool and letting 'er rip.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
It looks like I may have won funding to go all the way to "Fault Tolerance" in my VMWare configuration at work. Due to recent streamlining in our servers and applications, all the current servers can run on a single hex-core i7 with some SSDs if necessary. I think this means I should be able to get away with three servers:

ESXi #1
ESXi #2
NAS/iSCSI host

Sounds like a fun project; I hope it ends up meeting the requirements. The Fault Tolerance (FT) feature has some limitations with regard to virtualized resources, so make sure that whatever you plan to virtualize and run in FT mode fits the requirements before you undertake this project. The list of requirements can be seen here and an FAQ here. There is more requirements-and-restrictions information available here. Basically, you'll need to check that your intended guest OS is on the compatibility list along with the physical hardware you're using. You'll also need to make sure the intended amount of RAM (64GB max), CPUs (only 1 vCPU supported), disks (16 max), and many other items fall within the limits listed in the last URL.
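Purely as a rough illustration (the exact numbers depend on the vSphere release, so treat these as the limits quoted above rather than gospel), a pre-flight check for a candidate VM might look something like this:

```python
# Hypothetical pre-flight check against the FT limits mentioned above
# (1 vCPU, 64GB RAM, 16 virtual disks). Verify the real limits against
# the VMware documentation for your vSphere release.
FT_LIMITS = {"vcpus": 1, "ram_gb": 64, "disks": 16}

def ft_eligible(vm):
    """Return a list of reasons a VM config would violate the FT limits."""
    problems = []
    if vm["vcpus"] > FT_LIMITS["vcpus"]:
        problems.append(f"{vm['vcpus']} vCPUs (FT supports only {FT_LIMITS['vcpus']})")
    if vm["ram_gb"] > FT_LIMITS["ram_gb"]:
        problems.append(f"{vm['ram_gb']}GB RAM (max {FT_LIMITS['ram_gb']}GB)")
    if vm["disks"] > FT_LIMITS["disks"]:
        problems.append(f"{vm['disks']} disks (max {FT_LIMITS['disks']})")
    return problems

# Example: a 2-vCPU database VM fails the check immediately.
print(ft_eligible({"vcpus": 2, "ram_gb": 16, "disks": 3}))
```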

I was also reading about people running the iSCSI host as a VM on one of the machines. Which brings me to my first question:

How can it be fault tolerant if there is only one storage machine? Surely this single point of failure negates much of the tolerance? Is failure of the storage system so much less likely that this is a common scenario? Can the iSCSI host also have fail-over redundancy somehow?

This one is tricky but certainly feasible. You can run a NAS device as a VM and share iSCSI targets from it. I haven't confirmed this first-hand, but in theory I don't see why it wouldn't work. However, I wouldn't recommend it.

There are two parts to the answer to your second question. First, it wouldn't be completely fault tolerant in this situation, for the reasons you've suggested. This is why I wouldn't recommend a VM as your iSCSI/NFS storage device.

The second part can be answered in the case of a physical storage array. At the enterprise level (typically the kind of customer who is considering shelling out the money for an enterprise license), many arrays have multiple storage processors (SPs) intended to handle failures. Each SP has its own power supply, fans, etc. In the array I manage at work, there are two storage processors. Each SP has two (oftentimes more) Fibre Channel connections which run to two different FC HBAs inside the ESXi host, making sure to crisscross paths. Something like this:

SPA 1 -> HBA 1 port 1
SPA 2 -> HBA 2 port 1

SPB 1 -> HBA 1 port 2
SPB 2 -> HBA 2 port 2
(normally there might be a couple of FC switches in this mix, but I left them out to remove the complexity in the illustration)

Anyway, in the case of an SP failure (SPA in this example), the multipathing has it covered: the array issues a LUN trespass from SPA to SPB, which then continues IO to the LUN(s) without downtime. ESXi is multipath-aware and you can configure how it manages the HBA ports.
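Just to illustrate the crisscross and the trespass behavior logically (this is only a toy model, not how ESXi's pathing or the array firmware actually works):

```python
# Toy model of the crisscrossed SP -> HBA paths above and what a LUN
# trespass looks like logically. Not ESXi's real multipathing code.
paths = {
    "SPA": ["HBA1-port1", "HBA2-port1"],
    "SPB": ["HBA1-port2", "HBA2-port2"],
}
lun_owner = {"LUN0": "SPA"}          # SPA currently owns LUN0

def active_paths(lun, failed_sps=frozenset()):
    """Return the usable paths to a LUN, trespassing it if its owner failed."""
    owner = lun_owner[lun]
    if owner in failed_sps:
        # The array trespasses the LUN to the surviving SP; IO continues
        # over that SP's paths without downtime.
        owner = next(sp for sp in paths if sp not in failed_sps)
        lun_owner[lun] = owner
    return paths[owner]

print(active_paths("LUN0"))                        # ['HBA1-port1', 'HBA2-port1']
print(active_paths("LUN0", failed_sps={"SPA"}))    # ['HBA1-port2', 'HBA2-port2']
```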

The storage array is also provided with its own enormous battery for the case of a power failure. It would wait for a specified period of time before issuing abort commands, then destage the cached data back onto the disks before powering down.

In larger environments, two storage arrays would be used, connected over redundant, separate paths, with additional mirroring technologies to allow for a complete array failure: the LUNs would be mirrored in full sync between the two separate arrays. That's how the storage side of this can be made fault tolerant, but at a much higher cost. Only the business can decide what downtime costs and what the per-minute rate of an outage would be; that's how some of these configs get justified for our financial partners.


Long story short, you've identified the weakest link in your configuration as your storage device and only you can make the call if it's worth considering an FT configuration with this exception.

You will also want to make sure the networking component is redundant and sufficient for an FT implementation. Gigabit Ethernet is the minimum, and 10Gb is often recommended in order to keep the FT VMs in sync if there will be multiple FT implementations. This will also need to be a dedicated connection; if it cannot keep up, there will be performance issues. There is a nice PDF with lots of diagrams and a day in the life of an FT VM here.
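If it helps with sizing that dedicated link, the rule of thumb I recall from VMware's FT sizing material is roughly the VM's disk reads plus network receives, converted to Mbit/s, with ~20% headroom; double-check the whitepaper, but as a back-of-envelope:

```python
# Back-of-envelope FT logging bandwidth estimate, based on the rule of
# thumb I recall from VMware's FT sizing material (verify against the
# whitepaper before relying on it):
# logging Mbit/s ~= (avg disk reads MB/s * 8 + avg network receives Mbit/s) * 1.2
def ft_logging_mbps(disk_read_mb_s, net_rx_mbps):
    return (disk_read_mb_s * 8 + net_rx_mbps) * 1.2

# Example: a VM reading 20 MB/s from disk and receiving 40 Mbit/s of
# network traffic needs roughly 240 Mbit/s of FT logging bandwidth.
print(ft_logging_mbps(20, 40))   # -> 240.0
```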

Based on my understanding of the VMWare licensing, fault tolerance requires vSphere Enterprise @ $3594/processor (license plus one year support) and an instance of vCenter. As I am only running two ESXi machines, vCenter Foundation should work (supports up to 3 servers) @ $2140 (license plus one year support). So that is $9328 in licenses, which I can handle. Anyone familiar enough to back this up?

The licensing components aren't my strongest area because they're not something we typically have to worry about. I know that's a poor excuse, and it certainly blinds me to the pain that real paying customers have to deal with.

Yes, you are correct that you will need at least the vSphere Enterprise license in order to get the FT feature enabled. Make sure that the amount of vRAM this license offers meets your needs, but also remember that in an FT configuration the mirrored (invisible) copy of the VM running on the second ESXi host will consume the same amount of vRAM against the limit on your license. For the utmost clarity, I would suggest calling VMware with your requirements and getting a quote.
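To make the vRAM point concrete (the per-license entitlement and the VM names below are placeholders I made up for illustration; check the actual vSphere 5 Enterprise entitlement for your licenses):

```python
# Illustration of how FT doubles a VM's vRAM accounting against the
# pooled entitlement. The entitlement value and VM list are placeholders,
# not real figures -- check your actual vSphere 5 Enterprise terms.
VRAM_PER_LICENSE_GB = 64          # placeholder entitlement per CPU license
licenses = 2                      # one per ESXi host/socket

vms = [("crm-app", 8), ("web-frontend", 4), ("file-server", 6)]
ft_protected = {"crm-app"}        # hypothetical: only this VM runs in FT mode

consumed = sum(ram * (2 if name in ft_protected else 1) for name, ram in vms)
pool = licenses * VRAM_PER_LICENSE_GB
print(f"vRAM consumed: {consumed}GB of {pool}GB pooled entitlement")
# The FT secondary copy counts too, so crm-app charges 16GB, not 8GB.
```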

Keep in mind that you can setup and configure this entire environment and use the unrestricted 60-day trial to ensure the configuration meets your needs before buying licenses.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,795
Location
I am omnipresent
One interesting thing about ZFS is that it supports cache drives. You can add SSDs to a storage pool, mark them as cache and that's how ZFS will use them, while maintaining the monstrous storage capacity from whatever spinning disks that happen to be in the same pool.

You can also do link aggregation across two ports that you might already have in your servers, or four ports with a $250 NIC, which could help to alleviate the bottleneck. There's no way you're going to get local SSD performance out of anything else, but having a NAS as an iSCSI target is apparently a common implementation for VMware stuff.
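To put very rough numbers on that bottleneck (wire speed only, ignoring iSCSI/TCP overhead; the SSD figure is a ballpark for a SATA-attached drive of that era, not a measurement):

```python
# Rough throughput ceilings, ignoring iSCSI/TCP overhead. The SSD figure
# is a ballpark for a SATA 3Gb/s SSD, not a measured number.
GBE_MB_S = 1000 / 8          # ~125 MB/s per gigabit link
local_sata_ssd = 250         # MB/s, approximate sequential read

for links in (1, 2, 4):
    print(f"{links}x GbE aggregated: ~{links * GBE_MB_S:.0f} MB/s "
          f"(local SSD ~{local_sata_ssd} MB/s)")
# 1x ~125, 2x ~250, 4x ~500 MB/s -- and link aggregation rarely gives a
# single iSCSI session more than one link's worth, so treat these as best case.
```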

I guess you could keep the SSDs in the servers and just copy regular snapshots to the iSCSI target, but I'm not sure how close that would be to fault tolerance, either.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
One interesting thing about ZFS is that it supports cache drives. You can add SSDs to a storage pool, mark them as cache and that's how ZFS will use them, while maintaining the monstrous storage capacity from whatever spinning disks that happen to be in the same pool.

You can also do link aggregation across two ports that you might already have in your servers, or four ports with a $250 NIC, which could help to alleviate the bottleneck. There's no way you're going to get local SSD performance out of anything else, but having a NAS as an iSCSI target is apparently a common implementation for VMware stuff.

I guess you could keep the SSDs in the servers and just copy regular snapshots to the iSCSI target, but I'm not sure how close that would be to fault tolerance, either.

I agree with the link aggregation when considering an iSCSI or NFS solution, and also with dedicating a switch to your storage traffic (in fact two, if you want the redundancy). I would recommend two dual-port NICs over a single quad-port if you want to eliminate adapter failure as a possible failure point. The largest pain point will be finding an adequate NAS solution that can handle the performance characteristics you're used to today. I don't think FreeNAS or OpenFiler are up to the task of a production environment. I've heard (but not confirmed or tried) that StarWind is a better (paid) solution for an iSCSI implementation.

Not entirely true. An FC-attached EMC Symmetrix VMAX will absolutely compare with, and can exceed, a locally attached SSD. Price is of course the barrier in this comparison.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,795
Location
I am omnipresent
Not entirely true. An FC-attached EMC Symmetrix VMAX will absolutely compare with, and can exceed, a locally attached SSD. Price is of course the barrier in this comparison.

$1000 SSD vs. $50,000 in Fiber Channel infrastructure. Hm...
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
vSphere Storage Appliance

Don't know why I didn't see this earlier, but it looks pretty awesome.

I hadn't seen that until now either. It must be new with the vSphere 5.0 release. It looks like it should offer what you need, and it also mirrors between local disks, which is pretty awesome. There is a 60-day trial you can run some tests with to see if it'll meet your needs. :)
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Thanks, Handruin, for the warnings regarding FT...it looks like High Availability will be the best that my most critical VMs are eligible for, and I don't want to splash out on HCL'd hardware.

That means that if I'm willing to give up Storage vMotion, I could get away with the Essentials Plus Kit, which includes the vSphere Storage Appliance, at ~$11k including support for 3 hosts (making my system even more resilient).
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
$1000 SSD vs. $50,000 in Fiber Channel infrastructure. Hm...

You made the claim: "There's no way you're going to get local SSD performance out of anything else..." I was just disputing that at the expense of cost, which I clearly indicated. It's possible, but you have to pay for it.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,795
Location
I am omnipresent
You made the claim: "There's no way you're going to get local SSD performance out of anything else..." I was just disputing that at the expense of cost, which I clearly indicated. It's possible, but you have to pay for it.

Can a SAN actually match an SSD's low latency?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Thanks, Handruin, for the warnings regarding FT...it looks like High Availability will be the best that my most critical VMs are eligible for, and I don't want to splash out on HCL'd hardware.

That means that if I'm willing to give up Storage vMotion, I could get away with the Essentials Plus Kit, which includes the vSphere Storage Appliance, at ~$11k including support for 3 hosts (making my system even more resilient).

FT is a tricky feature that has limited application for most IT shops. For most of the products we develop and test as virtual appliances, we don't consider FT because of the single-vCPU limitation. I don't know if this is what broke the deal for you, but I suspect it may have been the reason.

The Storage vMotion feature is nice, but unless you need to dynamically migrate your VMs for maintenance or performance reasons, it's not worth the extra cash for you. You can always switch the licensing mode from your purchased license into the 60-day trial to gain temporary access to every feature (including Storage vMotion). :) That is, if you run into a situation where you're desperate to move a live VM.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
I don't think storage vMotion would be a big deal if I was using the Storage Appliance; if the VMs are redundant on the other machines anyway, just shut down the clients, pull the machine, and power them up elsewhere.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
FT is a tricky feature that has limited application for most IT shops. For most of the products we develop and test as virtual appliances, we don't consider FT because of the single-vCPU limitation. I don't know if this is what broke the deal for you, but I suspect it may have been the reason.

Quite. The Oracle 10g and MS SQL databases and their associated application servers would be the most important, and they all require 2-4 vCPUs.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Can a SAN actually match an SSD's low latency?

That's a hard question to answer, because the better question is whether a SAN can match the latency of a direct-attached storage bus. I know what you're asking, but the reason I make this distinction is that in the case of a VMAX you have 1TB of cache to work with before the data even gets destaged to the EFD (SSD). Assuming the VMAX is not under other IO and the full 1TB is at the disposal of the test, it's essentially a faster medium than an SSD. That leaves the SAN as the slowest link, and assuming we are not talking about a SAN connection that is 5km away, a locally connected SAN no more than a few meters away has low enough latency to make the difference a moot point. I think it's roughly 10 nanoseconds/meter of latency. There will likely be cases where 50-100 hosts connected to the SAN begin causing latency pain points, but the VMAX array has eight engines to manage the IO; hopefully all eight won't be on the same SAN switch.
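Putting rough numbers on the cable part of that, using the ~10 ns/meter round-trip figure above (the SSD latency is only a ballpark for a typical SATA drive):

```python
# Cable propagation delay vs. typical device latency, using the rough
# 10 ns/metre round-trip figure from above. SSD latency is a ballpark.
NS_PER_METRE_ROUND_TRIP = 10
ssd_latency_us = 65              # ~65 microseconds, typical-ish SATA SSD read

for metres in (5, 100, 5000):
    cable_us = metres * NS_PER_METRE_ROUND_TRIP / 1000   # ns -> microseconds
    print(f"{metres:>5} m of cable: {cable_us:>7.2f} us "
          f"({cable_us / ssd_latency_us:.1%} of SSD latency)")
# 5 m adds ~0.05 us (negligible); even 5 km only adds ~50 us.
```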
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Downloading the trial of the Essentials Plus Kit, including the vSphere Storage Appliance. Hopefully I'll be able to build some test machines tomorrow and get some testing done. It will also be a good time to see if an i7-2600 with 32GB of RAM (4x8GB) is enough to do the deed; that is much cheaper than going the 1366 route with 6x4GB.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Good luck with it. I'm really curious how the VSA works out for you and those SSDs. I also think there is a vCenter appliance available now so that you no longer need to set up a Windows host with SQL Server. Grab that too and use the trial.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
I also think there is a vCenter appliance available now so that you no longer need to set up a Windows host with SQL Server. Grab that too and use the trial.

Thanks for that. I've now directed my three T-1s to download 10GB overnight...that will keep them busy ;)
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Not entirely true. An FC-attached EMC Symmetrix VMAX will absolutely compare with, and can exceed, a locally attached SSD.

Even ignoring price, this makes some heroic assumptions. Firstly, that the working set is mostly contained within the "up to 1TB" SDRAM cache, and secondly, that the locally attached SSD is not an ultra-low-latency implementation such as Ddrueding's Revo cards, the latest version of which can hit 130,000 IO/s.

That leaves the SAN as the slowest link, and assuming we are not talking about a SAN connection that is 5km away, a locally connected SAN no more than a few meters away has low enough latency to make the difference a moot point. I think it's roughly 10 nanoseconds/meter of latency.

By way of reference, the speed of light translates to ~3 nanoseconds/meter. If we ignore encoding schemes for the sake of argument, gigabit ethernet translates to just 1 nanosecond/bit, 10 gigabit to 0.1 nanosecond. So your latencies are absolutely huge in comparison. ;)
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Sure, I agree. I didn't spell it out, but I do realize the working set in this example had to be contained in the 1TB. Most SSDs aren't 1TB in size, so I felt it was a safe bet. I wasn't comparing this to a PCIe-attached drive.

The sources I read suggested it was ~5 nanoseconds/meter per direction, with the total round trip being ~10. If the comparison is against 10Gb Ethernet, that's also an option to use instead of FC. There are 10Gb FCoE implementations available, but I don't know what the total overhead is.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
I don't use the PCIe SSD cards in ESXi boxes; getting the drivers sorted was a PITA. Perhaps the new version 5 has them built-in, but SATA is fine for this purpose.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Has anyone here successfully installed the vCenter VM? It imports from the .ovf correctly, boots, and then fails with some filesystem errors.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
I'll give it a try and see if I get the same error here. I forgot to ask, what is the storage you're installing it to? Are you using the VSA, or is this on a local drive?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
I haven't set up the VSA yet; I just installed ESXi 5.0 on a machine, connected using the vSphere client downloaded from that machine, and imported the .ovf. It automatically pulled the .vmdk files from the same location and didn't ask me any other questions.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
I got sidetracked with other work. I'm deploying it now using the OVF, just like you. I logged into one of my ESX servers directly (not vCenter, because I assume you don't already have vCenter; otherwise why would you be deploying it?). I picked the thin-disk option and put it on a SAN drive since none of the internal drives are large enough. It's deploying now; I'll see if I get the same disk error. I wonder if one of the VMDK files you downloaded became corrupted or something? I saw that the screen capture asked for an fsck to be run on it.

If this doesn't work, you can install vCenter onto a Windows host using the ISO files. It just means you'll either need to supply your own SQL Server or use the SQL Server Express that's built in.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
After downloading a copy of the vCenter 5 appliance and deploying it using the OVF, I was able to power it on successfully without issue. It's now at the menu screen, but complaining about having no network config (which is to be expected). Perhaps one or both of the VMDK files didn't download cleanly, or there was an issue in the transfer when they were deployed? I know it's a pain, but did you try downloading them again and redeploying?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Win7 x64 is not supported for vCenter...worth a try.

I guess I could build a Server 2008 machine just to hold vCenter and my Veeam backup program...
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
For the record, the Intel Desktop Board DQ67SW will only support 32GB of RAM after a BIOS update to the new version released 11/11/11.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Win7 x64 is not supported for vCenter...worth a try.

I guess I could build a Server 2008 machine just to hold vCenter and my Veeam backup program...

I'm using a Server 2008 VM for my vCenter right now, and a separate Server 2008 instance for my SQL Server 2008. I think it will have to be 64-bit Server 2008. I then joined the vCenter VM to our corp domain and manage the vCenter logins using the NT credentials. I don't know how many people you'll be giving access to your vCenter, but it will help with managing permissions, accounts, and passwords since it's all handled by the domain controller.

How much RAM was supported on that motherboard before the BIOS update?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,795
Location
I am omnipresent
Only SBS2011. It's still Server 2008, just with newer versions of Exchange and SQL Server and a ton of retarded new licensing options.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Perhaps one or both of the VMDK files didn't download cleanly, or there was an issue in the transfer when they were deployed? I know it's a pain, but did you try downloading them again and redeploying?

Right in one. I verified the VMDK files and they are not the correct size. Rather than re-download, I built a Server 2008 machine that now holds vCenter, Veeam backup, and the vSphere Web Client.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,609
Location
Horsens, Denmark
Looking more closely at the VSA, I can see I'm going to have a couple of issues.

1. Due to the internal mirroring feature, I'll need twice (thrice for three hosts?) the drive capacity; see the rough math after this list. I'm not sure whether the "backup" drives can be normal HDDs without impacting performance, whether it expects all the drives to be the same, or whether it supports two tiers of arrays (mirrored SSDs and mirrored HDDs) with different VMs on each.

2. Their best practices recommend that the vCenter VM that is running the VSA not be part of the VSA cluster itself, so I need another machine (not that big a deal). They also state that you cannot resize the storage array after the initial config, so it had better be big enough or you'll need to migrate everything off it and rebuild.
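Regarding #1, a rough capacity estimate, assuming the VSA keeps one full mirror of each datastore on another node and that each node's local disks are themselves in RAID10 (that's my reading of the docs, so verify both assumptions):

```python
# Rough usable-capacity estimate under the assumptions above: local RAID10
# halves each node's raw space, and the cross-node mirror halves it again.
def vsa_usable_tb(nodes, raw_tb_per_node,
                  local_raid_factor=0.5,      # RAID10 inside each node (assumed)
                  replica_copies=2):          # each datastore + one mirror (assumed)
    raw = nodes * raw_tb_per_node
    return raw * local_raid_factor / replica_copies

# Example: 3 nodes with 2TB of SSD each -> 6TB raw, ~1.5TB usable.
print(vsa_usable_tb(3, 2.0))   # -> 1.5
```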
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Those are good questions/concerns. I'm behind you in knowledge of the VSA; it was new to me. Given that the VSA replicates over the network, I suppose you'll be limited to the network speed in terms of replication performance, unless it does it asynchronously. I did some searching but could not find whether all volumes are kept 100% in sync, which would imply that your network latency and bandwidth might dictate the maximum performance regardless of how fast your SSDs are.

The pricing of the VSA looks to be rather expensive. There are several references and comparisons to the HP P4000 VSA as an alternative solution with more features. I've had no experience with it, but figured I'd pass it along in case you aren't happy with the VMware VSA. It looks like HP offers a 60-day trial if you want to investigate it as an alternative. Oddly, it looks like Amazon sells licenses for it at a massive $5,383. Based on some prices I've seen mentioned for the VSA, that might be a deal. There is also another option from a company named StorMagic, which offers a VSA as well. I had no idea this market had several offerings.

I also just read the same thing about not having the VSA and vCenter on the same machine, so that you don't get stuck in a situation where both are down and both are needed to start each other. That would be a bad situation. With regard to the resizing restriction, you could use the temp license to Storage vMotion to something else if you ever needed to resize, but that limitation is kind of annoying. Given that it's a 1.0 release, maybe they're planning those types of features later?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,790
Location
USA
Also, if you notice on your Server 2008 setup (assuming R2) that the mouse is slow to respond through the vCenter client, there is a fix for it here. I had this problem with all my Server 2008 R2 instances and it was annoying until I fixed the video driver.
 