Help choosing a NAS... please

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
Hello all,

I really need some help with the selection of a NAS/SAN.

Here is a short list of the key things I can think of.

* Going to be using VMware
* Need a backup system for the entire NAS/SAN
* Status via iPhone
* SNMP

First of all, I understand that the Gbit link is a bottleneck. I intend to add a 10Gbit uplink to a switch in the future.

But I have other concerns as well, and I really need help selecting a NAS/SAN. These are the ones I have looked at.

I have about 3000 Euros to spend. If there are other recommendations, please let me know. In the future I will buy a 10Gbit card and a switch with a 10Gbit uplink, and use 1Gbit RJ45 to the servers.

QNAP - TS-809U-RP
QNAP - TS-879U-RP
QNAP - TS-EC879U-RP

SYNOLOGY - RS3412RPxs
SYNOLOGY - RS2211RP+

I like the idea of the expansion options on the Synology.

My frustration: I have asked this question before, just not as in-depth. I understand the importance of IOPS, but people come back to me and say "you need to know how many IOPS you need", etc.

Now I do understand that this is important, of course. However, I would like some real results from people who have the above or similar SAN/NAS units.

So now we come to the part when I state what I am going to use it for.

* I want to use VMware with multiple VMs (how many can I have?)
* Going to use it to test software, build systems, etc.
* Back up the ENTIRE system onto an external storage device, so I can recover everything.
* Will have a portion of the system for a production server, and once the server becomes too busy I will move it to a dedicated server in the cloud (rented server)
* Use it to store videos (is it possible to edit from the SAN/NAS directly?)
* Storing all my work, etc. Basic stuff.
* iSCSI for VMware is ESSENTIAL!

Reading reviews on the Internet has confused the hell out of me, because some people say Synology and others QNAP, etc.

But what I really want is someone to say, "Hi. I use a ****** and I have three ESXi 5 servers with over 30 VMs. They are partially busy and it works well; however, I did need to upgrade the RAM", and so on.

So please, can anyone help?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,050
Location
I am omnipresent
Synology gives you a whole bunch of neat addons and toys, but the little four- and six-drive ones I've set up don't really scream "expansion" to me. I've fed ESXi from a FreeNAS box and found the performance kind of weak over plain GbE, but there are better folks here to talk about that issue.

The main thing I want to say, though, is that A NAS IS NOT A BACKUP. IT IS A PLACE TO STORE THINGS THAT DOES NOT ELIMINATE THE NEED FOR ADDITIONAL BACKUP PROVISIONS. You would do well to keep that in mind.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,675
Location
Horsens, Denmark
I did a bunch of research on these and the Synology devices looked nice, but I didn't get one. The performance loss compared to DAS SSDs was too great. I'll also second Merc's main comment. No matter what, having your data at one location isn't a backup. Accidental deletion or virus or corruption or all kinds of other things would still put you in a world of hurt.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,050
Location
I am omnipresent
Synology doesn't have any special magic to make its bits go faster than Gigabit Ethernet line speed. If you're going to be hosting multiple VMs over one, you'll really need to pay attention to the underlying disk configuration. You'll need to look at a RAID1 or RAID10 setup to have any kind of sane level of redundancy and you'll also want to pay attention to how your VM files are distributed on the actual spindles in your storage volume. This might go without saying, but I've certainly heard of people complaining about how slow VM performance is over a NAS without even considering the fact that the NAS's drives are configured for (very slow) RAID5 storage.
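
To put rough numbers on that, here's a back-of-the-envelope sketch. Everything in it is an assumption for illustration (about 80 random IOPS per 7200RPM spindle, and the textbook write penalties of 2 for RAID10, 4 for RAID5, 6 for RAID6), not a benchmark of any of the units above:

    # Rough effective-IOPS estimate for a small RAID array.
    # Assumed figures, not benchmarks: ~80 random IOPS per 7200RPM drive,
    # textbook write penalties RAID10=2, RAID5=4, RAID6=6.
    SPINDLE_IOPS = 80
    WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

    def effective_iops(drives, raid_level, read_fraction):
        # Reads cost one back-end IO each; each write costs `penalty` IOs.
        raw = drives * SPINDLE_IOPS
        penalty = WRITE_PENALTY[raid_level]
        return raw / (read_fraction + (1.0 - read_fraction) * penalty)

    # Eight drives at a 70/30 read/write mix -- a VM-ish workload.
    for level in ("RAID10", "RAID5", "RAID6"):
        print(level, round(effective_iops(8, level, 0.70)))
    # -> RAID10 492, RAID5 337, RAID6 256

Divide whatever comes out by your VM count and you can see why nobody will quote a hard "how many VMs" number.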
 

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
Mercutio said:
> [...] A NAS IS NOT A BACKUP. IT IS A PLACE TO STORE THINGS THAT DOES NOT ELIMINATE THE NEED FOR ADDITIONAL BACKUP PROVISIONS.

Indeed you are correct about the backup; however, I intend to have an extra external device to back up the NAS as well. I assume this is possible. But can you back up everything: not only my files, but also all the system files and settings of the NAS?
 

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
ddrueding said:
> [...] No matter what, having your data at one location isn't a backup.

You are right that DAS will yield better performance. However, having everything in one place does not in itself assure backups, of course; what it does do is let me back up ONE place. And remember that this is a good solution for VMware HA.
 

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
Mercutio said:
> [...] You'll need to look at a RAID1 or RAID10 setup to have any kind of sane level of redundancy, and you'll also want to pay attention to how your VM files are distributed on the actual spindles.


Indeed you are correct. My idea at first was 10 bays x 2TB in RAID 5, 6, or 10. - YTD

But I still need to know, roughly, how many VMs I can have.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,050
Location
I am omnipresent
No one is answering that because there is no hard answer. The answer will most definitely be lower for disks in RAID5 or RAID6 than in RAID10, though.
 

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
Mercutio said:
> No one is answering that because there is no hard answer. [...]

Hi Mercutio,

I beg to differ. We know that the following are good ideas:

  • Having a fiber backbone
  • Having RAID

But there must be some users who have one of the above products running iSCSI for VMware. These people can state how many ESXi hosts they have, how many VMs, and their average workload.

Thanks
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,050
Location
I am omnipresent
Having redundancy is important. Having a specific RAID level is decidedly less so. Parity calculations will kill your IO performance.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,343
Location
Gold Coast Hinterland, Australia
I haven't used any of the above products personally, but I have built a Solaris 10 server to act as a NAS providing iSCSI to several hosts. (The underlying filesystem was ZFS with a single SSD for L2ARC and 4x 10K 2.5" SAS drives in a mirror; mirroring was provided by Solaris, not by a RAID controller.) The LAN was GigE with jumbo frames enabled. All clients were Solaris 10 or Linux (RHEL); most clients had their base systems on local HDDs, with user-hosted data over iSCSI. (They were all on the same LAN segment, so I could have used NFS, but I was playing with iSCSI instead, as the client indicated that remote offices would connect to the server.)

Backup was to an LTO tape drive connected directly to the server. In most cases I was able to max out the GigE connection in throughput, but the bottleneck in the above setup was a lack of RAM in the Sol10 server for caching, which in some cases limited IOPS. (Hardware was an HP ML350 G6 server with dual Xeons and 16GB RAM.) The above server worked fine for the 50-odd hosts that were connected (35 dev workstations and 15 servers: 5 internal services, 5 test servers, 5 production/client replication servers).

Will it work with VMware? Most likely, as iSCSI is iSCSI is iSCSI...

How many hosts will it support? As many as the ESX server can handle. However, what is your expected performance for each guest OS on ESX? Are you happy with high latency? (Not good for editing large files.) The more clients and the larger the load, the longer the wait for data. If your VMs are mostly running 'batch' processing, like software builds, unit testing, etc., then the limiting factor may be your build environment and its high demand on IO. (The system crawled at night during the nightly build process, but that was because multiple builds and automated unit tests were running concurrently over roughly 4GB of source code.) During the day it was fine (close to local-system performance).
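
To put a rough figure on that latency point, here is Little's Law with made-up numbers; the array capability below is an assumption for illustration, not a measurement from the box described above:

    # Little's Law for storage: outstanding IOs = IOPS x latency,
    # so average client latency = queue depth / IOPS.
    ARRAY_IOPS = 600.0  # assumed back-end capability, for illustration

    def avg_latency_ms(outstanding_ios):
        # Average latency once this many IOs are queued against the array.
        return outstanding_ios / ARRAY_IOPS * 1000.0

    for queued in (4, 32, 128):  # quiet day vs. nightly-build pile-up
        print(queued, "outstanding IOs ->", round(avg_latency_ms(queued), 1), "ms")
    # -> 6.7 ms, 53.3 ms, 213.3 ms

Same array, same IOPS capability; the only thing that changes is how much work the clients pile on it, which is exactly the nightly-build effect described above.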

The thing is, the above setup is far more flexible (in that you can add as much storage to the server as you want and have it all exported via iSCSI) than anything 'turn-key', but it will also cost a lot more... (An Oracle Solaris 10/11 support contract is roughly US$1000 per year alone, and the server configuration was roughly AU$2000 at the time, minus the LTO drive.)

Hope this helps... (At least it gives you an alternative. Remember, just about every UNIX system can act as an iSCSI target, so you could build your own NAS appliance; you just need to do the math yourself.)

PS. I am insane, so feel free to ignore this post.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,862
Location
USA
authorleon said:
> Indeed you are correct. My idea at first was 10 bays x 2TB in RAID 5, 6, or 10.
>
> But I still need to know, roughly, how many VMs I can have.

You may find that a single LUN in any of the above RAID configurations is not the best way to separate IO to your VMs. What I mean is, you may be better served by two or more LUNs than by one giant one. Also, mind your ESXi version: you'll need to be on 5.x to support a single LUN greater than 2TB. There are ways to make it work in 4.x, but it's not worth it, IMHO.

Mercutio said:
> No one is answering that because there is no hard answer. The answer will most definitely be lower for disks in RAID5 or RAID6 than in RAID10, though.

Like Mercutio said, there is no hard number anyone can provide. I've found that you'll need to do your own testing, because your VMs will present a different workload pattern than any of mine. Nonetheless, IOPS really is king in a virtualized environment. I've seen numerous times where various 7200RPM SATA drive configurations in our SAN could not keep up with faster 15K RPM FC drive configurations, regardless of RAID type.


authorleon said:
> We know that the following are good ideas:
>
>   • Having a fiber backbone
>   • Having RAID
>
> But there must be some users who have one of the above products running iSCSI for VMware.

The two items on your list can be good ideas if that's what your infrastructure and budget support. I'm leaning more and more toward what Cisco is doing with their UCS setups, where they roll out 10Gb to everything and then support FCoE to give you the best of both worlds, but there is a significant cost difference between that and a basic iSCSI configuration. That approach lends itself to better management of networking resources, since you don't have to run both dedicated FC cabling and network cabling. As for having a RAID-configured LUN, that's only there to balance your availability needs against cost. Only you can figure out your cost of downtime (assuming proper backups are in place).

I do wish there were an easier answer to give you regarding your choice between a Synology and a QNAP device. The difficult answer is that you may need to do a trial in your own environment. See if either of those companies, or the distributor you go through, offers returns. Then bring the device in-house and give it a try.

Also, if you're going iSCSI and uptime is important, you may want to consider a dual- or quad-NIC configuration to aid redundancy. You may also be able to do some NIC teaming to help performance, depending on your network switches.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,675
Location
Horsens, Denmark
So, to sum up: I could configure a couple dozen VMs that would happily run off a single HDD over a GbE connection (DCs, DNS, light web servers, etc.). I could also configure a single VM that would die at the end of a GbE connection (a SQL or Oracle DB with lots of IO).
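
To sketch that sizing logic in code, here is a minimal fit-check with entirely made-up per-VM figures, assuming roughly 110MB/s of usable GbE bandwidth and one 7200RPM spindle's worth of IOPS:

    # Crude "does this VM mix fit?" check against one GbE link and one
    # spindle. All budgets and per-VM figures are assumptions.
    GBE_BUDGET_MBPS = 110.0   # sustained GbE throughput, roughly
    HDD_BUDGET_IOPS = 80.0    # one 7200RPM drive, roughly

    workloads = [
        # (description, VM count, MB/s per VM, IOPS per VM) -- hypothetical
        ("DC / DNS / light web", 24, 0.2, 2),
        ("busy SQL or Oracle DB", 1, 40.0, 300),
    ]

    for name, count, mbps, iops in workloads:
        fits = (count * mbps <= GBE_BUDGET_MBPS
                and count * iops <= HDD_BUDGET_IOPS)
        print(f"{count} x {name}: {count * mbps:.1f} MB/s, "
              f"{count * iops} IOPS -> {'fits' if fits else 'does not fit'}")
    # 24 light VMs fit easily; the single DB busts the IOPS budget first.

The DB blows the IOPS budget long before it touches wire speed, which is the usual failure mode.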
 

authorleon

What is this storage?
Joined
Mar 19, 2012
Messages
6
Handruin said:
> [...] The difficult answer is that you may need to do a trial in your own environment. [...] If you're going iSCSI and uptime is important, you may want to consider a dual- or quad-NIC configuration.

Thank you for the info... this is great. :)

I think that to address the redundancy factor, the Thecus N12000 is the way to go, as it supports HA, which is great. I am not aware of QNAP or Synology supporting this.

Your point about trying before buying is key, and I do appreciate it, so I think it would be fair to evaluate the devices on function and features. QNAP, Synology, and Thecus all support VMware, so now it comes down to hardware specs and software features.

Thecus N12000 - http://www.thecus.com/product.php?PROD_ID=44
Synology RS3412RPxs - http://www.synology.com/products/spec.php?product_name=RS3412xs&lang=enu#p_submenu
Qnap TS-EC1279U-RP - http://www.qnap.com/pro_detail_hardware.asp?p_id=204

So the question is: which one would you pick?

Key aspects I have noticed with the above models:

Thecus - HA & VMware certified for ESXi 4.1 (good CPU and 8GB RAM)
Synology - massive expansion, but the max CPU is only dual-core (10 bays as well)
QNAP - fastest CPU, 4GB RAM, upgradeable

Am I correct in saying that none of the above has a hardware cache?

Thanks
 