Microsoft Storage Spaces vs. RAID Levels

ddrueding

My first assumption was that there was some equivalency here, with "No Resiliency" meaning RAID0 and the other options stepping through RAID5, RAID6, and RAID1. This was further reinforced when I added a 4th drive to my "no resiliency" pool; the OS asked if I wanted to optimize drive usage and proceeded to balance utilization across all the drives over the course of many hours.
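For reference, the rough PowerShell equivalent of those GUI choices looks something like this (pool and disk names are made up for illustration, not what I actually used):

Code:
# "No resiliency" in the GUI corresponds to the Simple layout; Mirror is roughly
# RAID1/10, single/dual Parity roughly RAID5/6.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Media" `
    -ResiliencySettingName Simple -UseMaximumSize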

However, performance of the pool is poor:

[attachment: 4.png — benchmark of the Storage Spaces pool]

I know that a single one of these drives can do much better.


And here is a Windows SoftRAID Stripe of the same drives:
[attachment: 5.png — benchmark of the software RAID stripe]
 

ddrueding

So the upside is the ability to add and remove drives from the array, and to specify arbitrary logical capacities. The downside is crap performance? That is unfortunate. Time to sync off all the changes and go back to a proper RAID.
 

Mercutio

There's huge value to the upside, particularly since it's not hardware dependent and allows me to cobble together massive single-machine setups if I want them (which I do). It's also much easier to deal with than ZFS, which is a massive pain to expand in place and has huge RAM requirements. You can also improve the array with the addition of an SSD tier for caching.

You can do better by playing with SSD caching (using more than one SSD per array helps a lot) and digging into the PowerShell options outside the Server Manager wizard, plus turning on the flag for UPS availability. I see 600MB/sec reads and 300MB/sec writes out of my not-very-well-tuned storage spaces, which are fast enough for my application.
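If it helps, the bits I mean look roughly like this in PowerShell (pool and space names are just examples, and the right numbers depend on your hardware):

Code:
# The "UPS availability" flag: tells Storage Spaces the pool is power-protected,
# which relaxes write-through behavior.
Set-StoragePool -FriendlyName "Pool01" -IsPowerProtected $true

# Creating the space by hand lets you pick the column count and write-back cache
# size instead of taking the wizard defaults (the cache needs SSDs in the pool).
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "FastSpace" `
    -ResiliencySettingName Mirror -NumberOfColumns 2 -WriteCacheSize 4GB -UseMaximumSize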
 

ddrueding

Thanks for that. Those benefits will absolutely be worthwhile on my tier-2 storage (the array with redundancy and more spindles of lower capacity to fend off rebuild errors).
 

CougTek

Here's the CrystalDiskMark report from the data partition I made using Storage Spaces on our file server:

[attachment: Fileserver-CrystalDiskMark.PNG — CrystalDiskMark results]

It's a combined 2x Intel DC S3500 400GB RAID 1 plus 6x Seagate ST1200MM0017 RAID 1 array, with auto-tiering. It isn't bad, but I had hoped for better than that.
 

Mercutio

Storage Spaces don't really start to drag until you start dealing with parity checking. This should not be news to anyone here.
 

ddrueding

I've got some SSDs coming to play with the tiering capabilities. I suspect that with even just 400GB of SSD in front, the performance of the HDD system becomes nearly irrelevant.
 

Mercutio

It's not quite that straightforward, since the SSD(s) aren't just a memory cache sitting on top of the array like you might think. What helps more is looking at 1 vs. 2 vs. 4 drives per space. In practice I've found that two drives (usually 120GB) are a happy medium for an eight-drive array, but that's based on the hardware I have sitting around rather than ideal "I can get whatever I want" ddrueding hardware. Having extra SSD interfaces seems to matter more than the SSD capacity.
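If anyone wants to check their own box, this is one quick way to see what a pool actually has to spread a tier across (purely illustrative; the disk name in the second command is a placeholder):

Code:
# Tier layout follows the number of devices of each media type, so count them:
Get-PhysicalDisk | Group-Object MediaType | Select-Object Name, Count

# If an SSD is misdetected as "Unspecified", it can be re-tagged once it's pooled:
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -MediaType SSD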
 

snowhiker

What are the advantages of a huge RAID array for a -PERSONAL- server vs. a simple:

6TB C: drive
6TB D: drive
6TB E: drive
6TB F: drive

etc...setup?
 

Mercutio

...Plus you'd need the other drive letters for all the things that are porn.

I will say that I don't have 5TB of music, and if I segregate episodic TV, movies and "other", no single one of those categories is more than 8TB, but I'll exceed that number in a few months.
 

snowhiker

Not having to partition your data into 6TB chunks. My movies, videos, and photos folders are all larger than 6TB.

Figured that was the main reason, just asking for confirmation. Plus setting up a RAID array is more geeky fun than just organizing data into manageable chunks to fit a basic C:, D:, E:, F:, etc. setup.

Mercutio said:
...Plus you'd need the other drive letters for all the things that are porn.

I will say that I don't have 5TB of music, and if I segregate episodic TV, movies and "other", no single one of those categories is more than 8TB, but I'll exceed that number in a few months.

/facepalm
How, oh how, could I have forgotten about the porns. LOL.

It would be interesting to see a graph of "hard drive size" vs. "storage requirements" to see if those lines will ever cross. If someone shipped a 20TB* drive tomorrow, would a RAID array still be worth the risk/effort? 100TB*? 500TB*?

Will HDDs/SSDs/"storage devices" sizes ever be large enough to remove the need for Redundant Array of INEXPENSIVE Disks for -HOME- use?



*assuming such a large drive is more reliable than a RAID array.
 

CougTek

Wow, that's an awesome price! The lowest I can get the same system for here is ~US$1,585.

I've only worked with the T140 and T440 series of Lenovo servers, but ddrueding has experience with the RD650, which is the 2U equivalent of the 1U RD550. I hope for your sake that Lenovo hasn't cheaped out on the top panel of the RD550 the way HP did with the ProLiant DL360 Gen 9.
 

Mercutio

Downsides: the RD550 doesn't have an internal NIC. The Ethernet port is for management... and that unit also doesn't have the KVM-over-IP module included. It also uses DisplayPort only for display output, and a little reading online says non-Lenovo cheapies almost never work.

I usually shy away from 1U systems but I think I'm going to have to make an exception.
 

CougTek

You've probably found it, but this PDF lists most of the available options for the RD550, along with the part numbers.

The 4-port gigabit NIC for this server is part number 4XC0F28740 and it costs $90.
 

Handruin

snowhiker said:
Figured that was the main reason, just asking for confirmation. Plus setting up a RAID array is more geeky fun than just organizing data into manageable chunks to fit a basic C:, D:, E:, F:, etc. setup.

/facepalm
How, oh how, could I have forgotten about the porns. LOL.

It would be interesting to see a graph of "hard drive size" vs. "storage requirements" to see if those lines will ever cross. If someone shipped a 20TB* drive tomorrow, would a RAID array still be worth the risk/effort? 100TB*? 500TB*?

Will HDDs/SSDs/"storage devices" sizes ever be large enough to remove the need for Redundant Array of INEXPENSIVE Disks for -HOME- use?

*assuming such a large drive is more reliable than a RAID array.

Contiguous space is one requirement, but there are also performance, availability/durability, and cost per GB. Individual HDD performance is not scaling linearly with capacity, so there are advantages to spreading the I/O among multiple drives and also spreading out the risk by way of mirrors and/or parity devices, at the expense of some space and sometimes performance. If the drive were a single 20TB SSD (or performance equivalent) at a good $/GB ratio, then perhaps three of the requirements would be met, keeping me from needing an array.

When I think back to my first gigabyte drive, I remember being wowed at the size and scared of the potential for data loss. Today I can write a single file much larger than 1GB in relatively little time and not think twice about it. Maybe one day we'll see the same for a 1TB file and not think twice about storing it, but I still feel HDD performance will never be in that realm unless we get some miracle breakthrough. Because it takes so long to manage 1TB of data, it's still a concern that warrants drive redundancy and extra protection from failure while the data is being protected somehow.
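Just to put rough numbers on the "takes so long" point (the sustained throughput here is an assumption; real drives vary):

Code:
# Back-of-the-envelope time to move data at a fixed sustained rate.
$throughputMBps = 150          # assumed sustained HDD transfer rate
foreach ($tb in 1, 8, 20) {
    $hours = ($tb * 1e6) / $throughputMBps / 3600
    "{0,2} TB at {1} MB/s is roughly {2:N1} hours" -f $tb, $throughputMBps, $hours
}
# ~1.9 h for 1 TB, ~14.8 h for 8 TB, ~37 h for 20 TB of straight sequential copying -
# a long window in which the data needs to be protected some other way.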
 

ddrueding

The RD650 is a nice unit. The management module was an extra, and DP-only would be a pain, but I've done without KVM for a few years now (monitor, keyboard, mouse, and tools on a stand, hot-plugged as needed). The 650 does come with an onboard NIC, but unless it is 10GbE it will never be enough these days.
 

snowhiker

Handruin said:
Because it takes so long to manage 1TB of data, it's still a concern that warrants drive redundancy and extra protection from failure while the data is being protected somehow.

Thanks for the reply. Makes sense that once you get into TBs of data, reliability becomes even more of a concern. Losing one drive in an array is bad, but at least you have a chance for recovery vs just your one big drive dying.

Of course RAID isn't backup.

Perhaps it's my total non-use of computers outside web/internet usage that makes me see a RAID array as a lot of effort to set up and maintain vs. the "ease of use" of a C:, D:, E:, F: setup.
 

ddrueding

After the drives are physically installed, it takes about 90 seconds to set up an array. Then, depending on RAID level and controller, you wait several hours for the array to build the first time. Really not hard at all.
 

Mercutio

Interesting that Lenovo claims that their 1U RD550 has a typical operating noise level of just 35 decibels. If that's true, it's witchcraft.
 

ddrueding

Finally got the SSD I'm going to try as the cache for the Storage Space. As mentioned above, it looks like that kind of space can only be set up in a server OS. Downloading Server 2016 TP4 @ 8MB/s right now. Current plan is as follows:

Install the setup files on a USB stick.
Disconnect the current Win10 OS drives.
Connect the USB stick that is the target for the install (a Corsair Voyager GTX 256GB, as discussed elsewhere).
Install Server 2016 TP4 from the USB3 stick to the Voyager.
Configure the Storage Space using the 4x 8TB drives and the SSD as cache drive (rough PowerShell sketch below).
Shut down, disconnect the USB sticks, reconnect the Win10 M.2 SSDs, boot.
Import the Storage Space into Win10.

Hope I can manage the fancy features from inside Win10 and not have to keep the Voyager just for this purpose. If so, I may consider running a server OS as my desktop again. Time to start researching all my programs to make sure they'll work...
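For the configuration step, the rough PowerShell I'm planning to use looks something like this (names and tier sizes are placeholders until I see what the hardware actually reports):

Code:
# Pool the 4x 8TB HDDs plus the SSD, then carve SSD and HDD tiers out of the pool.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "BigPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

$ssd = New-StorageTier -StoragePoolFriendlyName "BigPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "BigPool" -FriendlyName "HDDTier" -MediaType HDD

# Tier sizes here are guesses; Get-StorageTierSupportedSize reports the real limits.
New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 350GB, 28TB `
    -ResiliencySettingName Simple -WriteCacheSize 1GB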
 

Mercutio

I would doubt that number. An RD650 at 10% load and 68°F ambient is double that.

It's not as loud as I thought it would be, but it's definitely not quiet once the fans start. It is by far the least loud 1U server I've worked with. It's also a lower-pitched noise, not nearly as aggravating as the SuperMicro/Intel stuff I usually deal with. The bigger problem right now is that there doesn't seem to be a way for me to get a display out of one. I have a Lenovo-branded DisplayPort-to-VGA adapter and that's not giving me video, nor is the simple expedient of just plugging in a video card.
I guess I'll have to order the KVM modules after all.
 

ddrueding

Not sure on that one. I just plugged in the Dell 24" I have on the stand (direct via DP) and it just worked (after an age of pre-POST whirring and whatnot). IIRC, it may have taken 5+ minutes to get video the first time?
 

Mercutio

ddrueding said:
Not sure on that one. I just plugged in the Dell 24" I have on the stand (direct via DP) and it just worked (after an age of pre-POST whirring and whatnot). IIRC, it may have taken 5+ minutes to get video the first time?

Well, it's been plugged in (with a Lenovo-branded DP-to-VGA adapter; I'm not going to have access to DisplayPort-anything in my datacenter) for the last six hours.
I'm actually a bit more concerned that I don't get video off a PCIe video card, but I can't hazard a guess as to what the firmware thinks it should be doing on initial startup at this point, either.
 

ddrueding

I wouldn't be surprised if some BIOS default only allows storage adapters in the expansion slots. Does the system depend on a CPU-based GPU? Does the CPU you put in have onboard video? I don't think I tried a VGA adapter, but I can check it out Monday.
 

CougTek

He bought a server kit and the CPU was put in by the manufacturer. The CPU (E5-2630 v3) doesn't have integrated graphics, as is the case with all Xeon E5 chips. The graphics controller is an ASPEED AST2400, a very common controller used in server motherboards. There is no reason it shouldn't output video on startup.
 

Mercutio

It appears that the first one I opened is just bad hardware. Machine #2 behaves like a computer that works.
Lenovo also didn't ship me any drive trays. They're $25 apiece. Jerks.
 

CougTek

Mercutio said:
Lenovo also didn't ship me any drive trays. They're $25 apiece. Jerks.
That's standard for HP, Dell, and Lenovo servers. Only Taiwanese server manufacturers (SuperMicro, Asus, Gigabyte, etc.) ship with the drive trays included, at least to my knowledge.
 

Mercutio

I've bought IBM and Dell machines in the past that had the trays, but they were full preconfigured systems rather than barebones. Intel always gives me the trays, too.
 

ddrueding

Mercutio said:
It appears that the first one I opened is just bad hardware. Machine #2 behaves like a computer that works.
Lenovo also didn't ship me any drive trays. They're $25 apiece. Jerks.

This is my experience with Lenovo as well: even if you order a preconfigured system, they only include usable trays for the drives you ordered. The LA company I now order servers through fills the thing with usable trays at no charge (after I asked).
 

ddrueding

My main home workstation is now running Windows Server 2016 Technical Preview 4. I haven't been this reckless in years.

Anyway, while trying to create the tiered storage I'm after, I receive an error every time I try to create the virtual disk:
[attachment: SS Error.png — virtual disk creation error message]

This happens regardless of the volume size, parity settings, or cache settings. Even making the drive half the max capacity, choosing no redundancy at all on either tier, and disabling write cache doesn't make this go away.

Has anyone else run into this?
 

Mercutio

That's a new one, but I've not tried Server 2016 yet. You might find it's easier to work with PowerShell than doing things through the GUI. The Server Manager UI is weirdly half-baked for making Storage Pools.
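If you want to take the wizard out of the picture, something along these lines is worth a try (the pool and tier names here are just examples; use whatever you called yours when you built them):

Code:
# Ask each tier what sizes it can actually support, then create the disk with
# explicit sizes inside those limits rather than letting the wizard guess.
Get-StorageTierSupportedSize -FriendlyName "SSDTier" | Format-List
Get-StorageTierSupportedSize -FriendlyName "HDDTier" | Format-List

New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "TieredSpace" `
    -StorageTiers (Get-StorageTier -FriendlyName "SSDTier"), (Get-StorageTier -FriendlyName "HDDTier") `
    -StorageTierSizes 300GB, 20TB -ResiliencySettingName Simple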
 