Proposed NAS build

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
The HSF came yesterday, and I got it installed last night. It wasn't quite as tight a fit around the CPU socket as the dimensions on the website led me to expect, but there weren't gobs of clearance either. I put the Asus Xonar DG sound card in the motherboard and made sure it works (i.e., it plays audio from Foobar2000). I also tested the spare ServeRAID M5016 controller & supercap I bought to make sure it's not a dud. The Supermicro chassis is supposed to come today. Unfortunately, I ordered my SFF-8087 cables to connect the expander to the backplane from China to save a few bucks (~$30 across the 7 cables, which doesn't seem so smart now), so I won't be able to get everything working until those arrive. In the meantime I need to test the various drives I got and make sure they all at least spin up and function.

On a side note, I was a little surprised to see that the RAM apparently has thermal sensors in it; each DIMM reports a temperature in HWMonitor. I was also surprised to see that no fan speeds were reported.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I decided to run some benchmarks comparing RAID-10 to RAID-6, since there was some discussion on Hardforum about which is faster. I used six of the 6TB Toshiba 7200 RPM enterprise SATA drives connected to my IBM ServeRAID M5016 controller.

RAID-10:

ATTO style benchmark using IOmeter:
[image: Atowxpe.png]


CDM:
[image: D6brh32.png]


RAID-6:

ATTO style benchmark using IOmeter:
[image: I0qeQGE.png]


CDM:
[image: 3YJOu9H.png]


So, as expected, RAID-6 is faster than RAID-10 with the same number of drives.
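
That matches the back-of-the-envelope math (my rough sketch, assuming ideal striping, full-stripe writes, and a per-drive sequential rate of R):

Code:
RAID-10, 6 drives: 3 mirrored pairs  -> seq. read up to ~6R, seq. write ~3R (every block written twice)
RAID-6,  6 drives: 4 data + 2 parity -> seq. read up to ~6R, seq. write ~4R (full-stripe writes)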
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So my "new other (see details)" Supermicro chassis arrived on Friday (listing). I opened it this afternoon. I discovered it's not new, but a RMA return and some of items shown in the listing are missing, like the front panel breakout cable. I sent a somewhat tersely worded message to the seller. Everything in this picture is missing.

[image: s-l1600.jpg]

There's what looks to be hot glue residue on the SFF-8087 connectors:

[image: SFF-8087.jpg]

There are marks on the front panel showing it has been mounted before:

[image: front.jpg]

There's the RMA sticker:

[image: RMA_sticker.jpg]

The missing rubber "fingers" on the cable passthrough:

[image: torn_rubber_fingers.jpg]

*sigh*
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
That stinks. The listing is a bit misleading, but the text does read as though the item could be a factory second, possibly with defects:

"The item may be missing the original packaging, or in the original packaging but not sealed. The item may be a factory second or a new, unused item with defects. See the seller’s listing for full details and description of any imperfections."
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Except the first part says, "...A new, unused item with absolutely no signs of wear..." There are signs of wear. Plus, there's no mention of any imperfections in the listing. I could have gotten a refurbished 846E16 chassis for the same price, and that one has an expander on the backplane.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
So, as expected, RAID-6 is faster than RAID-10 with the same number of drives.
Those look like sequential transfer tests. There's supposed to be a significant hit on random writes because of the parity calculation in RAID-6. I haven't seen many performance comparisons between RAID types recently, and I admit my knowledge of this is based on stuff I read more than a decade ago. Things might have changed with more powerful controller chips.
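
From what I remember, the classic read-modify-write accounting (textbook numbers, not measured on this hardware) goes roughly like this:

Code:
RAID-10: 2 disk IOs per small random write (one write to each mirror)
RAID-5 : 4 disk IOs (read old data, read old parity, write new data, write new parity)
RAID-6 : 6 disk IOs (read old data + both parities, write new data + both parities)

A controller with a big battery- or flash-backed write cache can hide a lot of that, which might explain why newer results look different.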
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So I wanted to try Online Capacity Expansion to see if it really works, how long it takes, how much it degrades the array's performance, etc. Specifically, going from a 6x6TB RAID-6 to an 8x6TB RAID-6. I see no way to do it in the MegaRAID Storage Manager software (v15.03.01.00). Based on some Googling, supposedly you can right-click on the virtual disk in the Logical tab and pick Advanced Options, but that choice doesn't exist for me. I don't know if they changed the MSM software, or if the software thinks the card can't do it, or what exactly is going on.

[image: z8bf58W.png]
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Okay, so it looks like the StorCLI migrate command is what I want to use.

The PDF manual says:
storcli /cx/vx start migrate <type=raidlevel> [option=<add | remove> disk=<e1:s1,e2:s2 ...> ]

has this input example:
storcli /c0/v3 start migrate type=r5 option=add disk=e5:s2,e5:s3

My problem is that all of my drives report the same enclosure and slot number, so that's presumably insufficient (I haven't tried it).
Code:
StorCLI64.exe /c0 /eall /sall show
Drive Information :
=================
--------------------------------------------------------------------------
EID:Slt DID State DG  Size Intf Med SED PI SeSz Model  Sp
--------------------------------------------------------------------------
16:0  17 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  18 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  19 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  20 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  21 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  22 Onln   0 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  23 UGood  - 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
16:0  24 UGood  - 5.456 TB SATA HDD N  N  512B TOSHIBA MG04ACA600E U
--------------------------------------------------------------------------
The help from StorCLI says something slightly different:

storcli /cx/vx start migrate type=raidx [option=add|remove drives=[e:]s|[e:]s-x|[e:]s-x,y] [Force]

disk in the PDF manual is actually drives. I know e is the enclosure ID and s is the slot, but I don't know what s-x or s-x,y denote, or how I can use them to identify the two drives I want to add.
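
My best guess reading the help text (unverified) is that s-x is just a slot range, slots s through x, and s-x,y is that range plus one extra slot y. So if the two new drives had enumerated as, say, slots 6 and 7 of enclosure 16 (hypothetical numbers, since mine all report slot 0), the command would presumably be:

Code:
StorCLI64.exe /c0/v0 start migrate type=r6 option=add drives=16:6-7

That still doesn't help when every drive reports the same enclosure:slot, though.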
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Thanks to someone at STH, I now know how to do it from the GUI. You have to right-click on the Drive Group two levels up in the "tree", not on the virtual drive, then pick Modify Drive Group and go from there. It gave me an estimate of 5 days, 2 hours, 50 minutes for the operation. I ran a quick benchmark of the drive speed during the rebuild, and it sucks!

Benchmark:
[image: TDhiLVd.png]


I can do a full init on a new RAID-6 array in ~12 hours using these 6TB drives. Copying the data back from a backup over 10GbE will take a lot less than 4.5 days.
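
In theory the progress should also be visible from the command line; my understanding (untested on this card, and v0 is a placeholder for the right virtual drive) is:

Code:
StorCLI64.exe /c0/v0 show migrate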
 

snowhiker

Storage Freak Apprentice
Joined
Jul 5, 2007
Messages
1,668
I can do a full init on a new RAID-6 array in ~12 hours using these 6TB drives. Copying the data back from a backup over 10GbE will take a lot less than 4.5 days.

Interesting thread. Way, waaaaaaay, above my skill set.

Just wondering... any advantages, besides the time savings, of backing up, creating an all-new array, and restoring vs. just expanding the array? Less likelihood of data loss? Overall throughput increase? Less data fragmentation?

I admit it would be pretty cool to have a home 10GbE setup working that was actually needed vs. just for shits-n-giggles.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Backing up and restoring is better in all ways except that you need to have 200%+ of your array capacity online at the same time. Also, back when 1GbE was the thing, an array backup and restore would have taken longer.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Backing up and restoring is better in all ways except that you need to have 200%+ of your array capacity online at the same time.
Yes, but you should basically already have that courtesy of having a backup.

Also, back when 1GbE was the thing, an array backup and restore would have taken longer.
Yeah, but there are still ways around that, like putting the two RAID controllers in the same computer and effectively doing the copy locally. That's how I did the initial copy from my main server to the backup server.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Just wondering... any advantages, besides the time savings, of backing up, creating an all-new array, and restoring vs. just expanding the array? Less likelihood of data loss? Overall throughput increase? Less data fragmentation?
Fragmentation isn't an issue in reconstruction as far as I know. Reconstruction is like a rebuild, but worse. It's very hard on the drives. That's why they tell you to make sure you have a backup before you start.

I admit it would be pretty cool to have a home 10GbE setup working that was actually needed vs. just for shits-n-giggles.
I went back and forth on 10GbE. Ultimately, I decided to get two NICs and a twinax SFP+ cable, so it's going to be rather basic. One is an Intel (because it has XP x64 drivers) and the other is a dual-port Mellanox (because it's PCIe 3.0 and has Windows 10 drivers). Just the two servers will be connected together to start.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
My NAS is my onsite backup copy and my additional backup is in the cloud. Restoring from that is a last resort and would take forever.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
My NAS is my onsite backup copy and my additional backup is in the cloud. Restoring from that is a last resort and would take forever.
Waiting 5 days for an OCE and having your server basically out of commission during the process doesn't seem very desirable either, but obviously it's got to be faster than restoring from the cloud.

Based on the less-than-stellar results of my OCE, I'm thinking of starting with more than 8 drives. Then again, I don't have a clue what would cause my storage needs to balloon so dramatically. I have no intention of changing my current usage patterns.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
Waiting 5 days for an OCE and having your server basically out of commission during the process doesn't seem very desirable either, but obviously it's got to be faster than restoring from the cloud.

Based on the less-than-stellar results of my OCE, I'm thinking of starting with more than 8 drives. Then again, I don't have a clue what would cause my storage needs to balloon so dramatically. I have no intention of changing my current usage patterns.

I originally started with 8 x 4TB and rebuilt to 12 x 4TB just to be done. I didn't think I'd need it either, and now I'm 82% full... time to build NAS #2.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
With all the stuff I've got, I can get up to about 18x6TB on the main array. I need 2 ports for another RAID-1 array (2x4TB), and I want to put a single 3.5" HDD on the motherboard controller for something else. Using a reverse SFF-8087-to-SATA cable will consume 4 ports on the backplane, leaving me only 20 connected to the ServeRAID M5016. Hence the 18+2=20.
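
Spelled out as a port budget (assuming the 24-bay backplane):

Code:
24 bays total
- 4 bays fed by the reverse SFF-8087-to-SATA cable (motherboard ports)
= 20 bays on the expander -> M5016
- 2 bays for the RAID-1 pair (2x4TB)
= 18 bays left for the main array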

That will leave me 1 port on the expander to connect to an external JBOD chassis (I think). I'm not 100% sure how that works, since I thought I read that you shouldn't stack expanders, but I don't know exactly what's inside an external JBOD chassis. I guess I could run only a single link from the M5016 card to the internal expander and use the other M5016 link for the external JBOD chassis.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
I'm interested in going the Supermicro chassis route like you did so I can load it up with 6TB drives. I'm not sure yet how I want to do this and haven't had time to look up all the parts.

What are you using the 4TB mirror for?
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I'm interested in going the Supermicro chassis route like you did so I can load it up with 6TB drives. I'm not sure yet how I want to do this and haven't had time to look up all the parts.

What are you using the 4TB mirror for?
The really large array is basically just for media files (audio, video, pictures, etc.). The 4TB mirror is for storing backup data from other systems in the house, drivers, application/program installers, etc. I guess I could have put it all on a single large array, but I currently have them separated onto different arrays in this fashion, and I'm trying not to put everything in one basket. The extra non-RAIDed 3.5" drive is for storing video footage from my security cameras. There will also be a 2.5" 1TB SSD that acts as the raw storage for my OTA HDTV DVR before I edit out the commercials and move the recordings to the large array. Two SSDs in RAID-1 on the mobo serve as the OS/system drive.

It's the same basic separation that I currently have in my main server.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I could put the 4TB mirror on the motherboard controller to give me up to 20 drives on the large array in this enclosure... I just can't see running out and buying another 10-12 drives now at another $2900.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Any additional 6TB drives are on hold due to a development well out of my control.

I did buy a used Norco 4220 on a forum, though. It already has the 120mm fan wall, all Noctua PWM fans (120mm and 80mm), an Intel RES2SV240 SAS expander, and SFF-8087 cabling. That enclosure will house my backup server, which is getting the 8x2TB RAID-6 and 8x1.5TB RAID-6 arrays currently in my main server and backup server, respectively.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Well, I changed my mind and ordered 2 more 6TB drives. I will start with 10 of them in RAID-6.

Still waiting on my SFF-8087 cables from China. They sat in Los Angeles for a week doing who knows what. They've since departed LA, but I have no idea when I'll get them. Hopefully in the next few days.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I finally put the motherboard in the chassis and plugged everything together. The longest 8-pin CPU power cable just barely reached the socket on the motherboard; it's stretched super tight going under part of the CPU heatsink, so I need to get an extension cable for it. It's not quiet. It doesn't scream under PWM control at more moderate duty cycles, but it's by no means quiet. The three 80mm fans cooling the drives up front are running at a fixed speed. The CPU fan and rear fans are on "smart fan", which ramps their PWM based on the CPU temp. I need to find the right speed for the fan wall to keep the drives cool while keeping the noise to a relative minimum. This will require some testing at different speeds.

I have the ten 6TB drives in it, as well as the two 4TB drives. I added the 2nd SSD and got the two SSDs into RAID-1 for the OS. I need to tidy up the innards. My reverse SFF-8087-to-SATA cable is way too long; I want to get a shorter one. I could also use shorter SFF-8087-to-SFF-8087 cables connecting the HP SAS expander to the IBM ServeRAID card. I still have another SSD to add, and I need to install a lot of different software and copy the data over before I can swap this in for its lower-capacity predecessor in the basement.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
I found variable fan speeds more annoying than fixed. I ended up setting the fans to the highest speed that wasn't audible and then added heatsinks and ducting to cool any hot spots.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I found variable fan speeds more annoying than fixed. I ended up setting the fans to the highest speed that wasn't audible and then added heatsinks and ducting to cool any hot spots.
There's no way this Supermicro chassis will ever be inaudible. It's going in the basement, so it doesn't have to be silent, just not scream.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I put the Intel X520-DA1 card in the XP x64 system, and the Mellanox ConnectX-3 is in the new "server"/NAS. I used a 3m Cisco SFP+ DAC to connect them. I've started the data copy, and it's not going as fast as I'd hoped. XP x64's SMB and TCP/IP stacks don't seem up to handling 10GbE all that well. I guess that shouldn't be all that surprising, considering it wasn't so hot at 1GbE either.

I did some quick troubleshooting with iperf3 to see what was going on and to rule out the RAID array in the old system. I enabled jumbo frames on both sides and made a few adjustments in the Intel drivers, but the best I can do is about 5Gbps with iperf3. The CPU load on the E5200 from iperf3 is near 50%; the CPU load on the Xeon E3-1231 v3 was under 4%. The Mellanox card has more hardware acceleration (but no XP x64 or Server 2003 x64 drivers), not to mention the Haswell-based Xeon is just a bit more powerful.
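
For reference, the iperf3 runs were nothing fancy, roughly like this (the address is a placeholder; -P runs parallel streams, which can help when a single stream is CPU-bound):

Code:
rem on the NAS (server side)
iperf3 -s

rem on the XP x64 box (client side): 30-second test, 4 parallel streams
iperf3 -c 192.168.1.50 -t 30 -P 4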

Ultimately, I'm seeing file copy transfer rates bouncing between 200-300MB/sec, which is about 33% better than what I was getting before the tweaks. However, I was hoping for something closer to 500MB/sec. Oh well... at least I only have to do this once. My understanding is that Windows 10 systems can move >1GB/sec over SMB 3.x on 10GbE if the drives on each end can read and write that fast.

Edit: I'm now seeing 300-400MB/sec pretty consistently. File fragmentation may be affecting the transfer rates some.
 
Last edited:

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So a Norco 4U chassis with 12 HDDs in it is quite heavy. I can only imagine what one with 24 HDDs is like. It draws around 200W just sitting there. If I'm recalling correctly, the power draw got up a little over 400W when it powers on (according to my P3 Kill A Watt). I think I have the RAID card set to spin up 2 drives every 4 seconds. I need to double-check that.
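
If I'm remembering the StorCLI property names right (unverified on this card), the staggered spin-up settings can be checked without rebooting into the card BIOS:

Code:
StorCLI64.exe /c0 show spinupdrivecount
StorCLI64.exe /c0 show spinupdelay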

I have it connected to my cheap IOGear PS/2 keyboard/mouse KVM. The system puts the monitor to sleep after 10 minutes, and I can't wake it up with the PS/2 keyboard or mouse. Not even input via the iKVM will wake it up. The machine is not locked up, and I can remote desktop into it. It worked fine with a monitor and USB mouse and keyboard plugged directly into it. I don't know if it's a KVM problem or a PS/2 problem, though I'm puzzled why the iKVM is also affected; the iKVM could take it out of monitor sleep previously. I'm thinking of disabling monitor sleep and turning the monitor off with the power switch on the front to avoid the issue.

I tried to use FastCopy to copy my data to the system but threw in the towel on that. It had all sorts of problems and errors about not being able to read certain files, and so on. Some of that seemed to be permission issues; some I couldn't figure out at all. Maybe the path names were too long. I ended up just using File Explorer to copy the data.

I'm also trying to chase down why the EPG data for nextPVR didn't update the guide information overnight. I think it may have been a permissions/user account issue. I moved the executable to another folder and set it to create a log of sorts, so hopefully I'll find it worked tomorrow morning, or at least have some additional information about why it didn't.
 

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,908
Location
Somewhere in time.
I had weird issues with FastCopy and TeraCopy; I decided a slower but surer method (Windows Explorer) was preferable to unknown/botched results.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I've used FastCopy before and it has worked okay. Nothing like my experience this time. It made a complete mess of things.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I just wanted to post that this Server/NAS rocks! :D

Editing the commercials out of my OTA HDTV recordings is awesome. It reads and opens a recording in maybe 15 seconds tops from the 1TB Samsung 850 EVO. I had an SSD for my recordings in the old server, but it wasn't like this; the old system was 3Gbps SATA, this one is 6Gbps. Windows 10's RDP is also much better than XP x64's. I'm at work using it remotely, and it keeps up with sending me the decoded video much faster and smoother (for figuring out where to cut the commercials). It looks to have much more sophisticated compression algorithms that use much less bandwidth and achieve a higher frame rate. And when I save the output, it writes to the large RAID-6 array in under 10 seconds, entirely out of cache, without even accessing the SSD.

I'm so geeked! :cool:
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So I don't understand the logic behind it, but the IBM ServeRAID cards I have for some reason default to running weekly patrol reads of the drives as well as weekly consistency checks, at the same time. The consistency check supersedes and cancels the patrol read. I'm not really sure why consistency checks are needed weekly; they take a really long time, like 19 hours for the large RAID-6 array and 11.5 hours for the RAID-1 array. And starting at 4 AM on Saturday morning seemed like a good idea to someone. Maybe that's a good idea for an office, where weekend use would be much lower, but in my case the box is basically idle during the day on weekdays, with a lot more potential for daytime use on the weekend.

I didn't really notice any performance hit from either. I didn't notice any slowdowns, and I only knew it was happening when I peeked at the console after noticing the activity lights on solid while going to the basement for something. I adjusted the schedule nevertheless. Now they both start at midnight on weekdays instead of 4 AM on the weekend, and not on the same day. The consistency checks are now monthly instead of weekly, too. I've also changed the task rates a little. I'll have to see if they finish outside of my typical use times for the data on the arrays.
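
The same schedules should be scriptable through StorCLI too; if I'm reading the reference correctly (syntax unverified on this card, and the start date is just an example), something like:

Code:
rem show the current schedules
StorCLI64.exe /c0 show cc
StorCLI64.exe /c0 show patrolread

rem concurrent consistency check every 672 hours (~monthly), starting at midnight
StorCLI64.exe /c0 set cc=conc delay=672 starttime=2016/01/04 00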
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I'm trying to find a way to build an affordable yet fast and reliable high-capacity NAS for my current employer. The easy button is to go with a QNAP rackmount NAS like the TVS-871U-RP, but that's still quite expensive, so I'm trying to find an alternative.

I thought about an HPE ProLiant ML30 Gen9 (#830893-001) with a second 4x LFF bay. Even with a second 460W CS PSU for redundancy, it's still some $800 CDN cheaper than the QNAP NAS. The problem is the cost of HP drives. I know I can use 3rd-party drives in an HP server, but I lose the monitoring, and I don't know how I'll know when to replace a failing drive.

Also, QNAP's OS is quite mature. I don't know if I can configure something else, preferably free, that will perform just as well and let me configure replication/mirroring to a ROBO site.

Lastly, I'd like to use the newly available Seagate ST10000NM0016 10TB NAS drives. They aren't available from HP yet, and even if they were, they would cost at least 4 times the price of the standard Seagate model.

Thoughts?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
As you've said, there are great advantages to using vendor drives. IMHO, they are not worth it in any kind of storage rig. If all you are getting is half a dozen SSDs, then pay the crazy cost and don't worry. For a storage appliance (10+ drives) the cost gets crazy, and not getting 10TB drives at this point is a sacrifice not worth making. Depending on your disk management solution, it should at least tell you that a drive is failing and give its serial number. Some advance planning and labeling while racking the drives will make this information useful in a hot-swap environment.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So, one of the ten Toshiba 6TB enterprise drives was marked as failed by the RAID controller today while data was being written to it.

In the process of replacing the drive with my spare, I discovered a problem with my NAS build: there is no way to determine which drive has failed. The locate feature doesn't work with the controller/expander/chassis combo I have, and the GUI doesn't distinguish port number or location either; every drive shows exactly the same thing. I thought I was going to have to record the SNs of all the working drives and then find the one that wasn't on the list. However, after a power-off and reboot, the failed drive reported its serial number in the GUI. So I wrote that down, powered off, found the drive, and replaced it. The rebuild is in progress, and the GUI is reporting about a 10-hour rebuild time.

I've ordered another identical Toshiba 6TB drive online so I have a spare again. I will also attempt to diagnose the failed drive on a standard SATA controller to see exactly what state it's in and whether I can RMA it or not.
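
For anyone else stuck identifying a dead drive, these are the StorCLI equivalents I would have tried; they presumably run into the same limitation through this expander/backplane combo, and e16/s5 is just a placeholder address:

Code:
rem dump detailed drive info, including serial numbers, without powering off
StorCLI64.exe /c0 /eall /sall show all

rem blink the slot LED for a specific enclosure:slot, then turn it off
StorCLI64.exe /c0/e16/s5 start locate
StorCLI64.exe /c0/e16/s5 stop locate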
 