Proposed NAS build

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,181
Location
Michigan
#41
The first two Toshiba 6TB drives are here. Wiredzone packs them very nicely. They're plugged in and working without issue. I made a RAID-1 array of the two drives (connected through the expander) and am running my Super ATTO IOmeter test pattern right now.
 

Stereodude

#42
Better than I was expecting for a two-drive RAID-1 array. Both drives were connected through the HP SAS expander (limiting them to 3Gbps), with Direct IO, disk cache disabled, and write-back and read-ahead cache enabled on the controller.

6TB Toshiba RAID-1 SA.png

I wasn't anticipating essentially flat performance from the drives all the way down to 0.5k block sizes. I don't know if the M5016 SAS RAID controller is responsible by grouping reads and writes, if the 512-byte sector emulation on the drives gives a bump at very small block sizes (which would seem counter-intuitive to me), or if this is just how modern enterprise drives perform. The runs were 5 minutes per block size with a 30-second ramp, so the 1GB cache on the SAS RAID controller can't be responsible for the performance, since it was exhausted in the first few seconds of the ramp. I want to test a single drive on the Intel chipset controller later and see how it performs as a comparison point.
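For what it's worth, a sweep like this (30-second ramp, 5-minute measured run per block size) can also be scripted with fio. The sketch below is a dry run that only prints the invocations, and the device path /dev/sdX is a placeholder:

```shell
# Print one fio invocation per block size: 30 s ramp, 300 s measured run.
# /dev/sdX is a placeholder; drop "echo" to execute for real.
for bs in 512 1k 2k 4k 8k 16k 32k 64k 128k 256k; do
  echo fio --name=sweep-bs-$bs --filename=/dev/sdX --bs=$bs --rw=read \
       --direct=1 --ramp_time=30 --runtime=300 --time_based=1
done
```

Sequential reads are shown here; the same loop works with randread or randwrite for the other access patterns.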

But I'm confident now that the drives are compatible with the SAS controller and SAS expander, so I've gone ahead and ordered another 7 of the 6TB Toshiba drives to make an 8-drive RAID-6 array with a spare. :mrgrn:
 

Stereodude

#43
An actual ATTO run.

Atto_6TB_Toshiba_R1.png

Of course the results are totally bogus because the data size is smaller than the amount of cache on the RAID card, but they still put a smile on my face. :mrgrn:
 

Stereodude

#45
I've been using FIO for storage performance testing. You might get more credible results with it when it's tuned properly. There are Windows builds here:
http://www.bluestop.org/fio/
More credible than IOmeter? Is there something wrong with IOmeter? I've never heard of FIO. Unfortunately, the website is pretty much devoid of any information about it. I looked at the help file, it looks versatile, but I'm not sure where it fits compared to IOmeter.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
12,852
Location
USA
#46
More credible than IOmeter? Is there something wrong with IOmeter? I've never heard of FIO. Unfortunately, the website is pretty much devoid of any information about it. I looked at the help file, it looks versatile, but I'm not sure where it fits compared to IOmeter.
You last posted a picture from ATTO, not IOmeter? Anyway, yes, I use FIO over IOmeter. IOmeter is pretty dilapidated, especially for Linux. The link I sent you was simply for Windows builds. The project is on GitHub, and I believe the original developers were from Fusion-io and the Linux IO stack. If you ever need to test synchronous IO, which lets you avoid the OS caching effect, I don't know of a way to do it in IOmeter.
 

Stereodude

#47
You last posted a picture from ATTO, not IOmeter?
Yes, but I presume you saw the post directly above it where I used IOmeter to do the equivalent benchmark and understood from the comment in post 43 that the ATTO benchmark was for lulz.

Anyway, yes, I use FIO over IOmeter. IOmeter is pretty dilapidated, especially for Linux. The link I sent you was simply for Windows builds. The project is on GitHub, and I believe the original developers were from Fusion-io and the Linux IO stack. If you ever need to test synchronous IO, which lets you avoid the OS caching effect, I don't know of a way to do it in IOmeter.
Do you have some example job files that you use that you'd be willing to share / post?
 

Stereodude

#48
One more bogus benchmark.

CDM_6TB_Toshiba_R1.png

I'm currently running my IOmeter "super ATTO" pattern on a single drive connected to the Intel SATA controller on the motherboard (Z87 chipset). It definitely is not giving the flat performance that two of the drives in RAID-1 on the M5016 card did.
 

Handruin

#49
Yes, but I presume you saw the post directly above it where I used IOmeter to do the equivalent benchmark and understood from the comment in post 43 that the ATTO benchmark was for lulz.


Do you have some example job files that you use that you'd be willing to share / post?
I do have some config examples I'll post. Keep in mind they're geared for the Linux IO engine.

Here are the options in a command line format:
fio --ioengine=libaio --filename=/dev/drbd1 --bs=64k --rw=randwrite --numjobs=1 --iodepth=8 --runtime=1800 --time_based=1 --sync=1 --direct=1 --name=rand-write64k-1workers-8iodepth --output=test-result.json --output-format=json
fio --ioengine=libaio --filename=/dev/drbd1 --bs=64k --rw=randwrite --numjobs=1 --iodepth=8 --runtime=1800 --time_based=1 --sync=1 --direct=1 --name=rand-write64k-1workers-8iodepth --output=test.log --minimal


I'd have to check which ioengine makes sense under Windows. I can look up some examples to figure this out shortly.

--bs: the block size you want to test
--rw: the read and/or write pattern you want to use
--numjobs: the number of spawned/forked threads/processes
--iodepth: the depth of IO submitted
--runtime: how long to run, in seconds (there are also options to run to a specific size rather than a runtime)
--time_based: tells fio to run based on time rather than disk space usage
--sync: guarantees that the call will not return before all data has been transferred to the disk
--direct: non-buffered IO; the kernel bypasses its page cache instead of copying data between user space and kernel space
--name: the name of the test being run
--output: the location of the file for the test run data
--minimal: only print the raw results, not the analytics
--output-format: lets you have the data emitted as JSON, which is easier to read than the terse CSV-style format

Or you can use a config file like the one below; you would run ./fio config-file-name.fio.
Note that "stonewall" makes a job wait until the preceding jobs have completed, in case you want to define multiple tests in the same file.

[global]
ioengine=libaio
iodepth=8
time_based=1
runtime=1800
sync=1
direct=1

[rand-write64k-1workers-8iodepth]
filename=/dev/zvol/perf_benchmark/perfvol
bs=64k
name=rand-write64k-1workers-8iodepth
rw=randwrite
numjobs=1
stonewall
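For Windows, a comparable job file might look like the sketch below. The windowsaio engine choice and the g: path are assumptions (check the available engines with fio --enghelp), and fio requires the colon after a drive letter to be escaped:

```ini
; Hypothetical Windows variant of the job file above.
[global]
ioengine=windowsaio
iodepth=8
time_based=1
runtime=1800
; non-buffered IO; sync=1 is omitted here since O_SYNC semantics differ on Windows
direct=1
; fio on Windows uses threads rather than forked processes
thread=1

[rand-write64k-1workers-8iodepth]
; the colon after the drive letter must be escaped in fio filenames
filename=g\:\testfile.out
; a size is required when targeting a file instead of a raw device
size=10g
bs=64k
rw=randwrite
numjobs=1
stonewall
```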
 

Handruin

#51
I'll be curious to see what you get out of IOmeter once you have your array built up with all the drives. FIO may just be better suited to Linux than Windows. My guess for configuring the drive would be to just paste in the path of the drive and a file name, e.g. "filename=g:\testfile.out".
 

Stereodude

#52
I'm currently running my IOmeter "super ATTO" pattern on a single drive connected to the Intel SATA controller on the motherboard (Z87 chipset). It definitely is not giving the flat performance that two of the drives in RAID-1 on the M5016 card did.
Here are the results on the Intel Controller:

6TB Toshiba Intel SA.png

And the RAID-1 results again:

View attachment 1017

I don't know the exact reason the drives on the RAID card do so much better at 8K blocks and smaller. Perhaps the controller is combining/grouping the requests and actually accessing the drive in larger blocks.
 

Stereodude

#53
So apparently I didn't do my homework very well. I had been planning to use an i5-4590 for my server; I even bought one from Microcenter. Thankfully it's still sealed in the box. I didn't realize this before, but it turns out Intel disabled ECC in the i5s and above. The Celerons, Pentiums, and i3s all have ECC support, but I guess they want to force you to buy a Xeon if you want 4 cores (or more). I didn't buy a server motherboard and 32GB of ECC DDR3 for nothing...

I guess the i5 is going back and I'll reconsider an i3 or jumping to a Xeon. :cursin:

Edit: If I'm going Xeon the E3-1231v3 looks like the no-brainer choice from the family.
 
Last edited:

Stereodude

#54
I tried the two Toshiba drives in RAID-1 on the Intel controller in this Z87-based system, with two different caching settings: the not-so-safe Write Back and the safer Write Through. They perform much better on the LSI-based M5016 controller.

Intel RAID-1 Write Through:
6TB Toshiba Intel R1 WT SA.png

Intel RAID-1 Write Back:
6TB Toshiba Intel R1 WB SA.png

ServeRAID M5016 RAID-1 Drive Cache Enabled / Direct IO / Write Back & Read Ahead:
6TB Toshiba RAID-1 DIO DCE SA.png

The CPU usage was also noticeably better with the ServeRAID M5016. For example, the 256k write used 3.97% CPU on the Intel controller versus 0.37% on the M5016. At 0.5k reads it was 24.37% vs. 13.61%.
 

Stereodude

#55
Edit: If I'm going Xeon the E3-1231v3 looks like the no-brainer choice from the family.
The i5-4590 has been returned and an E3-1231 v3 is now sitting in my car. My ECC DDR3-1600L should arrive today, so I'll have everything I need to start my build and testing. I'm going to start by testing the RAM with Memtest86+.
 

Stereodude

#56
I ran about 25 hours of Memtest86+ without any issues. I would have run it longer, but I wanted to get started on Windows 7 and test the Pro key I bought. What a mess that turned into. :(

It felt a little strange to have ordered a sound card for my server, since it lacks any sort of onboard sound device. I can't remember when I last had a motherboard without onboard sound. Workstation/server boards, I guess. I use my "NAS" to play music when I'm working in the basement, hence the need for a sound card. I got a PCI sound card, an Asus Xonar DG. I wanted optical out for electrical isolation between the stereo and the NAS, I wasn't about to give up a PCIe slot, the motherboard has PCI slots, and I wanted to make sure there were Windows 10 drivers, so for $20 after a $10 rebate it looked like a decent choice.
 

Stereodude

#58
Sounds like it has been a pain. Linux man, Linux! :razz:
I'll admit I was thinking about it. I found a thread on Hardforum where other people were having the same problem I was. Based on the posts, the single update KB3102810 seems to be the fix, and updates are currently installing. So, fingers crossed...

Why not just do one of those USB sound cards from Behringer UCA202 or something? It also has optical out.
I've held one in my hands before and was less than thrilled with it. Also, the ASRock Rack motherboard has very few USB ports; it only has 4 external USB ports on the main I/O plate, two USB 3.0 and two USB 2.0. There are headers on the motherboard for more of both, as well as another USB 2.0 port on the motherboard itself that will sit inside the case, internal to the system. It does have PS/2 keyboard and mouse ports though, which is good since that's what my cheapo KVM uses. I haven't tried the iKVM or IPMI functionality yet.

What about the IcyDock 5 in 3's? :poke: I should have 7 more 6TB drives in my hands today.
 

Stereodude

#64
I've never seen a graphics score so low. Feel the ASPEED AST2300 power!
Well, it's the onboard graphics of a server-class board. It isn't intended for anything more than checking in on the system, not actual use.

I also noticed the OS/HW doesn't seem to support sleep/standby/S3 either, but I need to double-check that. I disabled hibernation support and got back 32GB on my SSD when hiberfil.sys vanished. I do hope it supports Turbo and all the dynamic clocking; otherwise the system is going to be a power hog.
 

Stereodude

#66
Yeah, I was wondering about 10GbE last night and this morning. Specifically adding it via a PCIe card to both new NAS/server and my backup server. Then I got to thinking about having a 10 gig link between the gigabit switch and the new NAS/server as well. Nothing like looking into it after you buy the motherboard.

Edit: From a quick search it seems the cheapest gigabit switch with at least one 10gig port is an $800 Cisco. It has 4 10 gig ports and 24 gigabit ports. That's too rich for my blood right now.

Edit2: Newegg's search either sucks or their product offering is limited. You can get a 24-port gigabit switch with two 10 gig SFP+ ports for under $300 from Mikrotik.
 
Last edited:
Joined
Feb 4, 2002
Messages
19,277
Location
Monterey, CA
#67
I've already deployed a fair amount of Netgear 10GbE equipment, specifically the 8- and 12-port versions combined with Intel X540-T2 NICs. These costs I can handle, but upgrading my perfectly good Synology units to ones with 10GbE was a bridge too far. The Synology units will serve as backups and can sync all night.

My current plan is to build a server that can handle NAS duties as well as be my network render farm for Solidworks and PhotoScan in addition to running direct connected VMs that will serve up the rest of the computers in the house (Wife, Kid, Guest) using UnRaid.
 

Stereodude

#68
I did some IOmeter tests with the two 6TB Toshiba drives in RAID-1 on the ASRock Rack mobo with the Xeon. I'll post the results later, but they're lower than what I got in my Z87 motherboard with i7-4770k (not currently overclocked). I used a newer driver revision, and there are two links between the SAS expander and the M5016 card. I can't see why the latter would matter, but I will have to use the older driver and the single cable and see what happens when everything is 100% identical. The HW between the two systems is so similar I was surprised to see the results were not the same.
 

Stereodude

#70
I did some IOmeter tests with the two 6TB Toshiba drives in RAID-1 on the ASRock Rack mobo with the Xeon. I'll post the results later, but they're lower than what I got in my Z87 motherboard with i7-4770k (not currently overclocked). I used a newer driver revision, and there are two links between the SAS expander and the M5016 card. I can't see why the latter would matter, but I will have to use the older driver and the single cable and see what happens when everything is 100% identical. The HW between the two systems is so similar I was surprised to see the results were not the same.
So the drivers turned out to be the problem.

6.702.07 driver in Xeon system:
6TB Toshiba RAID-1 DIO DCE SA Xeon 6.x driver.png

5.2.127 driver in i7 system:
6TB Toshiba RAID-1 DIO DCE SA.png
 

Stereodude

#71
I've already deployed a fair amount of Netgear 10GbE equipment, specifically the 8- and 12-port versions combined with Intel X540-T2 NICs. These costs I can handle, but upgrading my perfectly good Synology units to ones with 10GbE was a bridge too far. The Synology units will serve as backups and can sync all night.

My current plan is to build a server that can handle NAS duties as well as be my network render farm for Solidworks and PhotoScan in addition to running direct connected VMs that will serve up the rest of the computers in the house (Wife, Kid, Guest) using UnRaid.
I started a thread on 10 gig ethernet here: http://www.storageforum.net/forum/showthread.php/10723-10-gig-Ethernet
 

Stereodude

#72
Comparing a few stripe sizes with RAID-1 using the older 5.x drivers in the Xeon system.

32k:
6TB Toshiba RAID-1 DIO DCE SA 32k Xeon 5.x driver.png

128k:
6TB Toshiba RAID-1 DIO DCE SA 128k Xeon 5.x driver.png

512k:
6TB Toshiba RAID-1 DIO DCE SA 512k Xeon 5.x driver.png

I also spent some time looking for a good HSF for this thing since the stock Intel one is so small. The sticking point is that the HSF ultimately has to fit in a Norco RPC-4224. Per my research it has about 150mm of height for a cooler. That eliminates most of the typical and common tower HSFs that use 120mm and 140mm fans since they are taller. I also don't want to pay a small fortune.

The Cooler Master GeminII S524 Ver.2 caught my eye at ~$42 shipped. As a complication to fitting any cooler, there are capacitors rather close to the CPU socket on the PCIe-slot side that are taller than the top of the CPU's heat spreader (picture). The drawing for the S524v2 at the bottom of the page says it's 58mm wide. Presumably that's centered over the CPU, which means I need 29mm of clearance from the centerline to the caps. From my measurements it looks like I have ~30mm between the centerline of the CPU and those caps if I have the S524v2 overhang toward the I/O plate. I hope the drawing is right, because it's going to be very tight. Hopefully I don't have to file or Dremel a small notch in the side of the base.

Alternatively, I could rotate the S524v2 90 degrees and have it overhang the RAM slots. All around, the S524v2 looks to be fairly tight on this motherboard, but I think it will just fit in one of those two orientations and not hang off the board or block any PCIe slots.
 

Stereodude

#75
Based on past experience with Supermicro PSUs, if you went that route, you'd better plan on keeping your NAS box in your garage unless you want something you can hear in the shower.
So I ended up buying a Supermicro chassis on eBay after some back and forth with the seller. The folks on Hardforum talked me into it. They said the new 1200W 80+ Gold power supplies are very quiet, unlike the older 710/900W ones; I hope they're right. They also said the Supermicro chassis are much better than the Norco, etc., etc.
 

Stereodude

#77
It's entirely possible that they're better than they used to be. Splash used to swear by SuperMicro stuff but I found it to be really hit or miss over the years.
I guess I'll find out soon enough. Hopefully it's as good as they led me to believe. 5000RPM chassis fans sound loud (pun intended). I expect fan mods or replacement will be necessary.
 

Handruin

#79
I actually came across something useful on reddit related to the SC846A and quieting it down. I know, I know... Try not to fall out of your chair.

https://www.reddit.com/r/DataHoarder/comments/42wbf8/supermicro_846_fanwall_replacement/
I've been a subscriber to DataHoarder for a while and occasionally participate there. I sense the sarcasm, but there's actually a ton of knowledge in some of those specialized subreddits, even if you're going to discount the entire site for some of the crappy ones that give it a bad name. The value in creating a Reddit account is being able to curate your own subreddit subscription feed and unfollow the bulk of the crap on the front page.
 

Stereodude

#80
After looking at it closer, I don't think the Norco fan wall will work because it's wider; I'd have to lose the power supplies, which I don't plan to do. I suppose I should wait until I have the chassis in my hands and can hear what it sounds like with PWM on the fans before going hog wild buying things to quiet it down.
 