Proposed NAS build

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I guess this should have its own thread instead of sidetracking the other one...

Here's what I'm thinking for a new controlled cost server / NAS box.

CPU: Intel Core i3-4130
Motherboard: ASRock Rack E3C224
RAM: 2x Crucial 16GB (2 x 8GB) 240-Pin ECC Unbuffered DDR3-1600L
Case: NORCO RPC-4224
Drives: 6x or 8x 6TB Toshiba Enterprise 7200RPM 128MB Cache SATA 6.0Gbps (MG04ACA600E)
HW RAID card: RAID-6 capable SAS 2.0 6Gbps RAID card with two SFF-8087 connectors: Dell PERC H700 1GB, PERC H710P, or equivalent.
SAS Expander: HP SAS Expander
System Drive: 2x 250GB Samsung 850 Evo in RAID-1
Power Supply: Something 500-600W with good efficiency, like a SeaSonic G Series SSR-550RM

Thoughts on the component choices?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Norco will tell you over and over that they want 12x native Molex (no splitters) connectors for powering their backplanes. That means really having to hunt or getting a replacement cable kit for whatever modular PSU you wind up using. There's also a 3x120mm fan plate for the 4224 that you'll probably want since the default plate is 4x80mm and tends to make for a noisier system. 600W is probably just fine but ultimately you'll probably want something bigger if you're going to fill all the bays. Wiring in the Norco is going to be pretty ugly if you're using all the bays.

Don't forget to get your SAS cables. Monoprice usually has 'em for a lot less than other places I've looked.

I take it that part of your goal is to ultimately add drives and do online expansions of your arrays?
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,811
Location
Eglin AFB Area
Website
sedrosken.xyz
Damn, that's a lot of storage. I can't even fill 1TB, though after backups and setup files I can certainly come close. Might upgrade to a 2TB when I finally do that build I've been putting off for forever.

Your build looks nice and solid, though. I've seen people use Pentiums or even Atoms for NAS devices that serve up a ton of drives, and they tend to work alright. You're going to run into a network bandwidth ceiling long before you come close to saturating those SATA channels. I also want to ask why you're going with so much RAM for a NAS build; it seems rather overkill to me. You seem to know much more on the subject than I do. When I ran a NAS server it was fine with 2GB, though it WAS only serving up a 40GB and a 250GB drive.
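
Back-of-the-envelope on that ceiling, with ballpark figures off the top of my head:

Code:
Gigabit Ethernet:       1000 Mbit/s / 8 = 125 MB/s theoretical, ~110 MB/s real-world
One 7200RPM 6TB drive:  ~175-200 MB/s sequential on the outer tracks
6-8 of them in RAID-6:  several times that again

Even a single drive can outrun the network on sequential transfers.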
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Also, there are two different models of the RPC-4224. Make SURE you get one with the newer backplanes, because Norco can no longer supply the older ones. I know this because both of mine are older and I had to BEG to get status-unknown spares from them (one of which just happened to be defect-free). You might want to contact their support for details. The support guy, once you get his attention, will respond promptly.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Norco will tell you over and over that they want 12x native Molex (no splitters) connectors for powering their backplanes. That means really having to hunt or getting a replacement cable kit for whatever modular PSU you wind up using.
That doesn't really make a lot of sense. The overall cable length and wire gauge are all that should matter. What power supply has 12 Molex outputs? I'd be tempted to buy some splitters and cut them up to make custom cables.

There's also a 3x120mm fan plate for the 4224 that you'll probably want since the default plate is 4x80mm and tends to make for a noisier system.
Yes, I saw that and was thinking about it.

600W is probably just fine but ultimately you'll probably want something bigger if you're going to fill all the bays.
Why's that? The drives are rated for 11W apiece. Even 24 of them is only 264W. There's no way the rest of the system is using 300W, though I recognize I don't want to run the system at 100% of the supply's rated load.

I take it that part of your goal is to ultimately add drives and do online expansions of your arrays?
Yes, that's the idea.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I also want to ask why you're going with so much RAM for a NAS build; it seems rather overkill to me.
The system will use it as cache. Does it need 32GB? Probably not. Will it hurt? No. I've got 16GB in my current server / NAS.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Also, there are two different models of the RPC-4224. Make SURE you get one with the newer backplanes, because Norco can no longer supply the older ones. I know this because both of mine are older and I had to BEG to get status-unknown spares from them (one of which just happened to be defect-free). You might want to contact their support for details. The support guy, once you get his attention, will respond promptly.
How are you supposed to make sure you get the newer backplanes? The reviews on Amazon and Newegg are all over the place. People are talking about getting the 120mm fan divider and the SSD tray with more recent purchases. However, there is a report on Amazon from a few days ago of someone getting old SAS 1.0 backplanes. Then someone on Newegg reported in October getting SAS 3.0 backplanes, though he didn't seem certain.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
That doesn't really make a lot of sense. The overall cable length and wire gauge are all that should matter. What power supply has 12 Molex outputs? I'd be tempted to buy some splitters and cut them up to make custom cables.

Their contention is that it's pretty much impossible to verify the quality of a splitter with regard to, for example, intermittent contact issues, and that power problems are the main reason for issues with backplanes. The support folks were pretty insistent that anyone filling a chassis needs to get 12 direct molex connections. AFAIK, that's only possible with relatively high wattage modular PSUs.

As far as backplane type, the older units, like mine, have yellow PCBs and the newer ones are green. They have a different cutout and are not interchangeable, per Norco. I don't know how to control what you're sold unless you're buying direct from them. I only know that this is an issue at all because I had a backplane fail.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Their contention is that it's pretty much impossible to verify the quality of a splitter with regard to, for example, intermittent contact issues, and that power problems are the main reason for issues with backplanes. The support folks were pretty insistent that anyone filling a chassis needs to get 12 direct molex connections. AFAIK, that's only possible with relatively high wattage modular PSUs.
Per the latest reports I've seen in reviews, they only have one Molex connector per backplane now. That's what the pictures at Newegg show, too.

As far as backplane type, the older units, like mine, have yellow PCBs and the newer ones are green. They have a different cutout and are not interchangeable, per Norco. I don't know how to control what you're sold unless you're buying direct from them. I only know that this is an issue at all because I had a backplane fail.
FWIW, the ones in the Newegg picture are green and say 12Gb V2.0. The ones in the picture at Norco's site are green with two Molex connectors. There must be more than just two designs. BTW, do you mean electrically interchangeable or mechanically interchangeable? I can't think why the SAS expander / controller would care whether they matched or not.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Mechanically. I had a hell of a time getting a replacement backplane after one of them fizzled. I actually asked for a newer one but was told they won't fit in the chassis I have.
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
775
Location
on the floor
Based on these two articles, you would probably encounter problems with your chosen power supply when the Norco is fully populated, due to the start-up current demands on the 12V rail:

http://45drives.blogspot.ca/2015/05/the-power-behind-large-data-storage.html?m=1

Here they see 51 amps of start-up demand with 45 consumer-grade drives spinning at just 5900 RPM.

http://45drives.blogspot.ca/2015/06/power-draw-of-enterprise-class-hard.html?m=1

Here, with 45 WD enterprise-grade drives spinning at 7200 RPM, they see 86 amps of initial current demand.

So it looks like you would need to upsize the PSU considerably to deal with initial start-up current demands, or enable staggered spin-up, which should drop the start-up demand to a much more manageable 20A or so (it also slows boot time considerably, but that shouldn't be a biggie for its intended usage).

Edit to add: For some reason I thought the case held 40+ drives. Maxed out at 24 drives, you would probably see about 45A of initial 12V demand for just the drives. So go for about 60A of 12V capacity to deal with the drives plus other components?
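
Rough math from those numbers, scaled down to this build:

Code:
86 A / 45 drives   =  ~1.9 A per 7200 RPM drive on the 12V rail at spin-up
24 drives x 1.9 A  =  ~46 A for the drives alone
+ headroom for CPU/board/fans  ->  the ~60 A of 12V capacity suggested above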
 
Last edited:

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Hmm... it seems the Dell PERC cards don't support staggered spin-up. I'm not sure the system will ever end up with 24 drives in it, though I'm building for that capability. Still, I guess bumping the power supply up a bit won't hurt.

I also had no idea that staggered spin-up added minutes to the boot time. 4 minutes in their specific case! I figured it kicked them off on a one- or two-second delay from each other.
 
Last edited:

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
It looks like the IBM ServeRAID M5016 might be a better LSI 2208-based HW RAID card to use than the Dell H710P. It supports staggered spin-up and uses a supercap power module instead of a battery. Unfortunately, they seem harder to come by. Right now I only see the 1GB card available from a seller in China.

The 1GB cache module + M5110, which uses the same LSI 2208 controller, should be equivalent, though it seems not to support RAID-6. :(
 
Last edited:

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
The last time I was doing a large RAID HDD install I used 3Ware, and they let me set staggered spin-up to 3 drives every 2 seconds. It still added time to the boot. Keep in mind that without staggered spin-up they start spinning during POST and are ready to go. With staggered spin-up they do nothing during POST and wait for the RAID firmware. Even if they all spun up at once at that point, it would be slower.
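
To put rough numbers on it (assuming ~10 seconds for a 7200RPM drive to reach speed):

Code:
24 drives at 3 per group  =  8 groups
7 delays x 2 s            =  14 s before the last group even starts
+ ~10 s spin-up           =  ~25 s added after POST, vs. none when they spin during POST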
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Hmm... it seems the Dell PERC cards don't support staggered spin-up. I'm not sure the system will ever end up with 24 drives in it, though I'm building for that capability. Still, I guess bumping the power supply up a bit won't hurt.

I also had no idea that staggered spin-up added minutes to the boot time. 4 minutes in their specific case! I figured it kicked them off on a one- or two-second delay from each other.


This is the MegaCLI output from some of our Dell servers. Watching them boot, the drives do seem to spin up in groups. This seems to indicate staggered spin-up is enabled by default (and doesn't cause any substantial boot delay):

Code:
Product Name    : PERC H710P Adapter
FW Package Build: 21.2.0-0007

                    Mfg. Data
                ================
Mfg. Date       : 03/05/14
Rework Date     : 03/05/14
Revision No     : A03
Battery FRU     : N/A
...
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 12s
...


Different Server:

Code:
Product Name    : PERC H700 Integrated
FW Package Build: 12.10.6-0001

                    Mfg. Data
                ================
Mfg. Date       : 01/12/12
Rework Date     : 01/12/12
Revision No     : A05
Battery FRU     : N/A
...
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 12s
...

Both the spin-up delay and spin-up drive count are configurable on a per-adapter basis with MegaCLI.
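
If you want to change them, I believe the -AdpSetProp syntax is along these lines (from memory, so verify with MegaCli -h first):

Code:
MegaCli64 -AdpSetProp SpinupDriveCount 4 -a0    # drives per spin-up group
MegaCli64 -AdpSetProp SpinupDelay 12 -a0        # seconds between groups

Let me know if you want any more specific info.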
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Hmmm... Thanks for the info. Maybe it was just that it's not controllable from the BIOS on the Dell cards.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Yes, the BIOS options are limited to the primary management functions. You can create an array/VD, delete an array, set a hot spare, and view the status of an array or drive. For anything more, you'll probably want to use MegaCLI or the newer software set that replaces it (StorCLI). I have used a few LSI-branded cards and their Dell equivalents; sometimes the LSI cards have a few more options in their BIOS. As the IBM and HP cards are also re-badged LSI controllers, I wouldn't expect them to be any different.
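
If you end up on StorCLI instead of MegaCLI, the equivalents should be something like this (syntax from memory; double-check it against the StorCLI reference):

Code:
storcli64 /c0 show all                 # adapter properties, incl. the spin-up settings
storcli64 /c0 set spinupdrivecount=4
storcli64 /c0 set spinupdelay=12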
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Well, the supercapacitor the M5016 uses has some appeal over the Li-Ion battery, as it shouldn't really need replacing or wear out like a battery does. The IBM version also supports more HDDs (128), though I don't think the 32-disk limit of the Dell version is likely to be an actual limit for me. Based on what I've read, the HP versions have really spotty compatibility outside of HP servers. The Dell has CacheCade and Cut Through IO enabled by default in their firmware. The IBM has neither. As I understand it, CTIO doesn't help spinning drives. I also don't really see using an SSD or two as a cache (CacheCade) helping me, since there's no other internal source of data aside from RAM that can really exceed the array's write speed, and the usage pattern is far from any sort of typical enterprise load.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
My understanding is that the Dell cards with the "NV" option (default on all P models) have flash-backed cache, which sounds similar to the "supercapacitor" that IBM advertises. Looking at some photos, the supercapacitor appears to contain a Li-Ion battery pack. Regardless, a capacitor is just another word for a non-user-replaceable battery. I do recommend replacing the PERC batteries after 5 years (though they may last ~7). New ones can be obtained via Dell directly. New but off-brand ones can be purchased through Amazon (not recommended unless it's a last resort); used ones can be purchased through eBay (not recommended). The batteries are easily replaced in any model made in the last 10 years (PERC5+).

I'm not sure why Dell advertises a 32 drive limit. Perhaps that's the biggest chassis they offer or the most that will fit in a single VD.

Code:
MegaCli64 -AdpAllInfo -aALL
                                     
Adapter #0

==============================================================================
                    Versions
                ================
Product Name    : PERC H710P Adapter
Serial No       : ------
FW Package Build: 21.2.0-0007

                    Mfg. Data
                ================
Mfg. Date       : 03/05/14
Rework Date     : 03/05/14
Revision No     : A03
Battery FRU     : N/A

                Image Versions in Flash:
                ================
BIOS Version       : 5.38.00_4.12.05.00_0x05260000
Ctrl-R Version     : 4.03-0002
Preboot CLI Version: 05.00-03:#%00008
FW Version         : 3.130.05-2086
NVDATA Version     : 2.1108.03-0095
Boot Block Version : 2.03.00.00-0004
BOOT Version       : 06.253.57.219

                Pending Images in Flash
                ================
None

                PCI Info
                ================
Vendor Id       : 1000
Device Id       : 005b
SubVendorId     : 1028
SubDeviceId     : 1f31

Host Interface  : PCIE

ChipRevision    : D1

Number of Frontend Port: 0 
Device Interface  : PCIE

Number of Backend Port: 8 
Port  :  Address
0        500056b37789abff 
1        0000000000000000 
2        0000000000000000 
3        0000000000000000 
4        0000000000000000 
5        0000000000000000 
6        0000000000000000 
7        0000000000000000 

                HW Configuration
                ================
SAS Address      : 5c81f660ef1c8300
BBU              : Present
Alarm            : Absent
NVRAM            : Present
Serial Debugger  : Present
Memory           : Present
Flash            : Present
Memory Size      : 1024MB
TPM              : Absent
On board Expander: Absent
Upgrade Key      : Absent
Temperature sensor for ROC    : Present
Temperature sensor for controller    : Present

ROC temperature : 85  degree Celcius
Controller temperature : 85  degree Celcius

                Settings
                ================
Current Time                     : 17:44:29 12/30, 2015
Predictive Fail Poll Interval    : 300sec
Interrupt Throttle Active Count  : 16
Interrupt Throttle Completion    : 50us
Rebuild Rate                     : 30%
PR Rate                          : 30%
BGI Rate                         : 30%
Check Consistency Rate           : 30%
Reconstruction Rate              : 30%
Cache Flush Interval             : 4s
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 12s
Physical Drive Coercion Mode     : 128MB
Cluster Mode                     : Disabled
Alarm                            : Disabled
Auto Rebuild                     : Enabled
Battery Warning                  : Enabled
Ecc Bucket Size                  : 255
Ecc Bucket Leak Rate             : 240 Minutes
Restore HotSpare on Insertion    : Disabled
Expose Enclosure Devices         : Disabled
Maintain PD Fail History         : Disabled
Host Request Reordering          : Enabled
Auto Detect BackPlane Enabled    : SGPIO/i2c SEP
Load Balance Mode                : Auto
Use FDE Only                     : Yes
Security Key Assigned            : No
Security Key Failed              : No
Security Key Not Backedup        : No
Default LD PowerSave Policy      : Controller Defined
Maximum number of direct attached drives to spin up in 1 min : 20 
Any Offline VD Cache Preserved   : No
Allow Boot with Preserved Cache  : No
Disable Online Controller Reset  : No
PFK in NVRAM                     : No
Use disk activity for locate     : No

                Capabilities
                ================
RAID Level Supported             : RAID0, RAID1, RAID5, RAID6, RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
Supported Drives                 : SAS, SATA

Allowed Mixing:

Mix in Enclosure Allowed

                Status
                ================
ECC Bucket Count                 : 0

                Limitations
                ================
Max Arms Per VD          : 32 
Max Spans Per VD         : 8 
Max Arrays               : 128 
Max Number of VDs        : 64 
Max Parallel Commands    : 1008 
Max SGE Count            : 60 
Max Data Transfer Size   : 8192 sectors 
Max Strips PerIO         : 42 
Max LD per array         : 16 
Min Strip Size           : 64 KB
Max Strip Size           : 1.0 MB
Max Configurable CacheCade Size: 512 GB
Current Size of CacheCade      : 0 GB
Current Size of FW Cache       : 883 MB

                Device Present
                ================
Virtual Drives    : 2 
  Degraded        : 0 
  Offline         : 0 
Physical Devices  : 8 
  Disks           : 6 
  Critical Disks  : 0 
  Failed Disks    : 0 

                Supported Adapter Operations
                ================
Rebuild Rate                    : Yes
CC Rate                         : Yes
BGI Rate                        : Yes
Reconstruct Rate                : Yes
Patrol Read Rate                : Yes
Alarm Control                   : Yes
Cluster Support                 : No
BBU                             : No
Spanning                        : Yes
Dedicated Hot Spare             : Yes
Revertible Hot Spares           : Yes
Foreign Config Import           : Yes
Self Diagnostic                 : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares               : Yes
Deny SCSI Passthrough           : No
Deny SMP Passthrough            : No
Deny STP Passthrough            : No
Support Security                : Yes
Snapshot Enabled                : No
Support the OCE without adding drives : Yes
Support PFK                     : No
Support PI                      : No
Support Boot Time PFK Change    : No
Disable Online PFK Change       : No
Support Shield State            : No
Block SSD Write Disk Cache Change: No

                Supported VD Operations
                ================
Read Policy          : Yes
Write Policy         : Yes
IO Policy            : Yes
Access Policy        : Yes
Disk Cache Policy    : Yes
Reconstruction       : Yes
Deny Locate          : No
Deny CC              : No
Allow Ctrl Encryption: No
Enable LDBBM         : Yes
Support Breakmirror  : Yes
Power Savings        : Yes

                Supported PD Operations
                ================
Force Online                            : Yes
Force Offline                           : Yes
Force Rebuild                           : Yes
Deny Force Failed                       : No
Deny Force Good/Bad                     : No
Deny Missing Replace                    : No
Deny Clear                              : Yes
Deny Locate                             : No
Support Temperature                     : Yes
Disable Copyback                        : No
Enable JBOD                             : No
Enable Copyback on SMART                : No
Enable Copyback to SSD on SMART Error   : No
Enable SSD Patrol Read                  : No
PR Correct Unconfigured Areas           : Yes
Enable Spin Down of UnConfigured Drives : No
Disable Spin Down of hot spares         : Yes
Spin Down time                          : 30 
T10 Power State                         : Yes
                Error Counters
                ================
Memory Correctable Errors   : 0 
Memory Uncorrectable Errors : 0 

                Cluster Information
                ================
Cluster Permitted     : No
Cluster Active        : No

                Default Settings
                ================
Phy Polarity                     : 0 
Phy PolaritySplit                : 0 
Background Rate                  : 30 
Strip Size                       : 64kB
Flush Time                       : 4 seconds
Write Policy                     : WB
Read Policy                      : Adaptive
Cache When BBU Bad               : Disabled
Cached IO                        : No
SMART Mode                       : Mode 6
Alarm Disable                    : No
Coercion Mode                    : 128MB
ZCR Config                       : Unknown
Dirty LED Shows Drive Activity   : No
BIOS Continue on Error           : No
Spin Down Mode                   : None
Allowed Device Type              : SAS/SATA Mix
Allow Mix in Enclosure           : Yes
Allow HDD SAS/SATA Mix in VD     : No
Allow SSD SAS/SATA Mix in VD     : No
Allow HDD/SSD Mix in VD          : No
Allow SATA in Cluster            : No
Max Chained Enclosures           : 4 
Disable Ctrl-R                   : No
Enable Web BIOS                  : No
Direct PD Mapping                : Yes
BIOS Enumerate VDs               : Yes
Restore Hot Spare on Insertion   : No
Expose Enclosure Devices         : No
Maintain PD Fail History         : No
Disable Puncturing               : No
Zero Based Enclosure Enumeration : Yes
PreBoot CLI Enabled              : No
LED Show Drive Activity          : Yes
Cluster Disable                  : Yes
SAS Disable                      : No
Auto Detect BackPlane Enable     : SGPIO/i2c SEP
Use FDE Only                     : Yes
Enable Led Header                : No
Delay during POST                : 0 
EnableCrashDump                  : No
Disable Online Controller Reset  : No
EnableLDBBM                      : Yes
Un-Certified Hard Disk Drives    : Allow
Treat Single span R1E as R10     : Yes
Max LD per array                 : 16
Power Saving option              : Don't spin down unconfigured drives
Don't spin down Hot spares
Don't Auto spin down Configured Drives
Power settings apply to all drives - individual PD/LD power settings cannot be set
Max power savings option is  not allowed for LDs. Only T10 power conditions are to be used.
Cached writes are not used for spun down VDs
Can schedule disable power savings at controller level
Default spin down time in minutes: 30 
Enable JBOD                      : No
TTY Log In Flash                 : No
Auto Enhanced Import             : No
BreakMirror RAID Support         : Yes
Disable Join Mirror              : Yes
Enable Shield State              : No
Time taken to detect CME         : 60s

Exit Code: 0x00
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
My understanding is that the Dell cards with the "NV" option (default on all P models) have flash-backed cache, which sounds similar to the "supercapacitor" that IBM advertises. Looking at some photos, the supercapacitor appears to contain a Li-Ion battery pack. Regardless, a capacitor is just another word for a non-user-replaceable battery. I do recommend replacing the PERC batteries after 5 years (though they may last ~7). New ones can be obtained via Dell directly. New but off-brand ones can be purchased through Amazon (not recommended unless it's a last resort); used ones can be purchased through eBay (not recommended). The batteries are easily replaced in any model made in the last 10 years (PERC5+).
The Dell cards with the "NV" option still use a lithium-ion battery. They use the battery to provide power while the contents of the cache are written to flash. The M5016 uses an actual supercapacitor made by Tecate, not a lithium-ion battery, to provide power while the cache contents are written to flash. Given the choice between the two, I'll take the supercapacitor.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
What leads you to think it has an expander? The product page says it needs a controller with 4 SFF-8087 connectors.

Their website said: "miniSAS (SFF-8087) 6Gb/s HDD Backplane with SGPIO", which I confused with a SAS expander. It says "a controller with at least four SFF-8087 or SFF-8643 connectors is required", so it's not an expander; my mistake.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Is there anything in this thread on STH that might interest you for your build? That person has a couple of the nice 36 drive Supermicro cases for very reasonable prices. I was considering offering on one of them.
Interesting. I hadn't seen that thread. The SC847 Supermicro chassis with SAS2 is a little interesting to me, but I'm not sure about the 1400W power supplies or the 12 rear bays that force you to use low-profile cards and hardware. That seems excessive, but I guess it's not much more than a Norco 4224 + power supply.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Based on past experience with Supermicro PSUs, if you went that route, you'd better plan on keeping your NAS box in your garage unless you want something you can hear in the shower.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Do I want 512e or 4Kn HDDs? I presume the former, since I can't seem to determine whether the HW RAID cards play nice with 4Kn drives, but I'm not quite sure.

Edit: It looks like Avago (LSI) shows compatibility for both 4Kn SATA and SAS drives with their Gen 2.5 controllers, which I am considering. I'm not sure if there is any advantage to 4Kn over 512e drives, though.
 
Last edited:

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
It looks like 4Kn has no real advantage. It's not supported until Windows 8. Further, any program that directly accesses the hardware expecting 512-byte sectors may fail / error out. The only potential advantage I see is that you can support up to 16TB without going GPT. However, that doesn't come close to offsetting all the possible pitfalls.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So I finally got home from Asia and had a chance to play with the IBM ServeRAID M5016 card I bought on eBay. I got it installed into one of my i7 boxes without any fuss. I didn't have to tape any PCIe pins. I downloaded the latest firmware, driver, and MSM version from IBM, none of which will install directly from the .exe package; it complains about not being compatible with the system. However, you can extract the files from the .exe package with 7-Zip and then run / use them. My first two 7200RPM 6TB Toshiba SATA enterprise HDDs won't arrive until later this week, but the controller seems happy with the supercap (BBU) and everything else. The firmware updated without any issues. I used the Server 2008 R2 drivers for my Windows 7 install.
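
For anyone else who hits this, the 7-Zip command-line version of that extraction looks something like this (the package name here is made up; use whatever IBM calls theirs):

Code:
7z x ibm_m5016_fw_package.exe -oextracted    # x = extract with paths, -o<dir> = output directory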

My HP SAS expander is also due the middle of this week. I'll know more once I get it and the drives. Assuming everything works as it should I will buy more of the 6TB drives and the rest of the parts for the NAS (mobo, CPU, RAM, case, PS, etc...).
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I did have some weirdness when I upgraded my storage box from Server 2012 to 2012 R2. The LSI driver wouldn't initialize until I uninstalled it and scanned for new hardware in Device Manager. I wrote a script that automated doing that, but some time in the last few months it started detecting and working properly. This is particularly curious since I never had any issues at all under plain old Server 2012.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
The HP SAS Expander arrived today, along with the two SFF-8087 SAS cables I bought. I grabbed two random SATA drives I had lying around, a 500GB Seagate and a 1.5TB Samsung, and connected them to the expander with an SFF-8087 to SATA cable. Everything seems happy. I tried a few different ports on the expander and it found the drives regardless. I'm not sure how to tell if dual-linking is working or not. I had the expander connected with two SFF-8087 cables to the M5016, as well as trying each single port. With a single port I see it's connected to either 0-3 or 4-7 in the MSM GUI, depending on the port I use. With both cables connected, the MSM GUI shows 4-7, not 0-7. I have no idea if that's just how it shows dual-linking or if it's only connecting over one cable.
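
If anyone knows a definitive way to check, I'm all ears. The only things I've found to poke at are the per-phy info and the backend port table from MegaCLI. My guess is that with both cables active all 8 phys should show a negotiated link speed, but that's only a guess:

Code:
MegaCli -PhyInfo -phy0 -a0    # repeat for -phy1 through -phy7; look at the link speed
MegaCli -AdpAllInfo -a0       # check the backend Port : Address table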
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I sent ASRock a message through their technical support form on the website asking them about the CPU support list. I'll see what they say.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Yes, and they may never do so.
That's basically what ASRock told me. They should work, but they haven't tested them since they're focused mostly on testing all the possible Xeon chips. The main takeaway from my e-mail back and forth with them is that they haven't taken any steps to block the use of CPUs not on the list.

I'm now undecided between the i3-4170 and the i5-4590. They are $99 and $159, respectively, at Microcenter. I don't think the i5 will use much more power when idle or when the system is not doing much, but it has more processing grunt should it be needed.

In other news, the Norco case is now more than $100 more expensive than it was when I started looking at this build. :cursin:
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
I don't think the i5 will use much more power when idle or when the system is not doing much, but it has more processing grunt should it be needed.

For CPU needs focused on single-threaded, serialized calculations, the i3 is the same as that i5. If CPU cache or multi-threading matters, choose the i5.
 