My 3rd NAS build

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
This will be my third attempt at building a system to use as a NAS in order to expand my storage. My plan is to have capacity for my needs now and for what I hope will be the next several years. My previous NAS was planned with enough capacity to last 3-5 years, but I guess once you have tons of space it's easy to fill it up. That system is just a little over two years old and it's already at roughly 90% capacity.

First NAS: 8 x 1.5TB (no longer functioning)
Second NAS: 12 x 4TB (still in service)

Hardware:
I decided to reach even further on this next NAS in hopes of making it last for a while.
Supermicro SC846E16-R1200B chassis, dual PWS-1K21P-1R PSU, 24x hot swap 3.5" drive carriers, 3x 80mm mid plane fans, 2x 80mm rear fans, BPN-SAS2-846EL SAS 2 backplane.
Supermicro X9DRi-LN4F+ dual socket LGA2011 motherboard
2x Intel Xeon E5-2670 (SR0H8)
2x Supermicro SNK-P0048P CPU Coolers
1x SC846 Air Shroud
192GB Memory ( 24x 8GB PC3-10600R DIMM)
1x Supermicro SAS2308 (PCIe 3.0) SAS2 HBA (flashed to IT mode, fw 20.00.04.00; see the firmware check sketch below the hardware list)
1x Supermicro 3.5" Drive Tray and Silverstone 3.5" to 2.5" converter.
20 x 6TB HGST NAS 7200 RPM
4 x 1.5TB Samsung Ecogreen 5400 RPM (from non-functioning NAS build #1)
1 x Samsung SV843 2.5" enterprise 960GB
1 x Mellanox ConnectX-2 MNPA19-XTR HP 10Gb Ethernet (SFP+ DAC cable)
Total raw capacity will be 120TB + 6TB.
Total usable capacity after parity will be 96TB on the primary pool and 4.5TB on the scratch pool.
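
For reference, LSI's sas2flash utility is a quick way to confirm what firmware an HBA like this is actually running (a generic check, not output from this build):

Code:
# List all LSI SAS2 controllers with their firmware versions
sudo sas2flash -listall

# Detailed info for controller 0; it should report firmware 20.00.04.00 in IT mode
sudo sas2flash -c 0 -list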

Configuration:
I'm currently using Ubuntu Server 16.04 headless as the main OS, but depending on a few items I've yet to work out, I may change this to Xubuntu 16.04 with a GUI.
I'm going to be using ZFS on Linux as the underlying filesystem and volume manager.
I'll have Samba installed for CIFS shares, plus NFS exports, for sharing across the home network.
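
For the CIFS piece, a minimal share definition in /etc/samba/smb.conf might look something like this (a sketch; the share name and options are assumptions, while the path and user follow the shell prompt visible in the zpool status output later in the thread):

Code:
# /etc/samba/smb.conf (share name and options are assumptions)
[files]
        path = /naspool_01/samba-share
        browseable = yes
        read only = no
        valid users = doug

# then reload Samba
sudo systemctl restart smbd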
The boot drive is the Samsung SV843, which is partitioned roughly in half, leaving the other half of the drive available to use as either an L2ARC for ZFS or possibly as a SLOG. I need to run more performance tests to determine whether this will be worth it.
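
Carving up the SSD for that could look something like this (a sketch only; the device name and partition numbers are assumptions, and the sizes mirror the 16GB/400GB figures listed below):

Code:
# Assumes partition 1 already holds the Ubuntu install; /dev/sdx is a placeholder
sudo sgdisk -n 2:0:+16G  -t 2:bf01 -c 2:slog  /dev/sdx    # 16GB slice for a SLOG
sudo sgdisk -n 3:0:+400G -t 3:bf01 -c 3:l2arc /dev/sdx    # 400GB slice for an L2ARC
sudo partprobe /dev/sdx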



I'll be creating two separate ZFS pools; a rough sketch of the create commands follows the two lists below.

Main storage pool will consist of the following:
2 x RaidZ2 10-disk vdevs
1 x 16GB SLOG (via Samsung SV843 partition)
1 x 400GB cache (L2ARC via SV843 partition)

Scratch pool will consist of the following:
1 x RaidZ1 4-disk vdev
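
Something like the following would build that layout. This is a sketch, not the exact commands used: the disk names are placeholders for the real /dev/disk/by-id/ata-HGST... IDs, ashift=12 is an assumption for 4K-sector drives, and the scratch pool name is made up (naspool_01 is the name that shows up in the zpool status output later in the thread):

Code:
# Main pool: two 10-disk RAIDZ2 vdevs
sudo zpool create -o ashift=12 naspool_01 \
    raidz2 /dev/disk/by-id/hgst-6tb-{01..10} \
    raidz2 /dev/disk/by-id/hgst-6tb-{11..20}

# SLOG and L2ARC from the SV843 partitions (partition labels from the sketch above)
sudo zpool add naspool_01 log   /dev/disk/by-partlabel/slog
sudo zpool add naspool_01 cache /dev/disk/by-partlabel/l2arc

# Scratch pool: one 4-disk RAIDZ1 vdev on the 1.5TB Samsungs (pool name is hypothetical)
sudo zpool create scratchpool raidz1 /dev/disk/by-id/samsung-1.5tb-{1..4}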

I've done full-drive verifications on 18 of the 20 drives so far and I'm just waiting on the last two to complete their 10-hour scans to make sure everything is healthy. The system is up and running with Ubuntu and functioning fine so far. I bought two of the Mellanox 10Gb Ethernet cards so I can put one into this NAS and the other into my existing NAS and direct-connect the two systems. They haven't arrived yet, but hopefully they'll be here some time next week. I plan to back up the entire pool on the existing NAS temporarily so that I can do some maintenance on that system. When I attempted to rsync the pool, the estimate came out to some 60+ hours over my current 1Gb network, so I'm hoping I can squeeze a little more speed out of the 10Gb adapters.
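
For reference, a read-only badblocks pass or a SMART extended self-test are two common ways to do that kind of full-drive check (not necessarily what was used here):

Code:
# Non-destructive surface scan (a full read of the drive); /dev/sdx is a placeholder
sudo badblocks -sv -b 4096 /dev/sdx

# Or kick off a SMART extended self-test and review the results afterwards
sudo smartctl -t long /dev/sdx
sudo smartctl -a /dev/sdx     # check reallocated and pending sector counts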

I'll post up some pictures of the setup a little later along with some performance numbers. So far the noise of this system isn't bad. It's not quiet by any means but for a system living in my basement it's perfectly fine.

sm_846_front.jpg

sm_846_top_inside.jpg

sm_846_top_inside_ssd.jpg

lsi_hba.jpg

backplane_view.jpg

drive_tower.jpg

lsblk_overview_resized.jpg

zpool_overview_resized.jpg
 

Clocker

Storage? I am Storage!
Joined
Jan 14, 2002
Messages
3,554
Location
USA
You could sell Crashplan subscriptions out of your house lol. What is most of that data?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
The built-in IPMI offers some crude power-draw measurements. I don't own anything like a Kill A Watt.
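
If the BMC on that board supports DCMI, ipmitool can pull the power reading (a sketch; the address and credentials are placeholders):

Code:
# In-band, from the OS (needs the ipmi_si/ipmi_devintf kernel modules loaded)
sudo ipmitool dcmi power reading

# Or out-of-band against the BMC's network interface
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> dcmi power reading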
 

snowhiker

Storage Freak Apprentice
Joined
Jul 5, 2007
Messages
1,668
What is most of that data?

Merc and I are the only pRon collectors so it can't be that...

Edit: Seriously though... aren't the dual Xeons and 192GB overkill? Or is there an actual need for that crazy horsepower to perform all the parity calculations?

Edit2: Cost?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Most of the data is media from photography and my movie collection backup. I'm not a pR0n collector like you guys.

Sure, the CPUs and RAM are overkill, but I'll be making use of them. ZFS can benefit from the large amount of RAM for ARC, and I also plan to install Plex to leverage the CPUs for transcoding. The Xeons and RAM are fairly inexpensive these days. The entire chassis, two 1200W PSUs, MB, CPUs, RAM, HBA, SAS expander/backplane, cables, etc. without the HDDs was $1050. It's a quality chassis.
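
For reference, on ZFS on Linux you can see how much of that RAM the ARC is actually using, and cap it if needed (a sketch; the 128GiB cap is just an example value, not necessarily what this build uses):

Code:
# Current ARC size and ceiling, from the kstat counters
awk '/^size|^c_max/ {printf "%-6s %.1f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats

# Optional: cap the ARC at 128 GiB (value in bytes), then rebuild the initramfs and reboot
echo "options zfs zfs_arc_max=137438953472" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u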
 

snowhiker

Storage Freak Apprentice
Joined
Jul 5, 2007
Messages
1,668
The Xeons and RAM are fairly inexpensive these days. The entire chassis, two 1200W PSUs, MB, CPUs, RAM, HBA, SAS expander/backplane, cables, etc. without the HDDs was $1050. It's a quality chassis.

Holy crap, that is cheap. Really cheap. I see Xeons and think hundreds to thousands of dollars each, and 24x 8GB sticks should run several hundred more, so the whole enchilada for just over a grand is really cheap.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
I noticed that one of the drives in my NAS might be having an issue. I saw some strange read errors the other day and decided to do a full ZFS scrub, only to have ZFS officially mark the drive as faulted. From what I can tell, Ubuntu also kicked the drive from the system, as it was no longer listed.

Time to see how HGST does for an RMA on their drives.


Code:
doug@salty:/naspool_01/samba-share/files$ sudo zpool status
  pool: naspool_01
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub in progress since Fri Aug 18 19:48:07 2017
    9.56T scanned out of 26.2T at 1.15G/s, 4h6m to go
    6.46M repaired, 36.54% done
config:

        NAME                                   STATE     READ WRITE CKSUM
        naspool_01                             DEGRADED     0     0     0
          raidz2-0                             ONLINE       0     0     0
            ata-HGST_HDN726060ALE610_K1G11111  ONLINE       0     0     0
            ata-HGST_HDN726060ALE610_K1G11112  ONLINE       0     0     0
            ata-HGST_HDN726060ALE610_NCG11113  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11114  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11115  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11116  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11117  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11118  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11119  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11120  ONLINE       0     0     0
          raidz2-1                             DEGRADED     0     0     0
            ata-HGST_HDN726060ALE614_NCH11121  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11122  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11123  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11124  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11125  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11126  FAULTED    316   119     0  too many errors  (repairing)
            ata-HGST_HDN726060ALE614_NCH11127  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11128  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11129  ONLINE       0     0     0
            ata-HGST_HDN726060ALE614_NCH11130  ONLINE       0     0     0

errors: No known data errors
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Is it worth using a refurb at this point, or will they send you a new drive? At least a 6TB 7200 RPM drive should rebuild fairly quickly.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
My WD Red drives take that long and they're 8TB at 5400RPM. I figured 6TB drives have fewer platters and, with the higher RPM, would be faster than that.
Anyway, Z2 still provides another drive's worth of redundancy after the bad one, and he should have another copy of the data somewhere.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
I'm going to pull one of the 6TB drives from my workstation and replace the bad drive with it while I wait for the RMA replacement. I'm assuming they'll send me a refurb drive as the replacement; I'll put that in my workstation and use it there from here on out.
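
The swap itself is a single zpool command (a sketch; the faulted ID comes from the status output above, while the spare drive's by-id name is a placeholder):

Code:
# Replace the faulted member with the spare drive, then watch the resilver
sudo zpool replace naspool_01 ata-HGST_HDN726060ALE614_NCH11126 /dev/disk/by-id/<spare-6tb-drive>
sudo zpool status naspool_01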
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
ZFS estimates about 7.5 hours to resilver the replacement drive. I'll be curious to see what it ends up being when it completes.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
For whatever it's worth, the replacement disk resilvered 1.25TB in 9h21m.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
I finally got my replacement drive yesterday, 9/26/2017, from HGST. Here's the timeline of the RMA:

Filed RMA request: August 18th 2017
Shipped drive from east coast to California: September 1st.
Drive received at HGST in CA: September 7th.
HGST notified me my replacement was on its way: September 11th.
Replacement drive actually shipped via UPS: September 21st.
Drive received: September 26th.

I realize I took some time to ship it out, but not counting that time and only counting from when they received it... it took 14 days to process and 5 days to ship to me. I'm glad I decided to resilver my ZFS pool with the spare drive from my workstation. I still need to test the replacement and make sure it's good.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I finally got my replacement drive yesterday, 9/26/2017, from HGST. Here's the timeline of the RMA:

Filed RMA request: August 18th 2017
Shipped drive from east coast to California: September 1st.
Drive received at HGST in CA: September 7th.
HGST notified me my replacement was on its way: September 11th.
Replacement drive actually shipped via UPS: September 21st.
Drive received: September 26th.

I realize I took some time to ship it out, but not counting that time and only counting from when they received it... it took 14 days to process and 5 days to ship to me. I'm glad I decided to resilver my ZFS pool with the spare drive from my workstation. I still need to test the replacement and make sure it's good.

Did you receive a new drive or a refurb?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
On further inspection, it appears the drive is new. I believe the reason the RMA took so long is that the multi-page invoice suggests HGST purchased the replacement from WD and it shipped from Perai, Penang, Malaysia. They listed a price of $252 and also paid the duties and taxes.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
On further inspection, it appears the drive is new. I believe the reason the RMA took so long is that the multi-page invoice suggests HGST purchased the replacement from WD and it shipped from Perai, Penang, Malaysia. They listed a price of $252 and also paid the duties and taxes.

That doesn't seem very efficient but is good for you. :)

It makes me wonder if the helium-filled drives are serviceable at all.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
On further inspection, it appears the drive is new. I believe the reason the RMA took so long is that the multi-page invoice suggests HGST purchased the replacement from WD and it shipped from Perai, Penang, Malaysia. They listed a price of $252 and also paid the duties and taxes.
HGST is just a brand name of WD, so I doubt they bought it from themselves. Since they're importing the drive, they have to declare a value and pay the fees.

If WD has a manufacturing plant in Malaysia, you were probably waiting for them to make the drive. If they have refurbs, they send those out first.
 