Home NAS

snowhiker

Storage Freak Apprentice
Joined
Jul 5, 2007
Messages
1,668
Essentially all my data is on ~30TB RAID6 arrays. I have at least two copies of every file, plus a large number of loose drives that contain a meaningful subset of my media collection AND two sets of tapes.

I've always been curious about people with "huge" media collections. Do you have all those files backed up OFFSITE somewhere in case of fire or theft?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
That's one of the reasons I have an LTO changer. The aforementioned collection of loose drives is generally stored in my office at work.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
Hark! A post! About storage!

Since I retired some of my oldest 3TB drives today, I have a pile of them to play with. So I've started playing with Storage Virtualization products: SnapRAID and FlexRAID. I tested them on a Windows 2008R2 machine running on an i7/920 with 16GB RAM. I have a total of 12 drives and I tested them in groups of six each.

Here's what I can say so far:

FlexRAID: It can do realtime RAID4-like parity or scheduled parity snapshots. It CAN'T do anything like realtime RAID5 or RAID6, but it CAN create an arbitrary number of parity drives for use with snapshots (somehow; I couldn't get that feature to work at all). It does support drive pooling. To date, I've only managed to get the web-based management interface to actually create a pooled storage space one time, and that was using the simplest "wizard" configuration, which included many defaults I don't really like. Expert mode seems to be missing the ability to actually finish disk pool configuration. FlexRAID can operate on drives that already contain data, and a failure of a single disk still leaves all the other disks in the volume with a readable file system.
It costs $60.
Disk performance was entirely comparable to that of any single disk in the array, around 120MB/sec.
The documentation on the site's Wiki is apparently a work in progress and in many cases based on outdated versions of the program, referencing user interfaces that either no longer exist or that are not available in the current public release of the software.

Most alarmingly, users on the forums report difficulties with activating their paid commercial licenses for the software on newer Linux distributions, something the support people suggest should be resolved by installing older versions of Linux. There's also a lot of circular "Don't complain about our documentation, ask on the forum" and "The thing you're complaining about on our forum is covered in our (outdated and/or factually incorrect) documentation" back-and-forth.

So far, not so good, especially for a commercial product.

SnapRAID: Free! Open Source! Blessedly simple plaintext config! It's downright easy to add more drives. No realtime RAID at all: it relies on the Windows Task Scheduler to make and store parity snapshots. It also doesn't seem to have any tunable parameters. It only kinda-sorta supports creating a pooled storage space, and that pool is not updated in real time, only when a synchronization occurs.
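To give a sense of how simple the config is, the whole thing amounts to a handful of lines like the following. The drive letters here are placeholders, and directive names can differ slightly between SnapRAID versions, so treat this as a sketch rather than my exact file:
Code:
# parity file lives on its own dedicated drive
parity F:\snapraid.parity
# content files index the array; keep a copy on more than one disk
content C:\snapraid\snapraid.content
content G:\snapraid.content
# each data disk is just an ordinary NTFS volume
disk d1 G:\
disk d2 H:\
disk d3 I:\
# optional pooled view, built out of symlinks at sync time
pool C:\pool\
exclude \$RECYCLE.BIN\
exclude \System Volume Information\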

The huge upside is that I had this running in about five minutes. Everything worked exactly as the documentation indicated. I stuck the drives in a volume mount point and shared that, and my Windows machines were fine with it, though Samba clients saw the mount points effectively as 128-byte empty files; I'm guessing Samba clients don't handle Windows symlinks correctly yet.
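For anyone who hasn't played with mount points: it's just mountvol pointing a volume at an empty NTFS folder instead of a drive letter, along these lines (the GUID below is a placeholder you'd copy from mountvol's own listing):
Code:
REM list volume GUIDs and where they're currently mounted
mountvol
REM mount a data disk into an empty folder instead of a drive letter
mkdir C:\array\d1
mountvol C:\array\d1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\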

There's a lot to be said for SnapRAID. For one thing, I can write to several disks independently and at full speed; each drive is effectively formatted with a normal filesystem. I can also add disks at will as long as my parity drive(s) are at least as big as the biggest single data disk in the volume; member drives can be size-mismatched. It can treat an external drive, iSCSI target or a network drive as part of a storage volume. I don't know for sure, but I suspect I could make another sort of RAID volume work with it as well, since it really does not seem to care what disk space it's using so long as it's accessible by the OS. I kind of want to check if I can set up a softRAID 1 and include it in my config. Since it's only reading and writing to drives that are actively being used, it's actually possible to let power management spin down drives that aren't needed.

The problems: It's not really RAID. It's more like a backup system. Disk pooling (presenting all of the data as if it were contiguous) is particularly lame in that it's dependent on frequency of synchronization rather than being updated in real time. My test drives are all 3TB, but that means that the largest single directory (thanks to mount points) I could have is also 3TB. That's certainly not insurmountable, but if I look at my full data collection I know that I have a couple top-level directories that exceed 10TB and might be annoying to balance across multiple drives. SnapRAID does not support anything more than two parity drives.
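To be clear about what "frequency of synchronization" means in practice: the snapshot cadence is simply whatever you schedule, so on Windows this boils down to a Task Scheduler entry along these lines (the install path is a placeholder):
Code:
schtasks /Create /TN "SnapRAID nightly sync" /TR "C:\snapraid\snapraid.exe sync" /SC DAILY /ST 03:00 /RU SYSTEM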

I can see SnapRAID as an incredibly simple, efficient system for maintaining storage integrity and I think it's something I am going to want to play with some more. I'm much less impressed with FlexRAID.
 
Last edited by a moderator:

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,908
Location
Somewhere in time.
Methinks the second big para in the middle of your post starting with "FlexRAID:" should have started with "SnapRAID" ?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
My main NAS will probably have Openfiler as its OS. I have yet to install and test Openfiler (will do next week), but I think there's a feature that will let me synchronize files with another network location (a Synology DS1512+), or am I too optimistic?

SnapRAID could be interesting if my main NAS were going to run a Windows-flavored OS, but it currently doesn't look like it will.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
On the theme of storage-related posts: I got my backup system significantly improved this weekend and I'm very happy about it. I took my wife's old PC, e7200 Wolfdale, put in more RAM (up to 4 GB now) and an SSD, installed Linux Mint on it, installed ZFS on Linux to create a RAIDZ1 array from three 2 TB Hitachi 7K2000 drives, then installed CrashPlan and did a local backup of my ~2TB of data over my home network. Took about two days. ZFS was using about 30% of each Wolfdale core the whole time. Then I moved the PC to my parents' house so my Dad can use it as a basic PC. Installed CrashPlan on my Mom's PC and my Dad's PC. Fought with the router to open up the required ports, and voilà: my parents' two Windows PCs and Linux machine now back up to my computer (which uses a mapped RAIDZ1 array in a FreeNAS box), and my computer backs up to their Linux machine onto its RAIDZ1 array. Yay! I now have a backup going to a local RAIDZ1 array, an offsite RAIDZ1 array, and one to the CrashPlan cloud.
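For anyone curious, the ZFS side of that is basically a one-liner. The pool name and device paths below are illustrative rather than exactly what I typed, and the CrashPlan port is the default peer-to-peer one (worth double-checking in the app):
Code:
# build a RAIDZ1 pool from the three 2TB Hitachis (by-id names are examples)
sudo zpool create backup raidz1 \
    /dev/disk/by-id/ata-Hitachi_7K2000_EXAMPLE1 \
    /dev/disk/by-id/ata-Hitachi_7K2000_EXAMPLE2 \
    /dev/disk/by-id/ata-Hitachi_7K2000_EXAMPLE3
sudo zpool status backup
# on the router, forward CrashPlan's computer-to-computer port (TCP 4242 by default)
# to the Linux box so the offsite peers can reach it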
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
SnapRAID could be interesting if my main NAS were going to run a Windows-flavored OS, but it currently doesn't look like it will.

SnapRAID also has a Linux flavor. I suspect it could be made to run on top of Openfiler if for some reason you felt like doing that (perhaps to mix LVM/RAIDz arrays and single disks?)

I still have my FreeNAS system, but I've got it replicating the content of shared directories on a Windows file server right now.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
On the theme of storage-related posts: I got my backup system significantly improved this weekend and I'm very happy about it. I took my wife's old PC, e7200 Wolfdale, put in more RAM (up to 4 GB now)

Based on my experience with FreeNAS, that extra RAM probably made more of a difference than anything else. There's a huge jump if you can get to 8GB as well.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Merc, totally agree with RAM and ZFS, whether it's ZFSonLinux or native ZFS on FreeNAS (FreeBSD). Because the old PC was going to be offsite, I figured the limiting factor either way would be my upload speeds, and indeed, I'm hitting those at 2-3 Mbps. I did end up taking it from 2*1 GB sticks to 4*1 GB sticks, primarily to make it snappier for use as a regular PC, but I didn't bother throwing out the old sticks and going to 4*2 GB.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
The only way you can make sure your backup plan is good is to simulate a failure and test whether it restores your files as it should. I know that backing up files offsite isn't good enough for my companies, because even with our 100Mbps symmetrical connection, it would take too much time. I have a remote backup, but it's only a last resort. My main backup is another identical NAS which sits next to the one in production. For some services, I cannot have an outage of more than a couple of hours, and that includes both the time since the last backup and the MTTR (mean time to recovery).

Of course, in your case with home data, two days is quite ok and it looks like you have all your bases covered.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
I've done test restores multiple times with CrashPlan, including from the CrashPlan cloud service and, more recently, from my offsite machine, and as long as the media is accessible, it works. The part that creates problems is that if I try to import a ZFS volume from a Linux machine (ZFSonLinux) into a FreeNAS (FreeBSD) machine, I can't; the pool versions seem incompatible. I haven't tested importing a ZFS volume from a Linux machine (ZFSonLinux) into another Linux machine (ZFSonLinux), but I assume it would work. Right now I have no easy way of testing that. And I haven't tried exporting a ZFS volume from FreeNAS to Linux.
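For what it's worth, the export/import dance itself is simple; the failure is at import time. A rough sketch, assuming the mismatch really is the pool version / feature flags (pool and device names are placeholders):
Code:
# on the ZFSonLinux box: cleanly export the pool
sudo zpool export tank
# on the FreeNAS/FreeBSD box: see what it makes of the pool before importing
zpool import
# if it refuses, the pool was probably created with a newer on-disk version (or
# feature flags) than that platform supports; the usual workaround for a pool
# meant to move between platforms is to pin an older version at creation time:
zpool create -o version=28 tank raidz1 da1 da2 da3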
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
If my house burns down and I need to restore all 2+ TB, I think the fastest way is to put the Linux offsite machine on the same network as my other PCs.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
My current backup plan at work includes 3 Synology boxes. Two right next to each other on the same switch, and one 1km away on the other end of a 1Gbps fiber link. The production box syncs in near real-time with the one next to it, and that batches out to the remote a couple times a day. That backup includes snapshot capabilities.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
Quick update: Yes, you can include RAID1 mirror sets in SnapRAID snapshots. I used a couple of 32GB SSDs in a Windows softRAID 1, made a parity snapshot, pulled both disks and restored the data to a 64GB thumb drive.
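The restore itself was nothing exotic: after pointing that disk's entry in the config at the thumb drive instead of the missing mirror, it's just a fix pass and a check (the disk name is whatever your config calls it):
Code:
# snapraid.conf now maps disk "d1" to the replacement 64GB thumb drive
snapraid fix
snapraid check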
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
It's kind of a stupid configuration, but I was also able to create a SnapRAID set with 12 3TB drives configured as three 4-drive 9TB RAID5 arrays, which nets 18TB of multiply-redundant storage (one array's worth of capacity going to SnapRAID parity).
Basically, SnapRAID allows for fully asynchronous levels of data redundancy. I can set aside a couple of drives in RAID1 for small files that change often and keep them in the same parity set with single large drives, or with a volume set that has its own redundancy and less emphasis on I/O.

And since it's not real time, I can pull the SnapRAID drives that store the parity info when I'm not using them without breaking an array.
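In config terms that mixed membership is nothing special; SnapRAID just sees mount points, so something along these lines (drive letters and names are placeholders) is perfectly legal:
Code:
# parity on a drive I can physically pull between syncs
parity E:\snapraid.parity
content C:\snapraid\snapraid.content
# members can be whatever the OS can mount: RAID5 volumes, a RAID1 pair, bare disks
disk raid5a D:\
disk raid5b F:\
disk mirror1 M:\
disk single1 S:\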

There's a lot to like about this configuration.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
I do not know what sorcery is involved in their construction, but WD's Sharespace NAS products work perfectly well in RAID0, 1 or 5 using WD Green drives. Or at least as well as anything that has WD hard drives in it can be said to work. The product documentation even touts use of Green drives as a feature.
So at least it's POSSIBLE to make them not-retarded.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
We have a Synology DS1512+ filled with WD 2TB Caviar Green drives and it's been working flawlessly for over a year. Using Caviar Green wasn't my choice (I wasn't working there when the purchase was made), but I can confirm that the device works without issue. RAID5 is used.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
That's funny 'cause I've tried my collection of -EADs drives in Synology and Drobo boxes and found that I usually get a dropped member disk from a RAID5 within the first 48 hours after initialization.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
Since I just had to do both of these things...

Time to regenerate a failed 3TB drive in a 5-drive 12TB Server 2012 Software RAID5 Storage space: 3 hours 44 minutes.
Time to regenerate a failed 3TB drive in a 5-drive 12TB Synology NAS: 57 hours.

The Synology array was only about 22% full. The Server 2012 array was 83% full.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
Since I just had to do both of these things...

Time to regenerate a failed 3TB drive in a 5-drive 12TB Server 2012 Software RAID5 Storage space: 3 hours 44 minutes.
Time to regenerate a failed 3TB drive in a 5-drive 12TB Synology NAS: 57 hours.

The Synology array was only about 22% full. The Server 2012 array was 83% full.

Wow. I should probably test my 4TB drives in a 10-drive RAID6 on Synology. That might take a long time ;)
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Time to regenerate a failed 3TB drive in a 5-drive 12TB Server 2012 Software RAID5 Storage space: 3 hours 44 minutes.
Time to regenerate a failed 3TB drive in a 5-drive 12TB Synology NAS: 57 hours.
Processor in a typical Server 2012 server: multi-core Intel Xeon.
Processor in a typical Synology NAS: ARM-kin or Atom.

Those array rebuilding times aren't surprising.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
Apparently that enclosure would pretty much have to be mated with an Asrock C2750D4I in order to fully utilize both the chassis and the motherboard. I'm just going to have to find an excuse to try that.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Apparently that enclosure would pretty much have to be mated with an Asrock C2750D4I in order to fully utilize both the chassis and the motherboard.
So with the motherboard, the enclosure and an SFX power supply, it would be like a $300-cheaper version of a Synology DS1813+ with slightly higher processing power, but without mature storage software on it? I could see it for home use, but not in a business environment.
 
Last edited:

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
So with the motherboard, the enclosure and an SFX power supply, it would be like a $300-cheaper version of a Synology DS1813+ with slightly higher processing power, but without mature storage software on it? I could see it for home use, but not in a business environment.

I suppose that would depend on your needs. FreeBSD or OpenSolaris with two four-disk RAIDZs and a pair of caching SSDs each? That's some pretty serious hardware.
A hypervisor with one dedicated core and each 3.5" bay as an independent disk per guest?
Storage Spaces on a Windows Server?

Clearly, it's not meant to be enterprise hardware, given the crummy PSU.

Mostly I'm thinking about how many crappy older Xeons a box like that could replace, even if a Bay Trail Atom isn't exactly a high-speed CPU in the first place.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
I have enough spare drives that I think I'm going to experiment with replicating my data set off-site. I tried BTsync with a 50GB collection of .jpg files and found that it was able to build a complete off-site replicated filesystem in a day and a half, and to move a 15GB disk image in just a couple of hours. That makes me think it would be amazing for duplicating the giant collection of movies and TV shows, especially if the remote location were pre-seeded with a decent set of source data.
Hopefully I'll have some findings by next week.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
What OS are you using with BTSync? Can it be used headless under Linux?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
BTsync seems to work with whatever, at least on Windows and *nix. I use it to move database snapshots from customer sites in extremely short order. The one complaint I have is that the monitor application on Windows is tied to the user account running it, so if you're logged in under a different username, you don't get to see the monitor. It doesn't seem to be a huge RAM hog, but that said, I'm "only" moving around maybe 200GB of stuff total at the moment. I hope to have something around 15TB moving shortly.
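On the headless question: the Linux build runs without any GUI and takes a JSON config passed with --config. The keys below are from memory of the 1.x builds (btsync --dump-sample-config prints the current template for your version), and the folder secret and paths are placeholders:
Code:
{
  "device_name": "offsite-sync",
  "listening_port": 0,
  "storage_path": "/var/lib/btsync",
  "check_for_updates": false,
  "use_upnp": false,
  "webui": { "listen": "127.0.0.1:8888" },
  "shared_folders": [
    { "secret": "PLACEHOLDER_FOLDER_SECRET", "dir": "/srv/db-snapshots" }
  ]
}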
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
I was thinking of switching from Dropbox to this to move database snapshots. The Linux server has no GUI components, which is why I was asking whether it worked headless. Hopefully it doesn't need much memory to run either.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
RAM usage looks to be tied to the number of files being synchronized. I synced about 100GB of fairly large files overnight (in addition to my normal activity) and RAM usage remained pretty low, around 200MB. I added in just 5GB of pictures and it increased to 360MB. I have resources I can devote to this, and in point of fact I don't even REALLY care if I have to dedicate a 4GB DIMM to it on each end, though I suspect I'd have an easier time if I pre-seeded the filesystem on the remote machine with a more complete copy of the data beforehand.
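The pre-seeding doesn't need anything fancy, either: while the remote box is still on the LAN (or with a drive shipped over and copied locally), a plain rsync gets the bulk of the data in place so BTsync only has to reconcile the differences. Paths and hostname below are placeholders:
Code:
# bulk copy while both machines are on the same network; re-run as needed,
# since subsequent passes only move files that changed
rsync -avP /tank/media/ remotebox:/tank/media/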
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
I finally put together the parts for my next NAS and I'm working through some configuration and performance tests. I decided to go with ZFS on Linux so that I have a playground to learn on and to increase my skills in this area. This was not intended to be a budget build so I'm sure you'll have some head-scratching moments when you look over the parts list.

CPU/MB/RAM
I decided to go with a full socket 1150 motherboard vs some of the Intel Atom setups that are popular. I found a combo deal on Newegg which combined the Supermicro X10SL7-F-O and an Intel Xeon E3-1270V3 (Haswell). I added 16GB (2 x 8GB) of ECC RAM and I plan to increase that to 32GB shortly, which is why I listed 32GB in the build list below. I decided to go with a bit more CPU power than I originally planned because I wanted to be able to give enough to ZFS and still have some extra for other work in the future. I'll be running some kind of Samba/NFS/CIFS to move data to other systems in my house. I also plan to run some media server components once I've digitized movies onto this NAS.

The Supermicro board sold this config for me. I did a lot of reading and research on popular configs, and this board won me over. It comes with a built-in LSI 2308 adapter, giving me 8 SAS 6Gb/s ports plus the 6 ports on the motherboard. The LSI adapter runs in IT mode, so I don't have to configure all the drives as 8 x RAID 0 to get the OS to see them; basically they're all just JBOD. All the drives were seen right away by the OS and it was painless. When I priced out other configurations they were more expensive and more quirky than this setup. The X10SL7-F-O also comes with full IPMI 2.0 BMC for easy remote management (and it works great). There are 3 x 1Gb NICs out the back, one of which is dedicated to the BMC. I can eventually team the two non-BMC NICs with my layer 2 switch to play with higher amounts of concurrency.
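On the serving side (the Samba/NFS plan above), getting the pool visible to the Windows boxes shouldn't take more than a share stanza along these lines; the share name and dataset path are placeholders, and "doug" is just my local user:
Code:
# /etc/samba/smb.conf (excerpt)
[media]
    path = /nfspool/media
    browseable = yes
    read only = no
    valid users = doug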

Case
After a long search for the right case for me, I chose the Rosewill L4411. It's an interesting mix of good space, a decent quantity of hotswap bays, good cooling, and a relatively low price. The case comes with 3 x 120mm fans, 2 x 80mm fans, 12 x SATA cables, a front dust filter, and a metal locking front panel. It's very roomy inside and can even be rack-mounted if needed. With everything installed and running, the case is very quiet; I'd have no issue putting this on my desk next to me if I wanted. The drives seem to be cooled very well by the 3 x 120mm fans. I posted temps a bit further down in my post. The one negative I've seen with this case is that the hotswap bays won't recognize the Samsung 850 Pro SSDs. This isn't a huge issue because I wasn't originally planning to mount them in the bays, but it was a surprise nonetheless; all the info I read said the hotswap bays were simple pass-through. The SSDs are free-floating at the moment, but I plan to mount them with sticky velcro for simplicity.

HDDs
I chose to go with 8 x HGST 4TB NAS drives for this build. I've had good luck with these in other builds and I've seen other decent reviews of them. I may decide to max out the bays on my case and add 4 more to the config down the road. If I decide to grow this larger than 12 drives, I'll look into replacing the case. That will also mean adding another adapter into the x8 PCIe 3.0 slot, which gives me further expansion if needed.

SSDs
I imagine several of you will question why I added two higher-end Samsung 850 Pro SSDs to a NAS device. I did this to experiment with various things. The 128GB SSD is being used as a boot drive for now and I'll likely use it to stage other media-related work. The 256GB SSD is intended for experimenting with a ZFS SLOG and also an L2ARC configuration. It's way too big for a SLOG, but the L2ARC could take advantage of the size. Both are likely not needed in my home environment, but I'm using them to learn and experiment. I chose the Samsung 850 Pro because of the increased durability and 10-year warranty. Given the nature of L2ARC and SLOG, it will possibly have more I/O than normal going through it, so I decided to go with a more durable drive.
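When I get around to the SLOG/L2ARC experiment, wiring the 256GB drive in should just be a couple of zpool commands; the by-id partition paths below are placeholders for however I end up partitioning it:
Code:
# a small partition as a SLOG (separate ZFS intent log)
sudo zpool add nfspool log /dev/disk/by-id/ata-Samsung_SSD_850_PRO_256GB_SERIAL-part1
# the remainder as L2ARC read cache
sudo zpool add nfspool cache /dev/disk/by-id/ata-Samsung_SSD_850_PRO_256GB_SERIAL-part2
sudo zpool status nfspool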

Power supply
I went with a Seasonic 650W 80 Plus Gold unit for this build. This will give me decent efficiency and some room to grow. It's a bit overkill; I know.

OS
I'm going to play around with the OS but for now I chose Xubuntu 14.04.1 LTS 64-bit since I'm familiar with it. It may not be the best option but I'd like to experiment and find out for myself before putting this into full production in my house.

ZFS config
raidz2
zfs_arc_max=8589934592
Code:
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
nfspool    29T  45.1G  29.0T     0%  1.00x  ONLINE  -
doug@doug-X10SL7-F:~$ sudo zpool status
  pool: nfspool
 state: ONLINE
  scan: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        nfspool                     ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            wwn-0x5000cca24ccd25c9  ONLINE       0     0     0
            wwn-0x5000cca24ccc6404  ONLINE       0     0     0
            wwn-0x5000cca24cc6129e  ONLINE       0     0     0
            wwn-0x5000cca24ccd0a9b  ONLINE       0     0     0
            wwn-0x5000cca24ccd5e18  ONLINE       0     0     0
            wwn-0x5000cca24cccb387  ONLINE       0     0     0
            wwn-0x5000cca24cccb39d  ONLINE       0     0     0
            wwn-0x5000cca24cccb370  ONLINE       0     0     0
(note, I haven't added the SLOG or L2ARC yet)
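For completeness, the pool above is a single 8-drive raidz2 vdev and the ARC cap is set as a kernel module option; roughly how that looks (the zpool create line is a reconstruction using the wwn links shown in the status output):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (takes effect on module load)
options zfs zfs_arc_max=8589934592

# pool creation: one raidz2 vdev across the eight drives
sudo zpool create nfspool raidz2 \
    /dev/disk/by-id/wwn-0x5000cca24ccd25c9 /dev/disk/by-id/wwn-0x5000cca24ccc6404 \
    /dev/disk/by-id/wwn-0x5000cca24cc6129e /dev/disk/by-id/wwn-0x5000cca24ccd0a9b \
    /dev/disk/by-id/wwn-0x5000cca24ccd5e18 /dev/disk/by-id/wwn-0x5000cca24cccb387 \
    /dev/disk/by-id/wwn-0x5000cca24cccb39d /dev/disk/by-id/wwn-0x5000cca24cccb370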


Performance
I haven't had time yet to go through it in thorough detail, but when I configured all 8 drives in a basic zpool and did a very simplistic Linux "dd" write test (all zeros), I topped out at 1.2GB/sec in writes on a 60GB file (large enough to surpass page caching). Reads topped out at 1.3GB/sec on the same file using the same "dd" test. I know this is far from real-world, but I wanted to see what it was capable of in the most optimal run.
Code:
sudo time sh -c "dd if=/dev/zero of=/dctestpool/outfile bs=64k count=900000"
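The read side of the test was just the mirror image of that, with a cache drop in between so as little as possible comes back out of RAM (the 60GB file size mostly takes care of that on its own):
Code:
sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
sudo time sh -c "dd if=/dctestpool/outfile of=/dev/null bs=64k"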

I have iozone running more purposeful performance benchmarks. I'll post those details once I have them.



Example drive temperatures (during the iozone benchmark and 18+ hours of uptime)
Code:
doug@doug-X10SL7-F:~$ sudo hddtemp /dev/sd[cdefghij]
/dev/sdc: HGST HDN724040ALE640: 33°C
/dev/sdd: HGST HDN724040ALE640: 33°C
/dev/sde: HGST HDN724040ALE640: 34°C
/dev/sdf: HGST HDN724040ALE640: 35°C
/dev/sdg: HGST HDN724040ALE640: 33°C
/dev/sdh: HGST HDN724040ALE640: 34°C
/dev/sdi: HGST HDN724040ALE640: 34°C
/dev/sdj: HGST HDN724040ALE640: 33°C

Here is the album of images on imgur showing the build.
 