NAS Drive

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
The ZFS on Linux project is more mature and stable than you may recognize and is perfectly suitable for Linux. Installing it under Ubuntu Server 16.04 has never been easier. We have many years of enterprise products built on ZFS and CentOS, and I work directly with it almost every day. You can also create thin-provisioned ZFS pools and carve out zvols on top of which you can put any filesystem if there are atypical requirements.
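A minimal sketch of that zvol workflow (assuming an existing pool named tank; the names and sizes are illustrative):

Code:
# Create a thin-provisioned (sparse) 100G zvol in the pool "tank"
zfs create -s -V 100G tank/vol1
# The zvol appears as a block device, so any filesystem can go on top
mkfs.ext4 /dev/zvol/tank/vol1
mkdir -p /mnt/vol1
mount /dev/zvol/tank/vol1 /mnt/vol1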
I don't question the stability of ZFS. Everybody but Oracle is working from the same code. On the other hand, ZFS does not run natively on Linux because you can't include it in the kernel.

As I understand it, Ubuntu implements ZFS using FUSE. So, it runs in user space, which is generally non-awesome.

If you're running a file server and specifically choosing ZFS, I am baffled why you would choose an operating system that doesn't use the file system natively.
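(For what it's worth, one way to check which setup you actually have: a FUSE-based ZFS shows a fuse filesystem type in the mount table, while the ZoL kernel module shows type zfs. The output lines below are illustrative.)

Code:
mount | grep zfs
# zfs-fuse:           tank on /tank type fuse.zfs (rw,...)
# ZoL kernel module:  tank on /tank type zfs (rw,xattr,noacl)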
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
I don't question the stability of ZFS. Everybody but Oracle is working from the same code. On the other hand, ZFS does not run natively on Linux because you can't include it in the kernel.

As I understand it, Ubuntu implements ZFS using FUSE. So, it runs in user space, which is generally non-awesome.

If you're running a file server and specifically choosing ZFS, I am baffled why you would choose an operating system that doesn't use the file system natively.

I'm not aware of Ubuntu implementing ZFS using FUSE; it's a kernel module. FUSE has been deprecated, but you can still use it if you desire for some reason. My Ubuntu Server 16 is using the kernel module for ZFS, and that's how it's installed by default. I'm not using ZFS under FUSE, and anyone else who installs using the basic docs will also get ZFS as a kernel module.

What reason is there to have ZFS natively as part of the OS? I'm not understanding your reasoning for needing ZFS to be native to the OS. The ZoL project allows ZFS to function and perform perfectly well as a kernel module on any OS they support, even when it's not native.
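On 16.04 the stock path is a single package that pulls in the kernel module. A minimal sketch, assuming the standard Ubuntu repositories:

Code:
sudo apt install zfsutils-linux
# Confirm it's loaded as a kernel module, not running as a userspace daemon
lsmod | grep zfs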
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm not aware of Ubuntu implementing ZFS using FUSE; it's a kernel module. FUSE has been deprecated, but you can still use it if you desire for some reason. My Ubuntu Server 16 is using the kernel module for ZFS, and that's how it's installed by default. I'm not using ZFS under FUSE, and anyone else who installs using the basic docs will also get ZFS as a kernel module.

What reason is there to have ZFS natively as part of the OS? I'm not understanding your reasoning for needing ZFS to be native to the OS. The ZoL project allows ZFS to function and perform perfectly well as a kernel module on any OS they support, even when it's not native.
After further research, it appears that Canonical is simply violating the license for Linux. It seems like the GPL folks are up in arms about it because this could make it possible to combine GPL and non-GPL code, i.e., proprietary code that's not open. It totally defeats the point of the license.

The problem isn't ZoL, it's where the file system runs. If it's run in user space, it's less secure. Basic Unix security model stuff.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
After further research, it appears that Canonical is simply violating the license for Linux. It seems like the GPL folks are up in arms about it because this could make it possible to combine GPL and non-GPL code, i.e., proprietary code that's not open. It totally defeats the point of the license.

The problem isn't ZoL, it's where the file system runs. If it's run in user space, it's less secure. Basic Unix security model stuff.

I haven't looked into the licensing but that does seem sketchy if they're doing that.

The ZoL components running in Ubuntu (or any distro) are not running in user space. It is not FUSE. It's a kernel module.

Code:
[root@cds0 ~]# lsmod | grep zfs
zfs                  1283041  10
zcommon                47249  1 zfs
znvpair                80541  2 zfs,zcommon
zavl                    6925  1 zfs
zunicode              323159  1 zfs
spl                   269615  5 zfs,zcommon,znvpair,zavl,zunicode
[root@cds0 ~]# modinfo zfs
filename:       /lib/modules/2.6.32-696.1.1.el6.x86_64/extra/zfs/zfs/zfs.ko
version:        0.6.3-81_g6f8ae72
license:        CDDL
author:         Sun Microsystems/Oracle, Lawrence Livermore National Laboratory
description:    ZFS
srcversion:     4D3F6D1F2C4996062D8A2E8
depends:        spl,znvpair,zcommon,zunicode,zavl
vermagic:       2.6.32-696.1.1.el6.x86_64 SMP mod_unload modversions
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
There's nothing wrong with non-GPL kernel modules AFAIK.
I'm not a lawyer, and I don't play one on TV...

But, as I understand it, anyone can compile whatever they want into their own Linux kernel; you just can't modify the GPL code without contributing the changes back, and you can't distribute the combined result.
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
I'm not a lawyer, and I don't play one on TV...

But, as I understand it, anyone can compile whatever they want into their own Linux kernel; you just can't modify the GPL code without contributing the changes back, and you can't distribute the combined result.

A loadable kernel module isn't compiled into the kernel, based on my understanding.
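A loadable module ships as a separate .ko file that's loaded at runtime, while built-in code is linked into the kernel image itself. A quick way to see the difference (a sketch; the module path matches the modinfo output posted above):

Code:
# Loadable: a separate object under /lib/modules, loaded on demand
modprobe zfs
ls /lib/modules/$(uname -r)/extra/zfs/
# Built-in code is listed in modules.builtin instead; zfs won't appear there
grep zfs /lib/modules/$(uname -r)/modules.builtin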
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
A loadable kernel module isn't compiled into the kernel, based on my understanding.
This is beyond my pay grade, but Debian's workaround for the licensing issue is to not include ZFS in precompiled kernels and to distribute ZoL only as source code.

http://blog.halon.org.uk/2016/01/on-zfs-in-debian/

If this approach is legitimate, I'm surprised that the compile-on-install distros, like Gentoo, haven't already done something like this.
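Under that model the module is compiled on the user's own machine at install time via DKMS. Something like this, assuming Debian with the contrib repository enabled:

Code:
# zfs-dkms ships only the CDDL source; DKMS builds the module locally
sudo apt install zfs-dkms zfsutils-linux
# DKMS rebuilds the module automatically whenever a new kernel is installed
sudo modprobe zfs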
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
This is beyond my pay grade, but Debian's workaround for the licensing issue is to not include ZFS in precompiled kernels and to distribute ZoL only as source code.

http://blog.halon.org.uk/2016/01/on-zfs-in-debian/

If this approach is legitimate, I'm surprised that the compile-on-install distros, like Gentoo, haven't already done something like this.

I see this as no different from providing Nvidia drivers as DKMS modules. Nvidia drivers are proprietary.
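Both end up managed the same way; dkms status lists whichever out-of-tree modules are built for the running kernel (the output below is illustrative, with made-up versions):

Code:
$ dkms status
nvidia, 375.66, 4.4.0-83-generic, x86_64: installed
zfs, 0.6.5.6, 4.4.0-83-generic, x86_64: installed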
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I recall in days of yore there were dire warnings about mismatched drives in the RAID. Does it matter much nowadays if a new drive of another model/RPM is added to a NAS so long as they are all NAS/enterprise grade drives?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
I prefer to keep them the same but I don't think it matters as much if you're a single consumer/user of the storage array. There is the possibility that you'll be limited to the performance of the slowest drive.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I prefer to keep them the same but I don't think it matters as much if you're a single consumer/user of the storage array. There is the possibility that you'll be limited to the performance of the slowest drive.

I started with the Reds and was thinking about adding the Golds or similar Seagate 7200 RPM drives.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Go for helium-filled hard drives. Every report I've read underlines their superior reliability. HGST has exemplary failure numbers, but the Seagate helium-filled drives are supposed to be quite good too. I've read nothing so far regarding the reliability of the WD Gold.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
A year ago the Reds were the only reasonable helium 8TB drives for my NAS (6 drives in RAID 6). At the time the HGST He8 did not seem like a good option, as the 10TB Enterprise Seagates were newer technology, but those four are being used as primary drives at least until January. From what I could see, there are no 8TB helium Seagate drives yet. :( I could just buy a couple more Reds, but figured that a couple of 7200 RPM drives would have some future life. I have no idea why the Gold is cheaper than the Red Pro, but it has the higher durability/reliability specs of the HGST and Seagate enterprise drives.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Go for helium-filled hard drives. Every report I've read underlines their superior reliability. HGST has exemplary failure numbers, but the Seagate helium-filled drives are supposed to be quite good too. I've read nothing so far regarding the reliability of the WD Gold.
They all seem rather expensive. Like $400+ for a 10TB Enterprise grade one.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
The three last year were $1,700. I'm only buying two 8TB drives to fill the NAS now.

I'm going to buy a good new NAS and put the four 10TB drives and a couple more in it next year.
That will be $3K for the Synology without drives.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm clearly the odd man out in getting more spindles of consumer-grade drives, and just expecting them to fail on the regular....
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
I'm similar to you. I buy consumer grade drives and just expect they will fail.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Is there a 10TB consumer grade drive that isn't shingled?

That would be the Barracuda Pro. All the 10TB Seagate drives from the basic NAS model (cheapest of all) to the Enterprise are within a fairly small price range.
I really don't understand the pricing structure, but the drives probably have many common parts.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
That would be the Barracuda Pro. All the 10TB Seagate drives from the basic NAS model (cheapest of all) to the Enterprise are within a fairly small price range.
I really don't understand the pricing structure, but the drives probably have many common parts.
I've seen the SAS Enterprise version for $420 too. I don't see why someone would try to save $20 and get a significantly lower MTBF and a slightly slower drive. So they give you data recovery. Big deal... I'm sure that doesn't help someone using them in RAID. It's not like you're paying double to go Enterprise.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Just ordered a stack of the 10TB Seagate Enterprise drives. We'll see how they do.

Mine are still fine after nearly a year. I would think the current production would have worked out any bugs if there were any.
Are they for work or your personal NAS?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Paid $406 each for 4x ST10000NM0016 - still waiting on one. Will be going into a Synology DS1517+ along with 16GB of RAM, X540T2, and a 1.2TB S3500. This will be to play with iSCSI hosting of ESXi VMs for vMotion and possibly HA along with normal fileserver duties.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Paid $406 each for 4x ST10000NM0016 - still waiting on one. Will be going into a Synology DS1517+ along with 16GB of RAM, X540T2, and a 1.2TB S3500. This will be to play with iSCSI hosting of ESXi VMs for vMotion and possibly HA along with normal fileserver duties.

I'll be interested in your experience with the drives in the DS1517+.
IIRC two SSDs are needed for read/write caching, so you would only be caching reads with the one SSD. I'm curious if it helps much considering the CPU used.
The M.2 adapter and the X540T2 cannot both be used, since the DS1517+ has only one PCIe slot.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
I'll be interested in your experience with the drives in the DS1517+.
IIRC two SSDs are needed for read/write caching, so you would only be caching reads with the one SSD. I'm curious if it helps much considering the CPU used.
The M.2 adapter and the X540T2 cannot both be used, since the DS1517+ has only one PCIe slot.

Not the fastest, but good for testing. The smallest unit that ticks all the feature boxes I was looking for. I went with 4x 10TB to fit a normal SATA SSD, because the NIC and the M.2 adapter couldn't fit at the same time. Shame about that.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
When it is 40TB of raw storage ;)

My current 8-drive array at home has less capacity, even after redundancy is subtracted out.
Lightweight!

What redundancy are you using this time around? You're only after 20TB?

I was thinking of something like 10 of them in RAID-6, as an array upgrade to my server (though I don't need more space). I'd move the current 10x6TB RAID-6 array into the still to be assembled "new"/old mishmash backup server, but that would be some time in the future when the drives are a little cheaper and I've got more disposable income.

I've got all the parts but the new drives: the motherboard from the old server, a Q9550S CPU from eBay, an extra set of the same SAS RAID card and SAS expander that are in the main server, RAM from the old server, a 10-gig Ethernet card, even a used 4U 20-bay Norco enclosure. I suppose I should put it together with the currently unused old 8x2TB drives and make sure it all works and plays nice first.
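For reference, the RAID-6 capacity math behind those numbers (two drives' worth of parity, regardless of array size):

Code:
usable = (N - 2) x drive size
10 x  6TB RAID-6 -> 8 x  6TB = 48TB usable
10 x 10TB RAID-6 -> 8 x 10TB = 80TB usable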
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Yup, just 20TB. Mostly running the storage for the 3 VMs that will be on a pair of ESXi machines using all the redundancy options, some basic file server stuff, and storage for the security cameras and call recording package. Only 5 concurrent users at that location, but they make ~$11M/year so some significant expense is worthwhile to keep them operating.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Do you expect the DS1517+ to be much better than the QNAP TS-831X?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Not really, but at this point I am familiar enough with Synology products that going that route will simplify the deployment process and allow me to get to testing more quickly. The TS-831X has more bays, but the feature set seems similar.
 

Clocker

Storage? I am Storage!
Joined
Jan 14, 2002
Messages
3,554
Location
USA
I bought two and they are working great. The instructions for removing the drives from the enclosures make it super easy. Done in just a couple minutes.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I bought two and they are working great. The instructions for removing the drives from the enclosures make it super easy. Done in just a couple minutes.

Is that actually a RED drive or a white label drive without the NASware firmware?
 