NAS Drive

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
It shouldn't, but I've only actually done it with the Synology. I assume you intended to add a drive to an empty slot and accidentally pulled a full slot?
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
It shouldn't, but I've only actually done it with the Synology. I assume you intended to add a drive to an empty slot and accidentally pulled a full slot?

I thought the power was off. The Synology LEDs are fine, whereas the QNAP 831X LEDs are ridiculously dim. The better models have a display panel that would make the power state more obvious, but LEDs should be reasonably bright. None of them are designed for the bedroom at night. :)
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Were you trying to increase the capacity of the array by swapping in a larger drive?

No, I just bought 94TB in 8TB and 10TB drives in the past two months. I was going to move the drives and add a new one to the empty slot, but thought the power was off.
Since I will not be using the hot spare, a single drive will be used for quick incrementals. I'll add another 8TB drive to the RAID 6 in December or early 2017.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
With RAID 6 of 6 drives the failure rates should be fairly low. I think that could go up to 8 drives and still be fine. Maybe I'll get another drive when the capacity exceeds 80% or if the 8TB drives become cheap next year.

Sure, maybe so, but have you looked at the Unrecoverable Read Error (URE) rate of your 8TB drives? Some drives quote one URE per 10^13 or 10^14 bits read. A URE may be very unlikely, but at least consider it a possibility in your planning, especially during a rebuild after one of your drives fails in the array.
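To put rough numbers on it, here's a back-of-the-envelope sketch in Python; the 10^14 URE rate, drive size, and drive count are assumptions for illustration, not specs from your actual drives:

[code]
# Chance of hitting at least one URE while rebuilding one failed 8TB
# drive in a 6-drive array: the 5 surviving drives are read end to end.
# All figures below are assumed for illustration.
URE_PER_BIT = 1e-14                      # 10^14-class drive
bits_read = 5 * 8e12 * 8                 # 5 drives x 8TB x 8 bits/byte
p_clean = (1 - URE_PER_BIT) ** bits_read
print(f"P(>=1 URE during rebuild) = {1 - p_clean:.0%}")   # ~96%
# With 10^15-class drives the same math gives ~27%. This is the
# single-parity (RAID 5) worry; RAID 6's second parity can absorb
# one URE during a single-disk rebuild.
[/code]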
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Sure, maybe so, but have you looked at the Unrecoverable Read Error (URE) rate of your 8TB drives? Some drives quote one URE per 10^13 or 10^14 bits read. A URE may be very unlikely, but at least consider it a possibility in your planning, especially during a rebuild after one of your drives fails in the array.

I thought the point of using RAID 6 is that there could still be a failure during rebuild and the array would be OK.
I've done the rebuild twice with RAID 5 and now once with RAID 6. No errors have occurred.
The NAS drives are 10^14 and the primary data drives in the computer are 10^15.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
In addition to increasing the copy speed via the 10Gb ports, I obtained the QNAP for the 8 bays. I could only use RAID 5 with the first NAS (5 drives). Now I am using RAID 6 (6 drives).
I will not add a hot spare if it is indeed spinning the whole time. I thought perhaps it could sit in there and the NAS would only apply power as needed, but apparently that's not feasible. I don't have any empty helium drives to test it with.

I thought the point of using RAID 6 is that there could still be a failure during rebuild and the array would be OK.
I've done the rebuild twice with RAID 5 and now once with RAID 6. No errors have occurred.
The NAS drives are 10^14 and the primary data drives in the computer are 10^15.

You're correct, you can have two failures during a rebuild with RAID 6 and be ok. I meant it in relation to your earlier post about running with RAID 5.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I keep getting an error message from the NAS at startup saying there is a fan failure, yet both fans are working. It beeps at us quite annoyingly each time. :(
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Since we badly needed more storage space at the office, I've ordered an HPE StoreEasy 1650 Expanded with an additional D3700 enclosure with SFF HDDs in it for a faster tier. I've also ordered another 1650 Expanded for the DR site.

Both units are far from filled, and we shouldn't have to replace them for probably a decade since we'll be able to add plenty of drives when we're short on space.

It's substantially more expensive than even higher-end Synology NAS units like the RS3617xs+ (the entire setup was a bit under US$30K). But Synology doesn't offer next-business-day (NBD) replacement in case of an in-warranty failure. At least not here, AFAIK.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Since we badly needed more storage space at the office, I've ordered an HPE StoreEasy 1650 Expanded with an additional D3700 enclosure with SFF HDDs in it for a faster tier. I've also ordered another 1650 Expanded for the DR site.

Both units are far from filled, and we shouldn't have to replace them for probably a decade since we'll be able to add plenty of drives when we're short on space.

It's substantially more expensive than even higher-end Synology NAS units like the RS3617xs+ (the entire setup was a bit under US$30K). But Synology doesn't offer next-business-day (NBD) replacement in case of an in-warranty failure. At least not here, AFAIK.

How many GB/sec. and how many gajillion IOPS do you get? I bet it's loud. :lol:
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
How many GB/sec. and how many gajillion IOPS do you get? I bet it's loud.
It will probably be loud, but not exceedingly so, as it is not a very powerful unit, processing-wise.

It will be connected with 10G SFP+ ports and it will be using mechanical drives, so I don't expect to be able to top 800MB/s in sequential transfers. Random IOPS will probably be between ~400 and ~1,200, depending on the tier being accessed. The only SSDs are the boot drives in RAID 1.
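For a sanity check on those figures, a quick sketch (the per-spindle numbers are rough assumptions, not HPE specs):

[code]
# 10G SFP+ wire ceiling vs. the quoted estimates. Assumed figures.
link_mb_s = 10 * 1000 / 8                # ~1250 MB/s before protocol overhead
print(f"10G ceiling: ~{link_mb_s:.0f} MB/s")  # so ~800MB/s is drive-bound

per_spindle_iops = 150                   # assumed per-drive random IOPS
for spindles in (3, 8):                  # assumed effective spindles per tier
    print(f"{spindles} spindles: ~{spindles * per_spindle_iops} random IOPS")
# => ~450 to ~1200, roughly the ~400-1,200 range above
[/code]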
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
What are the thoughts on the DS3617xs?

Tough to answer without knowing your requirements. For me, it's not worth the money. I spent a third as much and got twice as many hot-swap bays and several times the compute and RAM capacity, not counting the hard drives. You could do something similar and load FreeNAS on a system to manage your storage and network access if you're not comfortable administering it yourself.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I have the 831X, but it is only good for backups and has that stupid fan error. I was hoping that a NAS like the DS3617xs would be faster and better than the single drives I'm using now. I like the idea of the Barfta file system.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Why not just build a PC with a bunch of hot-swap bays, put FreeNAS on it as the OS, manage it all through the web, and be done? Then if you get fan or hardware failures, you can replace parts with off-the-shelf components that you're familiar with. You can add 10Gb cards if you need more bandwidth, and you can use ZFS under the covers to get CoW snapshots, bitrot protection, and expandability. My question to you would be: what about the Synology is appealing versus just building a computer with storage?
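As a sketch of the ZFS plumbing FreeNAS would be driving for you (pool name, disk paths, and dataset names here are made up for illustration):

[code]
import subprocess

def run(*cmd):
    """Run a command and fail loudly if it errors."""
    subprocess.run(list(cmd), check=True)

# Placeholder device names; substitute your actual disks.
disks = [f"/dev/sd{c}" for c in "bcdefg"]

# raidz2 = ZFS's double-parity analogue of RAID 6; end-to-end
# checksums are what give the bitrot protection mentioned above.
run("zpool", "create", "tank", "raidz2", *disks)
run("zfs", "create", "tank/media")
run("zfs", "set", "compression=lz4", "tank/media")

# Copy-on-write snapshot: near-instant, and it consumes space only
# as the live data diverges from it.
run("zfs", "snapshot", "tank/media@nightly")
[/code]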
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
What are the thoughts on the DS3617xs?

When it's not rackmount, it's not even on my radar.

It's probably quite fast for a NAS, and the processing power is sufficient to run a few VMs on it. It doesn't come with 10Gb SFP+ ports, but that's understandable since the target audience probably wouldn't know what to do with those, and you can add a card with 10G ports afterward.

To answer Handruin's question, a Synology comes with an extensive app ecosystem, so you get a ton of functionality on top of the hardware and you don't have to spend much time configuring it. That's quite appealing to most people.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
It is faster to throw some drives in a NAS and share them. It is also faster to enable features that the NAS supports. Of course, once you exceed the capabilities of a NAS, you are SOL. A PC-based solution can scale significantly.

https://youtu.be/uykMPICGeqw
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I tried to make FreeNAS work, but it did not like the 10Gb card. As they say, "I'm too old for this shit," and I would rather buy an integrated system.
I really don't need 12 bays, but I want Barfta or another robust file system with snapshot capability. I also want a good CPU, not the Armenian type.
Are there any 8-bay systems in that category with 10GbE SFP+, or at least a slot for a card?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Are there any 8-bay systems in that category with 10GbE SFF+ or at least a slot for a card?

QNAP TVS-1282: US$2,600 at amazon.com.

The CPU is a Core i5-6500, there's 16GB of RAM, and you can use as many as four SSDs for caching. You'll have to add a 10G (or 40G) network card. The compatibility list is here.

A dual-port QNAP 10G SFP+ card costs US$383.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Which is more reliable compared to the Barfta FS, an individual drive or the ext4 FS RAID 6? I'm not concerned with outright failure, but unidentified data corruption.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
Which is more reliable compared to the Barfta FS, an individual drive or the ext4 FS RAID 6? I'm not concerned with outright failure, but unidentified data corruption.
This is a nonsense question. You're basically asking, "Which is more reliable compared to {software A}: {hardware A} or {software B on hardware B}."

If you want to compare benchmarks, we need to know both the software and the hardware setup that it's running on.

Also, I hope that you're talking about BTRFS. I have no idea what Barfta FS is....
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
This is a nonsense question. You're basically asking, "Which is more reliable compared to {software A}: {hardware A} or {software B on hardware B}."

If you want to compare benchmarks, we need to know both the software and the hardware setup that it's running on.

Also, I hope that you're talking about BTRFS. I have no idea what Barfta FS is....

Sorry, the BTRFS of Synology. Is that or ZFS not really so necessary for data integrity, or is ext4 sufficient?
I did not mean to introduce two variables together. Performance needs to be at least as good as a single 10TB drive.
My QNAP 831X has decent sustained transfer rates, so that is not the issue. IOPS or some latency issue may be the problem, although the files are 20+ MB.
I assume that the wimpy CPU is a limitation, but if there is some inherent limitation of 10GbE and/or Windows, then any NAS may be useless to me.
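A quick sketch of why raw IOPS probably isn't the bottleneck at those file sizes (all figures are assumed ballparks):

[code]
# Assumed ballpark figures for the workload described above.
drive_seq_mb_s = 200              # sequential rate of a single 10TB drive
link_mb_s = 10 * 1000 / 8         # 10GbE payload ceiling, ~1250 MB/s
file_mb = 20                      # typical file size from the post

print(f"10GbE ~{link_mb_s:.0f} MB/s vs drive ~{drive_seq_mb_s} MB/s")
print(f"~{drive_seq_mb_s / file_mb:.0f} file reads/s saturate the drive")
# => ~10 requests/s, so per-request latency, SMB overhead, and the
# NAS CPU are likelier culprits than IOPS.
[/code]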
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
As I understand it, BTRFS is the red-headed stepchild at Oracle. It was what they were working on to compete with ZFS until they bought Sun. The only reason BTRFS is moving forward is that Oracle won't release ZFS under a license compatible with Linux's GPL. There's probably a reason for that.

The consensus seems to be that, if you're running a server, ZFS on a native OS, like BSD, is superior due to maturity. If you insist on using Linux, ext4 isn't suitable for storage servers of any real size.

What are you doing that you have all of these requirements?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
As I understand it, BTRFS is the red-headed stepchild at Oracle. It was what they were working on to compete with ZFS until they bought Sun. The only reason BTRFS is moving forward is that Oracle won't release ZFS under a license compatible with Linux's GPL. There's probably a reason for that.

The consensus seems to be that, if you're running a server, ZFS on a native OS, like BSD, is superior due to maturity. If you insist on using Linux, ext4 isn't suitable for storage servers of any real size.

What are you doing that you have all of these requirements?

The ZFS on Linux project is more mature and stable than you may realize and is perfectly suitable for Linux. Installing it under Ubuntu Server 16.04 has never been easier. We have many years of enterprise products built on ZFS and CentOS, and I work directly with it almost every day. You can also create thin-provisioned ZFS pools and carve out zvols, on top of which you can put any filesystem if there are atypical requirements.
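For example, a thin-provisioned zvol with another filesystem on top might look like this (pool name, volume name, and sizes are illustrative):

[code]
import subprocess

def run(*cmd):
    subprocess.run(list(cmd), check=True)

# Sparse (-s) 500GB volume carved from an existing pool "tank".
run("zfs", "create", "-s", "-V", "500G", "tank/vol0")

# The zvol appears as a block device, so any filesystem can sit on it.
run("mkfs.ext4", "/dev/zvol/tank/vol0")
run("mount", "/dev/zvol/tank/vol0", "/mnt/vol0")
[/code]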
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
As I understand it, BTRFS is the red-headed stepchild at Oracle. It was what they were working on to compete with ZFS until they bought Sun. The only reason BTRFS is moving forward is that Oracle won't release ZFS under a license compatible with Linux's GPL. There's probably a reason for that.

The consensus seems to be that, if you're running a server, ZFS on a native OS, like BSD, is superior due to maturity. If you insist on using Linux, ext4 isn't suitable for storage servers of any real size.

What are you doing that you have all of these requirements?

I'm trying to reduce requirements to (1) performance >= a 10TB drive and (2) a reliable file system. ZFS really isn't an option due to my condition.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
What's the condition ZFS doesn't meet for you?

Unless I misunderstood, ZFS is not available commercially other than in a few designs from one company, iXsystems. Their 8-bay unit appears to be discontinued, so it is really not an option.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
There is now a new Synology NAS, the DS1817+.
Finally there is a Synology with a PCIe slot for a 10GbE adapter. I cannot find a price, but I'm expecting it to be reasonable, though the card raises the total.
What is the relative performance of the old quad-core Atomic CPU compared to the ARM CPUs?
 