It shouldn't, but I've only actually done it with the Synology. I assume you intended to add a drive to an empty slot and accidentally pulled a full slot?
I goofed due to dim lights and now the array is rebuilding. :doh:
Does it really matter which drive slot is used if the change is done with the power off?
Were you trying to increase the capacity of the array by swapping in a larger drive?
With a 6-drive RAID 6, the failure rates should be fairly low. I think it could go up to 8 drives and still be fine. Maybe I'll get another drive when capacity exceeds 80%, or if 8TB drives become cheap next year.
Sure, maybe so, but have you looked at the unrecoverable read error (URE) rate of your 8TB drives? Some drives quote one URE per 10^13 or 10^14 bits read. It may be very unlikely, but at least consider it a possibility in your planning, especially during a rebuild after one of the drives in the array has failed.
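For anyone who wants to see what those numbers imply, here's a back-of-the-envelope sketch (my own simplistic model and my own ure_probability helper, not a vendor figure): it estimates the chance of hitting at least one URE while reading the surviving drives during a rebuild, assuming 8TB drives, a 6-drive array with one failed member, and independent errors at the quoted rate.

```python
# Hedged back-of-the-envelope estimate: chance of at least one URE while
# reading the surviving drives during a rebuild. Assumes independent
# errors at the quoted rate, which is a big simplification.

def ure_probability(bytes_read, bits_per_ure):
    bits = bytes_read * 8
    return 1 - (1 - 1 / bits_per_ure) ** bits

drive_tb = 8                 # 8 TB drives, as discussed in this thread
surviving_drives = 5         # rebuilding one failed drive in a 6-drive array
bytes_read = surviving_drives * drive_tb * 1e12

for rate in (1e13, 1e14):
    p = ure_probability(bytes_read, rate)
    print(f"URE spec of 1 per {rate:.0e} bits: ~{p:.0%} chance of at least one URE")
```

Whether a single URE actually kills the rebuild depends on the RAID level: with RAID 6 and only one drive missing, the second parity can still reconstruct the unreadable sector, which is the point made a few posts down.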
In addition to increasing copy speed via the 10Gb ports, I got the QNAP for its 8 bays. I could only use RAID 5 with the first NAS (5 drives). Now I am using RAID 6 (6 drives).
I will not add a hot spare if it is indeed kept running. I thought perhaps it could sit in there and the NAS would only apply power as needed, but apparently that's not feasible. I don't have any empty helium drives to test it with.
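As an aside, a quick usable-capacity comparison for those layouts, assuming 8TB drives as discussed above (raw parity math only, ignoring filesystem overhead):

```python
# Quick usable-capacity comparison, assuming 8 TB drives as in this thread.
# Raw capacity only; real formatted capacity will be lower.

def usable_tb(num_drives, drive_tb, parity_drives):
    """Usable space once parity is set aside: RAID 5 gives up one
    drive's worth of capacity, RAID 6 gives up two."""
    return (num_drives - parity_drives) * drive_tb

print("Old NAS, 5-drive RAID 5:", usable_tb(5, 8, 1), "TB")          # 32 TB
print("New NAS, 6-drive RAID 6:", usable_tb(6, 8, 2), "TB")          # 32 TB
print("Fully populated 8-drive RAID 6:", usable_tb(8, 8, 2), "TB")   # 48 TB
```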
I thought the point of using RAID 6 is that there could still be a failure during rebuild and the array would be OK.
I've done the rebuild twice with RAID 5 and now once with RAID 6. No errors have occurred.
The NAS drives are rated at 10^14 and the primary data drives in the computer at 10^15.
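Plugging those two ratings into the same simplistic model as the earlier sketch (a full read of five surviving 8TB drives during a rebuild, errors assumed independent):

```python
# Same hedged model as above: full read of five surviving 8 TB drives
# (4e13 bytes = 3.2e14 bits) during a rebuild of a 6-drive array.
bits_read = 5 * 8e12 * 8

for label, rate in (("10^14 (the NAS drives)", 1e14),
                    ("10^15 (the desktop data drives)", 1e15)):
    p = 1 - (1 - 1 / rate) ** bits_read
    print(f"1 URE per {label}: ~{p:.0%} chance of at least one URE")
```

So the 10^15-class drives are roughly an order of magnitude less likely to throw a URE over the same amount of reading, though RAID 6's second parity is what actually saves the rebuild either way.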
I keep getting an error message from the NAS at startup saying there is a fan failure, yet both fans are working. It beeps at us quite annoyingly each time.
That does sound annoying. Someone should do something about it.
The fans are typically pretty easy to replace, though I've only had to do it once.
The fans appear to be moving normally.
So, maybe it's just that they're not telling the NAS that they're working OK.
Since we badly needed more storage space at the office, I've ordered an HPE StoreEasy 1650 Expanded with an additional D3700 enclosure with SFF HDDs in it for a faster tier. I've also ordered another 1650 Expanded for the DR site.
Both units are far from full, and we shouldn't have to replace them for probably a decade, since we'll be able to add plenty of drives when we run short on space.
It's substantially more expensive than even higher-end Synology NAS units like the RS3617xs+ (the entire setup was a bit under US$30K). But Synology doesn't offer next-business-day (NBD) replacement in case of an in-warranty failure. At least not here, AFAIK.
How many GB/sec. and how many gajillion IOPS do you get? I bet it's loud.
It will probably be loud, but not exceedingly so, as it is not a very powerful unit processing-wise.
What are the thoughts on the DS3617xs?
Are there any 8-bay systems in that category with 10GbE SFP+, or at least a slot for a card?
Which is more reliable compared to the Barfta FS: an individual drive, or ext4 on RAID 6? I'm not concerned with outright failure, but unidentified data corruption.
This is a nonsense question. You're basically asking, "Which is more reliable compared to {software A}: {hardware A} or {software B on hardware B}?"
If you want to compare benchmarks, we need to know both the software and the hardware setup that it's running on.
Also, I hope that you're talking about BTRFS. I have no idea what Barfta FS is....
As I understand it, BTRFS is the red-headed step-child at Oracle. It was what they were working on to compete with ZFS until they bought Sun. The only reason that BTRFS is moving forward is because Oracle won't release ZFS with a license compatible with the Linux GPL. There's probably a reason for that.
The consensus seems to be that, if you're running a server, ZFS on a native OS, like BSD, is superior due to maturity. If you insist on using Linux, ext4 isn't suitable for storage servers of any real size.
What are you doing that you have all of these requirements?
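On the "unidentified data corruption" point: the reason ZFS and BTRFS keep coming up is block-level checksumming. The following is not ZFS or BTRFS code, just a toy sketch of the idea, with a made-up ChecksummedStore class standing in for the filesystem and a second copy standing in for RAID redundancy.

```python
# Toy illustration of checksummed storage: every block is stored with a
# checksum, so a silent bit flip is caught on read instead of being
# handed back to the application as if it were valid data.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}   # block_id -> (data, checksum)
        self.mirror = {}   # second copy, standing in for RAID redundancy

    @staticmethod
    def _csum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def write(self, block_id: int, data: bytes) -> None:
        record = (data, self._csum(data))
        self.blocks[block_id] = record
        self.mirror[block_id] = record

    def read(self, block_id: int) -> bytes:
        data, csum = self.blocks[block_id]
        if self._csum(data) == csum:
            return data
        # Checksum mismatch: silent corruption detected. A checksumming
        # filesystem would now read the redundant copy and self-heal.
        good_data, good_csum = self.mirror[block_id]
        assert self._csum(good_data) == good_csum, "both copies corrupt"
        self.blocks[block_id] = (good_data, good_csum)
        return good_data

store = ChecksummedStore()
store.write(0, b"important data")
# Simulate a bit flip on disk that the drive itself never reports:
data, csum = store.blocks[0]
store.blocks[0] = (b"imp0rtant data", csum)
print(store.read(0))  # corruption is caught and repaired: b'important data'
```

ext4 doesn't checksum data blocks, so on ext4 over RAID the redundancy is there, but nothing verifies the data on read; a silent flip only gets noticed if something downstream happens to notice the data is wrong.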
I'm trying to reduce the requirements to (1) performance >= a single 10TB drive and (2) a reliable file system. ZFS really isn't an option given my conditions.
What's the condition ZFS doesn't meet for you?
Powered by an Intel C2538 processor affected by the clock bug. Not interested in a system that will predictably fail in three years. No thanks.