FreeNAS

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
Anyone here using FreeNAS?

The ol' ReadyNAS is getting long in the tooth. In order to use larger drives, I will need to back up and then restore all of my data, so that I can move to a newer version of the file system with larger blocks. At that point, I might as well dump it for something more modern, and gain back the space used by its system image and its recycle bin system, which doesn't work correctly anyway.

I'm looking to pick up an HP ProLiant N40L and use FreeNAS for a semi-homemade server. I'm planning on using ZFS.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I do use FreeNAS. It's been completely plug and play. I have it set up as two six-drive zpools of 18TB each, which works just fine for my needs. SMB performance is a little weak. NFS works a lot better. Other than that I don't have a whole lot to say. It's an appliance and I treat it as one.
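For reference, building a pool like that from the shell is a one-liner; raidz2 and the da0-da5 device names below are just placeholders, not necessarily how mine is actually laid out:

    # create a six-drive double-parity pool named "tank", then check its layout
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    zpool status tank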
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
Been using 0.7 for years. CIFS/SMB needed a little tuning to get good performance out of, but I get 80+ MB/s (big B, not little b) on large files, which is probably getting close to saturating a GigE network link. For the last twelve months or so this shouldn't be an issue either - the default options work better out of the box.
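For anyone still fighting CIFS speed on the older builds, the tuning was mostly auxiliary parameters in smb.conf along these lines (the values here are only illustrative, not a known-good config - test them on your own network):

    # cut per-packet overhead and let Samba use async I/O and sendfile
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384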

I only use ZFS at work, not on a FreeNAS box. Memory, memory, memory. You want to go at least 8GB in the Microserver.
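If you do end up short on RAM, the usual ZFS knobs on a FreeBSD-based FreeNAS box live in /boot/loader.conf; the figures below are only a rough sketch for an 8GB machine, not a recommendation:

    # cap the ARC so the rest of the system keeps some breathing room (illustrative values)
    vfs.zfs.arc_max="5G"
    vm.kmem_size="7G"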
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
Merc, which version are you currently using?

Unfortunately, most of the computers accessing the shares will be Windows boxes, so I'm stuck with SMB. I'm not worried about throughput so much as access time.

Memory prices being what they are, it's a no-brainer to max out the Microserver with 8GB. Due to the vagaries of using all four DIMM slots on my desktop, I'm planning on buying new memory for it and putting the old sticks in the server.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The install is just writing out a disk image. It's ridiculously straightforward. Mine runs on an 8GB SSD. Performance seems to depend on available RAM more than anything else. The support documents I've read seem to suggest that an individual zpool should have four or five drives in it. Mine have six and seem to work just fine.
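If you want to do it by hand from another Unix box, it really is just dd'ing the image onto the boot device (the image name and target device below are placeholders - check which device is actually your USB stick or SSD first):

    xzcat FreeNAS-8.x.img.xz | dd of=/dev/da0 bs=64k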
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Nope. I figure the 16GB RAM in that machine will do it just fine.
I still have open DIMM slots too now that I think of it. Damn.

Load averages most of the time on that box (4 cores @ 1.8GHz) are pretty consistently around .1, jumping up to .8 or so when I'm hitting it from three or four systems over NFS.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
There seems to be quite a bit of talk about reducing or eliminating the swap reserve on each drive. I doubt that you have ever hit the swap, and it seems rare that anybody does.
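If you're curious whether it ever gets touched, swapinfo from the console shows current swap usage; IIRC the per-drive swap size is a setting under the GUI's advanced options, and setting it to zero is what that talk is about.

    swapinfo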
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
I've been tinkering with running FreeNAS in a VM for a while - ESXi 5. But I cannot get the throughput I want.
On a standalone Athlon 64 2.4GHz (socket 754), 1GB DDR 400, an NVIDIA gigabit NIC, and a Samsung 2TB 5400rpm drive, I get 40+ MB/s writes and 80+ MB/s reads.

On an Athlon X2 4000+ (2.1GHz) with 3GB DDR 667 (1GB dedicated to the VM), a 1TB Hitachi 7200.D, and either a Realtek PCI-E or an Intel PCI gigabit NIC, I can only manage ~10 MB/s writes and 38+ MB/s reads. Thinking that the machine might be the issue (esxcfg-info indicated that some of the virtualisation features were not active), I ran it on an Athlon X4 635 with DDR3 and got the same numbers.

I also tried assigning different NICs, which made no appreciable difference. Is the VM overhead that large?
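To separate disk, VM and network overhead, a crude local write test run on the FreeNAS box itself takes CIFS and the NIC out of the picture; the path below is a placeholder for wherever your pool is mounted:

    # write ~4GB of zeroes locally and let dd report the rate
    dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=4000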
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,357
Location
Gold Coast Hinterland, Australia
Is the VM overhead that large?
Does the VM have a dedicated disk, or is the guest HDD a file in another file system? IIRC, VMware a few years ago (mid-2000s) had performance issues with HDD IOPS when the guest OS disk was a file within the host OS file system rather than a dedicated disk. I would have thought that had been fixed since, but it may still be present in some configurations.
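One way to give the guest a whole physical disk on ESXi is a raw device mapping instead of a file-backed VMDK; roughly, from the ESXi shell (the device path and datastore below are placeholders):

    # create a physical-mode RDM pointer that can then be attached to the VM
    vmkfstools -z /vmfs/devices/disks/<device> /vmfs/volumes/datastore1/freenas-rdm.vmdk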
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
That performance hit is in line with what I encountered even with much faster hardware. If pure IO is what you need, VM will not help the situation.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
That performance hit is in line with what I encountered even with much faster hardware. If pure IO is what you need, VM will not help the situation.

Same here. My tests still show about 50% less disk I/O throughput. This is why I think VirtualBox is a POS - its disk I/O is at least 3 times worse again...

It's a latency issue, so I assume it would be wise to make sure your VMs have plenty of memory if they're going to do anything useful (sorry Handruin).
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
It's a latency issue, so I assume it would be wise to make sure your VMs have plenty of memory if they're going to do anything useful (sorry Handruin).

I think it is further downstream than that. Even giving the host 20GB+ of RAM and making the drives SSDs doesn't change things much.
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
Does the VM have a dedicated disk, or is the guest HDD a file in another file system? IIRC, VMware a few years ago (mid-2000s) had performance issues with HDD IOPS when the guest OS disk was a file within the host OS file system rather than a dedicated disk. I would have thought that had been fixed since, but it may still be present in some configurations.

ESXi and the guest OSes are installed on one disk. The FreeNAS storage is on a separate disk, but it's allocated to the FreeNAS guest as an ESXi virtual disk. Is there a better way to do this?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
Same here. My tests still show about 50% less disk I/O throughput. This is why I think VirtualBox is a POS - its disk I/O is at least 3 times worse again...

It's a latency issue, so I assume it would be wise to make sure your VMs have plenty of memory if they're going to do anything useful (sorry Handruin).

Could it be that the base OS for FreeNAS doesn't have a properly virtualized NIC, or that the VMware Tools aren't there to help optimize the VM? I'm guessing that under VMware the NIC is an E1000? I won't take it personally if you feel VMware isn't working for your needs. :)
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
Could it be that the base OS for FreeNAS doesn't have a properly virtualized NIC, or that the VMware Tools aren't there to help optimize the VM? I'm guessing that under VMware the NIC is an E1000? I won't take it personally if you feel VMware isn't working for your needs. :)

Yup, the NIC is an E1000 - I don't think VMXNET3 is supported under 64-bit FreeBSD guests (FreeNAS). I managed to get VMware Tools installed on FreeNAS, and that was a chore :) - I should write that procedure up - but it made no difference to the throughput.
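For what it's worth, the adapter type is just a line in the VM's .vmx file; changing it only helps if the guest actually has a driver for it, which FreeBSD guests of that vintage generally don't for VMXNET3:

    ethernet0.virtualDev = "e1000"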
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I had all kinds of goofy problems running FreeNAS in a VM. There just isn't a lot of push to optimize, let alone fix bugs, for a virtual environment.

Runs great on real hardware.
 