Supermicro deal on eBay

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
The PSU works. It's not unreasonably loud by itself, but the 15krpm fans are another story. I've decided I'm just going to rip the guts out and stick my i7-975 in there since there's basically no pleasant option for cooling anything that uses LGA771.

I'm going to try to populate the whole thing with 3TB Hitachi drives and set it up with FreeBSD.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
I have come to the conclusion that it's just not possible to get a 2U rack system anywhere close to quiet. I didn't mind so much when it was warm enough that I needed the fan in my bedroom running on high all night, but now that it's cooled off, I can definitely hear that machine all the time in spite of everything I did to quiet it down.

I love my 20-bay Norco cases.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
Silently cooling a CPU with a 130W TDP is certainly hard in a 2U chassis, but a model with a TDP of 95W or lower should be feasible. Four 80mm 1500rpm Nexus fans should be able to achieve that. Not enough for an i7-975, but probably OK for a Sandy Bridge or anything on LGA1156.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
The Scythe Big Shuriken and the Prolimatech Samuel 17 with a low-profile Scythe 1600rpm 120mm fan both fit inside a 2U chassis. With the same fan, the Big Shuriken is slightly better than the Prolimatech, but you can fit a full-size, 25mm-thick fan on the Prolimatech and it will beat the Big Shuriken in cooling efficiency. The Big Shuriken is 58mm high (so 60.5mm with a low-profile fan) and the Samuel 17 is only 45mm high (so 60mm with a regular-thickness fan). While I'm not sure the Big Shuriken could quietly cool an i7-975 with the low-profile fan, I'm pretty confident the Samuel 17 with a Thermalright TY-140 could. The Samuel 17 is about $35-$40 and the Thermalright TY-140 costs between $13 and $18.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
I have a pair of Xeon E5320s, quad-core 1.86GHz chips with a very reasonable TDP of 80W. They're meant to have passive heatsinks on them, with cooling from massive 7,000-15,000rpm 80mm cube fans that normally sit in the front of the case.

I replaced my i7 with the original Supermicro board I got and one of the E5320s. It actually runs acceptably with just the passive cooler, but CPU utilization for the rig is perhaps 1-2%. I don't think I'll leave the rig running like that, but I'm wondering if I could get away with undervolting those front fans to get a more workable cooling solution at an agreeable volume level.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
Yeah.

I did find a Thermaltake LGA771 cooler that operates at 30 dBA and supposedly fits in a 2U enclosure, but I don't know if I want to spend $55 on it or if I'd be better off with different 80mm fans.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
The CL-P0303 is supposedly $45, not $55. In fact, it's less than $40 north of the border. The only way it's $55 is if you order it alone and include shipping; otherwise, shipping should be less than $10. I can't believe none of your local suppliers sells it.

And the goal should be to put in an active CPU cooler so you can replace the noisy 80mm fans with quieter models. Replacing the cooler and keeping the fans is pointless if you intend to silence the thing.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
... yeah. The one I could actually find was $55, which seems like a waste for a CPU that probably sells for about $15 these days. I was planning to get a nice Prolimatech HSF to stick on my game machine but there's something to be said for being able to use the 32TB of hard drives I have stuffed in that machine for actual data storage some day.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
  1. There's no place to put the pump/radiator inside a 2U enclosure.
  2. There's no hole to pass the cables from inside to outside the enclosure to operate the pump/radiator externally.
  3. I'm not aware of any water-cooling solution for the LGA771.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
4. Water Cooling is a complete pain in the ass and not something I want to use for a system I expect to run 24x7 for the next few years.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
Yes, the Thermaltake CL-P0303 is indeed a quiet little guy. The power supply is still louder than I'd like, but the machine is substantially quieter than it was.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
I was pleasantly surprised that the server's onboard SATA ports do in fact recognize and properly address 3TB Hitachi drives. I have 11 of them in that machine now, and I've played with FreeNAS enough to be comfortable with it.

Growing a ZFS pool is kind of a monumental PITA at the moment. Supposedly the developers are working on a better way to do it, but for now it involves a hairy set of command line operations that have to be performed on every disk in the pool. And adding an entirely new drive to a pool isn't possible in FreeBSD/FreeNAS at all right now; you have to backup, delete and recreate the array. These are the sorts of capabilities that ZFS should have, but BSD's implementation at the moment does not.
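For anyone curious, the grow-in-place dance looks roughly like this; the pool and device names below are placeholders, not my actual setup:

  zpool set autoexpand=on tank        # let the pool claim the extra space once every member is bigger
  zpool replace tank ada0 ada8        # swap one member for a larger disk
  zpool status tank                   # wait for the resilver to finish before touching the next disk
  # ...repeat the replace/resilver step for every disk in the vdev...
  zpool online -e tank ada8           # expand the pool into the new capacity

Every replace triggers a full resilver, which is why doing this across a whole pool of 3TB drives takes forever.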

I can't decide whether I'd rather just load Scientific Linux or something on this thing and stick with Linux LVM, or continue using FreeNAS and hope to see improvement. I don't really need this machine to do anything but contain hard drives and expose them to my LAN.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,719
Location
Horsens, Denmark
I'm thinking of playing with FreeNAS as well, but I have much less Linux experience than you do. Is it straightforward enough? I'd love to use it as the storage foundation for a VMware HA cluster, and it looks like people have made it work for this purpose; Handruin?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
It's ridiculously easy to get it up and running. There isn't even really an OS install; I just used dd to write the image file onto an 8GB SATA SSD I had sitting around. The documentation on the FreeNAS web site mentions a few VMware-specific gotchas that are apparently pretty common. You're far from the first person to want an easy iSCSI target for doing stuff.
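For the record, the write itself is basically a one-liner; the image filename and target device below are just examples (and camcontrol assumes you're writing it from a FreeBSD box), so triple-check the of= device because dd will cheerfully clobber the wrong disk:

  camcontrol devlist                            # figure out which daX is the 8GB SSD
  dd if=FreeNAS-image.img of=/dev/da1 bs=64k    # write the image raw to the SSD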

The only issue I ran into during my build was having to reflash my IBM SAS controller to non-RAID mode so I could address drives individually. Other than that, everything has worked just beautifully. I think LACP even worked out of the box.

I will say that overall FreeNAS performance does seem to scale with additional system RAM. It really does use whatever RAM you have available as disk cache.
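If anyone wants to watch it happen, that cache is the ZFS ARC, and on a stock FreeBSD/FreeNAS box you can poke at it with sysctl; the 4G cap below is just an example value, not a recommendation:

  sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes
  sysctl vfs.zfs.arc_max                 # ceiling the ARC is allowed to grow to
  # to cap it, set something like vfs.zfs.arc_max="4G" in /boot/loader.conf and reboot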
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,916
Location
USA
Last I used FreeNAS (back when I built my NAS over a year ago), it had the slowest performance of any NAS-focused software I tried, which is why I decided against FreeNAS and stuck with OpenFiler. It's quite possible they've addressed some of the performance issues in the versions released since I did that round of testing. I tried Windows Home Server, Server 2008 Storage Server, OpenFiler, and I forget what the last one was... I think it may just have been Server 2003 or 2008. All of them would barely ever break 50MB/sec in my sustained transfer tests with large, monolithic ISOs.

At the time, OpenFiler (64-bit) was the only one to make use of the 6GB of RAM in my NAS for file caching. From what Mercutio describes, it sounds like FreeNAS now offers the same functionality.

Despite what other opinions online suggested, I found it easier to manage and provision storage through OpenFiler's web interface than through FreeNAS's, though many others have reported the opposite. I don't know why I found OpenFiler easier, so I can't say whether it would be better or worse for you. It's not that FreeNAS was hard to use; I just thought OpenFiler was more intuitive (relatively speaking).

As for using a software tool like FreeNAS as an iSCSI target for a VMware HA implementation, it should be possible. I know of people using OpenFiler for that, which is another reason I chose it. I never did build my home ESX servers to start testing this, but that's coming in the next couple of months. This video should give you a basic rundown of how to configure iSCSI (assuming this is the path you want to take) when using FreeNAS with ESXi. I'm not 100% certain whether FreeNAS will let multiple hosts discover the same target, but you can just repeat the discovery step in the VI console on each ESXi machine for the same iSCSI LUN. With that, you have proper shared storage between two ESXi hosts and will need to do the vMotion network configuration before you can do an actual vMotion or enable HA. If you get stuck with that setup, let me know and I'll help.

Now that I've seen that OpenFiler's support, update path, and proper iSCSI implementation are fairly non-existent, I'm going to run the same testing again after I acquire a decent Intel NIC for my NAS. There's a newer version of OpenFiler and, it seems, a newer version of FreeNAS as well. I also want to demo the newer Windows Home Server and find out which one offers the best performance.
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
I can transfer large files at >60MB/s between two Linux machines over NFS. It's crazy that the NAS software you tried had worse performance.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,916
Location
USA
I can transfer large files at >60MB/s between two Linux machines over NFS. It's crazy that the NAS software you tried had worse performance.

I'm able to get over 100MB/sec with OpenFiler when transferring from my Win 7 system, but the other ones topped out for some reason on the exact same hardware. I wasn't using NFS; I guess it was CIFS/SMB? Maybe that makes a difference? I need to go through and run some more consistent tests with the latest versions of the different software, and even throw in a basic install of Linux to see if that changes things.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
In normal circumstances, NFS should be faster than SMB. I've run into cases where it isn't, but in my experience it generally is.

I'm not normally worried about transfer rates except while I'm doing bulk copies, but everything important on my LAN supports dual gigabit connections (Intel Pro/1000 adapters, Netgear Prosafe switch), which I suspect makes up for the shortcomings of a problematic implementation.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
Growing a ZFS pool is kind of a monumental PITA at the moment. Supposedly the developers are working on a better way to do it, but for now it involves a hairy set of command line operations that have to be performed on every disk in the pool. And adding an entirely new drive to a pool isn't possible in FreeBSD/FreeNAS at all right now; you have to backup, delete and recreate the array. These are the sorts of capabilities that ZFS should have, but BSD's implementation at the moment does not.
Have you looked at using Nexenta?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
I have misgivings about ongoing support for OpenSolaris, which makes me shy away from anything that relies on it. Most of the features of ZFS that BSD is lacking are either on their development roadmap for the reasonably near future or not available in an open implementation in the first place.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,348
Location
Gold Coast Hinterland, Australia
I would have to agree with Sechs and give Nexenta a second look if you don't want to run Solaris 11.

I certainly agree that Oracle has neutered further development of the several OpenSolaris-based OSes with its new policy on source code releases (which are still coming out, but later than normal and more spread out), but I'm sure there is enough of a community around illumos and Nexenta that they will be supported in the future.

I'm now running Solaris 11 full-time on my desktop, so if you have any questions about Solaris 11 in particular, I can help answer them. (Time Slider is absolutely awesome for the desktop, and boot environments certainly have merit in the enterprise market.)

PS. Solaris 11 itself can be used by home users at no cost, in line with the Oracle end user license agreement.

FYI, for those who don't know how boot environments work: they use the ZFS snapshot system to take a snapshot of the current working environment, so that if the current environment goes bad for whatever reason, you can reboot into the nominated boot environment and effectively roll back any system changes. The following link shows a nice example of this:
http://www.c0t0d0s0.org/archives/7409-Nice-example-for-the-power-of-boot-environments.html
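In day-to-day use it's just a couple of beadm commands; something like this (the BE name here is made up):

  beadm create pre-update      # clone the current boot environment before a risky change
  pkg update                   # go ahead and break things
  beadm list                   # see which boot environments exist
  beadm activate pre-update    # mark the old environment as the one to boot next
  init 6                       # reboot into it and the system changes are rolled back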
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
I'm weighing the odds of finding community support from the *BSD crowd against Solaris, especially Solaris in Oracle's hands, and to me going with Oracle looks like betting on the wrong horse. Nexenta does look interesting, but there are a lot of open distributions of one sort or another that look appealing for this kind of thing.

On the other hand, I guess it wouldn't be that big of a deal for me to pull my current drives, slap in a bunch of 1 or 1.5TB drives and give Nexenta a test drive on the same hardware.

ZFS is... not fast (I think it's writing data at about 30MB/s over NFS). Not that I expected it to be. But maybe it's faster on Solaris.
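If anyone wants to sanity-check their own numbers, a dumb sequential write from an NFS client is enough to get in the ballpark; the mount path below is an example, and zeros compress away to nothing if compression is on, so don't read too much into it:

  dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=4096    # ~4GB sequential write over the NFS mount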

At any rate, speaking to the issue of noise, I did manage to find a couple 80mm Noctua-branded fans that I'm going to stick in that machine next chance I get. It's been fine without any cooling other than the CPU fan, but I suspect more airflow, even if it's fairly low speed, isn't going to hurt anything.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,348
Location
Gold Coast Hinterland, Australia
Yep, the Oracle support forums are very lacklustre, especially compared to the OpenSolaris forums of yesteryear, where it was common to converse directly with the people actually writing the code.

Regarding performance, I'm getting 80-90MB/s reads and writes via NFS to a single 7200rpm SATA drive (server is Solaris 11, client is Arch Linux). However, I must acknowledge that on Solaris 11 the NFS and CIFS servers are kernel modules, as opposed to FreeBSD, where AFAIK they are user daemons.

FYI, Solaris 11 does NOT use Samba for its SMB/CIFS server; instead, Sun wrote their own SMB/CIFS server with easy-to-use Active Directory integration, including mapping to Windows users and direct mapping to POSIX ACLs.

AFAIK, Solaris 11's NFS services are now kernel modules attached directly to the ZFS kernel module, so you get a decent performance speed-up there as well.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
You've filled 15TB within 3.5 months??? I thought that at some point, someone would end up being bored watching asses and tits.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
It's not as much as it sounds. My file servers were over capacity for months, so the excess was spread out to arrays on some of my desktops. The new arrays just gave me a place to centralize all my storage again and now that everything is copied back, I can see that 15TB arrays aren't really enough. I'm probably going to have to set up a SAS expander and rebuild with something more like 21 or 24TB/array.

The biggest single offender is non-porn videos. BD rips, at 5-15GB per title, are pretty rough on available storage.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,232
Location
I am omnipresent
When drives get reasonable again, I'll buy an Intel RES2SV240 ($200) and stick it in a secondary chassis. That expander is specifically compatible with the LSI 9240/IBM M1015, supports 24 drives at 6Gbps SAS, and can be installed in a 2U enclosure.

Though from what I've read, forecasts say that drives won't be "reasonable" again until probably 2013.
 