Question: Where'd the X79 motherboards go?

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
Well, thanks to SD I just ordered parts for two dual-socket E5-2670 servers with 128GB of RAM each. I hate you guys and your good deals. Mercutio, now I have my semi-crazy ESXi farm in the works. On to selecting some storage. Eeek, what did I just do...
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
What sort of storage are you looking for? SSDs and/or spinners? What sizes and how many?

I was aiming for 4 x 512GB SSDs, just not sure which ones I want to get. I don't need a ton of space; I just want it to be zippy for all the VMs. Maybe I'll toss in one or two 4TB HDDs, but for any bulk storage needs I'll mount an NFS share or iSCSI LUN from my NAS. What about you?
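
For the bulk storage piece, pointing each ESXi host at the NAS should be quick once the export exists. A rough sketch of the NFS route (the hostname, export path, and datastore name below are just placeholders, not my actual setup):

    esxcli storage nfs add --host=nas.local --share=/volume1/bulk --volume-name=nas-bulk
    esxcli storage nfs list

iSCSI takes a bit more setup on the host side (enable the software initiator, add the NAS as a send target, rescan), so NFS is the simpler starting point.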
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I was thinking an SSD for boot/OS and maybe one or two spinners. I may even use an 80GB Intel G2 SSD or similar I have lying around. I could probably go pure network for the video compression jobs, but that requires leaving another box with the files on it running the whole time it's working, and I sure wasn't planning to use my large RAID-6 array as a scratch/working drive. So, local storage is probably best. The 4TB 7200 RPM Toshibas at B&H, now $112, are hard to beat IMHO. They've got great dollar-per-TB numbers. If I could get a similar 2TB for half the price I'd do that, but that's not the going rate for a 2TB 7200 RPM drive...

I really need to scrounge around and see what I have. I know I have a 2TB and a 3TB 7200 RPM drive lying around that are spares for the RAID-1 arrays in my i7 systems, but I don't really want to use those since they're literally spares. I do have a 500GB 7200 RPM Seagate that came out of another system I could use. I definitely don't plan on making two RAID-1 arrays for it like I'm doing in my v1 E5 workstation that's slowly coming together.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
What you really need is for your NAS to support iSCSI and those 10Gb connections we were talking about...:diablo:
Yes, that'd be wonderful if I wanted to use my large RAID-6 array as a scratch disk for other machines. However, I don't.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
What you really need is for your NAS to support iSCSI and those 10Gb connections we were talking about...:diablo:

This is what I might eventually do. I'm likely going to restructure my network to include some localized 10Gb and consider using iSCSI.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
This is what I might eventually do. I'm likely going to restructure my network to include some localized 10Gb and consider using iSCSI.
It's not practical for me to run fiber all over my house, or at least I don't think it is. Can you buy a reel of OM3 fiber and run it like CAT5/6 and terminate it to a wall plate and then use short patch fiber "cables", etc?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
It's not practical for me to run fiber all over my house, or at least I don't think it is. Can you buy a reel of OM3 fiber and run it like CAT5/6 and terminate it to a wall plate and then use short patch fiber "cables", etc?

Luckily for me these two servers will sit right next to my NAS for now so I can run twinax or OM2/OM3 cabling at short distances.

I think you can buy reels of OM2/OM3 but I don't quite know the cost of the equipment to terminate and test the connection ends.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
dd, why are you looking at UnRAID as opposed to Windows Server or ESXi, two products for which you have vastly more experience? What's it giving you? I briefly considered an UnRAID system last time I changed my storage server but in the end I rejected it because of its limited support for addressable storage devices.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Luckily for me these two servers will sit right next to my NAS for now so I can run twinax or OM2/OM3 cabling at short distances.

I think you can buy reels of OM2/OM3 but I don't quite know the cost of the equipment to terminate and test the connection ends.
I think it has to be terminated at the SFP+ module though. AFAIK, you can't terminate it at a wall plate and use a short patch "cable". That sort of rules all this out. My main server and backup box will be sitting next to or perhaps on top of each other and will have twinax DACs connecting them.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I think it has to be terminated at the SFP+ module though. AFAIK, you can't terminate it at a wall plate and use a short patch "cable".
You can use a coupler like this one for the cable inside your wall:
https://www.tripplite.com/duplex-multimode-fiber-optic-coupler-lc-lc~N455000PM/

Then plug another fiber patch cord into the other side of the coupler to link your equipment.

Fiber cables aren't nearly as robust as copper wires, though. They're not designed to be inserted and removed on a daily basis.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
I think it has to be terminated at the SFP+ module though. AFAIK, you can't terminate it at a wall plate and use a short patch "cable". That sort of rules all this out. My main server and backup box will be sitting next to or perhaps on top of each other and will have twinax DACs connecting them.

I think you mean an LC/LC connector. The only cables I've seen mated to an SFP+ are DAC twinax cables. There may be other exceptions, but typically the SFP+ transceiver is designed to work with the specific make/model of HBA (e.g. Intel). All of the OM2 and OM3 fiber I've worked with (even as recently as this morning) has some form of LC connector on the ends.

lclc-111xx_01.jpg
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
You don't want to terminate your own fiber. The equipment to polish and attach the ends is expensive, and then you need to qualify the connection. On the other hand, short pre-terminated pieces come in standard lengths, and low-voltage conduit to keep out the critters isn't expensive either. The coupler Coug mentioned would work at the wall, but I would rather use a keystone jack.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
dd, why are you looking at UnRAID as opposed to Windows Server or ESXi, two products for which you have vastly more experience? What's it giving you? I briefly considered an UnRAID system last time I changed my storage server but in the end I rejected it because of its limited support for addressable storage devices.

I want to replace all but one of the other computers in my house with locally connected virtual machines (as shown in the videos above). Besides my main workstation, there are 6 other computers in the house constantly running. Replacing 5 of those with one machine (even if it is a dual-Xeon) is going to save electricity and make support easier. I also like the idea of being able to route that pool of computing power around (if no one else is home I can run it as a render farm).
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So, I'm pretty sure something is missing here.

E5-2689v1_CPU.jpg

And my first v1 E5-26xx build grinds to a halt. Oh, I have all the pieces, just some of them aren't usable. Minor detail...
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
Haven't used eBay in ages. Every once in a while I'm tempted, then someone posts something like this. Occasionally I'll take my chances with someone who isn't at least "fulfilled by Amazon" on their platform, but that is as risky as I'll get.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
Haven't used eBay in ages. Every once in a while I'm tempted, then someone posts something like this. Occasionally I'll take my chances with someone who isn't at least "fulfilled by Amazon" on their platform, but that is as risky as I'll get.

I haven't used it since ~2007 when I bought something silly, but I placed an order Monday morning for two Intel server cases, two air ducts, four heat sinks, two IO shield plates, and two Intel RMM4 remote management ports. They all got delivered this morning with no issues, and all were NIB as listed on eBay. They even combined the shipping rather than charging me separately per item.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Haven't used eBay in ages. Every once in a while I'm tempted, then someone posts something like this. Occasionally I'll take my chances with someone who isn't at least "fulfilled by Amazon" on their platform, but that is as risky as I'll get.
All of eBay's protections now side heavily with the buyer; it's hard to get burned as one. The seller is sending me an intact replacement CPU today. I've gotten a damaged or incorrect Blu-ray here or there, like a movie that wasn't actually the 3D version that was listed. The sellers go out of their way to avoid negative feedback. The return shipping label is pre-paid. Ultimately, if the seller doesn't accept returns and you get a damaged / broken / incorrect item, PayPal will refund your money.

As far as selling something on eBay, not a chance. The system is so stacked in favor of the buyer that it's pretty easy to get burned by a scamming buyer.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I haven't used it since ~2007 when I bought something silly, but I placed an order Monday morning for two Intel server cases, two air ducts, four heat sinks, two IO shield plates, and two Intel RMM4 remote management ports. They all got delivered this morning with no issues, and all were NIB as listed on eBay. They even combined the shipping rather than charging me separately per item.
My stuff from the same seller also came yesterday. I haven't opened any of the boxes yet since I was focused on building up the v1 E5-26xx system.

On that note, the Fractal Design Define R5 case is pretty decent. I'm not a huge fan of the fan mounting, no pun intended. You can't use silicone isolation post mounts like the ones Noctua provides. You have to screw the fans in, and the screws they provide for the front fans are a little on the short side. You can barely get them started in the metal threads in the case with the 140mm Noctua fans. I'm not sure if the thickness of the silicone pads on the corners is the problem or not. Speaking of the 140mm Noctua fans, they have something like 6-7" long cables on them. They include an extension cable with the fans, but having such a short cable seemed odd to me.

Before I get off the topic of fans, why does a case in 2016 come with 3-pin non-PWM fans? Reportedly the included Fractal Design 140mm fans are pretty decent in terms of airflow vs. noise and whatnot, but I'm not interested in their archaic manual 3-speed selector switch tucked behind the front door. I want 4-pin PWM fans that can be dynamically controlled by the motherboard based on thermals in the case.

I'm not entirely sure who's at fault, but the X9SRA motherboard has oddly placed mounting holes, at least relative to where you can put standoffs in the R5. There is no spot for a standoff under one of the motherboard's holes along the top edge; the middle ATX standoff location at the top of the case doesn't line up with the hole in the motherboard. See the red box in the picture below.

X9SRA.jpg

So, I basically ignored that hole. The centermost mounting hole in the X9SRA also doesn't seem to line up with a standard ATX location; it lines up with a standoff location marked mATX, so I was okay there. Once the X9SRA motherboard was mounted in the case, the SSDs on the back of the motherboard tray end up very close to the 6Gbps SATA connectors on the motherboard. I ordered a few 8" SATA cables on eBay for connecting to SSDs mounted on the back since I don't want gobs of extra cable bunched up somewhere. I also ordered some 18" SATA cables for the HDDs since I didn't have any extra 18" ones with latches floating around. Most of mine seem to be 24", which again would leave me with excess cable.

On the power supply: EVGA was nice enough to use the same 6-pin connector for its SATA and Molex cables as most other modular power supplies I've used, but they changed the pinout around, so you can't use cables from other modular power supplies. Antec and Seasonic cables will work with each other, but they are not compatible with the EVGA ones. It looks like you're supposed to be able to mount the power supply in the R5 either fan up or fan down, but the screw holes for the PSU don't quite line up with the holes in the case if you try fan up. I installed it fan down, but I'm not sure how well it will be able to pull air since the fan is so close to the bottom. Admittedly the bottom has grilles and a mesh filter so air can be pulled in through the bottom, but you can't put the case on carpet without blocking all of the openings. I will have to use small blocks of wood to keep it off the carpet.

That's about as far as I got since it turned out I didn't really have a CPU for the board. I have RAM, but I didn't bother installing it yet. I should have an E5-2670 v1 CPU I can put in the board tomorrow.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I got my stuff from Natex today. I decided to get my workstation with the X9SRA board working tonight. As part of that I decided to test the eight 8GB PC3-12800R ECC DIMMs I bought from eBay to make sure all eight slots in the Supermicro board work, etc., even though I'm only planning to put 32GB of RAM (four DIMMs) in the system long term. Of course one of the eight DIMMs is bad. The board won't boot/POST with it installed. The DIMM is warm when I pop it out; the other DIMMs aren't. I tried the troublesome DIMM in a few different slots with the same results. I put in one of the other DIMMs I have (I have another 4 very similar DIMMs) to complete the 64GB, and the board boots/POSTs fine. I've contacted the eBay seller. I'm hoping he has an extra DIMM he can send me.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
A few more things.

1) No one warned me about Supermicro's stupidity when it comes to PWM fans. Apparently they have a hard-coded, non-adjustable minimum RPM threshold written into the BIOS. This causes the fans to pump, or oscillate, their RPM: every time a fan drops below the threshold it gets turned on full, then temperature control takes over again and it begins to slow until the cycle repeats. I noticed this yesterday. I changed the headers the fans were plugged into and the behavior stopped, so I didn't think too much more of it. Then this morning, after I swapped in the RAM I'm intending to run in this system (which didn't work; see below), the fan pumping behavior returned. I haven't found a way to stop it. I saw some talk about using ipmitool (rough example at the end of this post), but that won't work on this board since it lacks a BMC.

2) Of the 4 matching Micron 8GB PC3-12800R ECC DIMMs I bought from another seller on eBay, one of them doesn't work. The system won't boot with it in any of the slots, even if it's the only DIMM. This one doesn't get warm, though. The other 3 DIMMs will work by themselves, or with a 4th DIMM from the other good set of seven. :cursin:
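
For anyone following along with a board that does have a BMC: the ipmitool trick people describe is just lowering the fan sensor's lower thresholds so the BMC stops treating a slow-spinning PWM fan as failed and ramping everything to full. Roughly (the sensor name and RPM values are examples only; pull the real ones from the sensor list):

    ipmitool sensor list
    ipmitool sensor thresh FAN1 lower 100 200 300

The three numbers are the lower non-recoverable, lower critical, and lower non-critical thresholds, in that order. Again, no BMC on the X9SRA, so none of this helps me here.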
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So my Thermalright True Spirit 140 BW Rev. A wasn't flat on the bottom. It would noticeably rock on both my E5-2670 and the replacement E5-2689. When I removed it from the E5-2670 I could tell from the distribution of the thermal compound that it wasn't in tight contact with the entire CPU. So, I decided to fix it. I wet-sanded the bottom with 320, 600, and then 1000 grit. Here it is after the 320-grit sanding:

lapped TS-140-BW-Rev A.jpg

From the amount and locations of the exposed copper you can see where the high spots were. I had to sand that far into it in order to get sandpaper contact with the entire bottom.

When I was done it didn't rock on the CPU's heatspreader, though I don't think the heatspreader is entirely flat either. I decided to refrain from lapping the CPU since I'm not sure how the clamping force of the socket might deform it, etc.

Next up, the dual E5-2670 motherboard and Intel parts...
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I'm pretty sure my neighbors will know when I turn on this Intel dual-processor chassis. Holy crap, is it loud firing up. It was stuck at full power for a while while I floundered trying to make sense of a post on STH about updating the various components. After figuring out that the post linked to the wrong update package, and getting the BMC, BIOS, ME, and FRU/SDR updated, it is no longer running everything at max power all the time. Unfortunately, the 120mm fans in it have a definite whine to them even when they're running slowly. I set the fan profile to acoustic and set it for 0-300 meters above sea level, so it should be as quiet as it can be. IMHO, you would not want to try to make a desktop out of this. And the noise it makes when you first turn it on as it ramps all the fans to max power, including the little 40mm special in the power supply that I'm pretty sure could double as a router... oh my!!! Thankfully it doesn't last more than a few seconds before the fans ramp down.

The chassis and the way it all goes together are pretty slick, since the motherboard and chassis were made for each other. As you put it together you can't help but give Intel some mental kudos as you see how cleverly they handled some things.
 