I watched it yesterday and marveled at the stupidity of it. Anyone dropping that much on a system can afford a few hundred dollars extra for a much more proper chassis like a Norco 4224.
> I don't even think the Norco is appropriate for $10k worth of drives. I would at least consider a decent Supermicro with a quality SAS backplane and expander.

Well, I did call it a more proper chassis. That'd be the minimum for such a build IMHO. As you mention, Supermicro would be the right choice. My main server is in a Supermicro 4U chassis. My backup server is in a 4U Norco.
> Fair...but then there would be a less click-bait title to drive ad revenue for Linus. I do enjoy when he tries some outlandish stuff, but this one wasn't enough on the outlandish side to be that interesting. If he had done a collaboration with some local metal working fabrication company and built some ridiculous enclosure, then I'd be on board with that kind of stupidity.

He bought a ridiculous metal "enclosure" in this one.
> I'll be curious to get your feedback on the Scythe Fuma 2 when you're done with your build and test it for a while. This CPU has a decent amount of heat to get rid of and I wanted to keep it reasonably quiet without dealing with AIO water coolers.

I saw this the other day.
> I got a crash of x265 with a build using a different compiler of the same x265 version... Looks like the system isn't quite stable under 100% load (with no overclocking). I'll admit I didn't stress test the 2nd motherboard with a variety of memory tests like I did the first one. Given that it's not overclocked, the XMP profile on the memory is my prime suspect for the problem.

In the words of an ex-president, "I feel your pain." How distressing to have it run OK for many hours before flaking out. I assume that you (and snow and handy) are using quality components, and you already checked the RAM "with a variety of memory tests", and presumably "reseated" connections, etc.
[Wed Mar 18 16:13:30 2020]
FATAL ERROR: Resulting sum was 6.058846950323651e+016, expected: 5.022461721491425e+016
Hardware failure detected, consult stress.txt file.
[Wed Mar 18 17:17:34 2020]
FATAL ERROR: Rounding was 0.5, expected less than 0.4
Hardware failure detected, consult stress.txt file.
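For what it's worth, the idea behind those failure messages is simple: the torture test runs a deterministic calculation whose correct result is known in advance, so on sound hardware every pass must match exactly, and any deviation means a computation was silently corrupted somewhere (CPU, memory controller, or RAM). A rough sketch of the technique in C; this is only an illustration of the concept, not Prime95's actual code, and the workload and iteration counts here are made up:

    #include <stdio.h>
    #include <math.h>

    /* Deterministic floating-point workload. On sound hardware this
     * returns a bit-identical result every time it is called. */
    static double workload(void)
    {
        double sum = 0.0;
        for (int i = 1; i <= 1000000; i++)
            sum += sqrt((double)i);
        return sum;
    }

    int main(void)
    {
        const double expected = workload(); /* reference result */

        for (int run = 1; run <= 1000; run++) {
            double got = workload();
            if (got != expected) { /* any deviation = corrupted math */
                printf("FATAL ERROR: Resulting sum was %.16e, expected: %.16e\n",
                       got, expected);
                printf("Hardware failure detected.\n");
                return 1;
            }
        }
        printf("All runs matched.\n");
        return 0;
    }

Marginal XMP settings tend to show up as exactly this kind of sporadic, hours-later mismatch, which fits the suspicion above.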
> Sucky, but out of interest have you confirmed your DIMM rank setup? I've found that some boards dictate that when all DIMM slots are populated, only SR (single rank) DIMMs should be used. (Just double check the DIMMs, their rank, and what the motherboard says is supported for your CPU.) It's a long shot, but you never know.

I'm not sure where I'd find that information. The manual doesn't have it. The memory QVL list has single and dual rank memory on it shown as being supported in 1's and 2's. The memory I'm using isn't on the list.
> Your DIMMs are dual rank: https://www.pic-upload.de/view-33001480/F4-3200C16D-32GVK.jpg.html
> And if you look at ASRock's page for a similar board: https://www.asrock.com/mb/AMD/X570 Taichi/#Specification
> Under DIMMs, 4x dual rank DIMMs is limited to DDR4-2666... (I don't know why Gigabyte doesn't publish the same list.)
> The DIMMs are rated at 1.2V, but will only achieve their rated speed at 1.35V (so check the voltage in the BIOS). So drop the speed to 2666, ensure the voltage is 1.35V, and see how you go.

That link is to the specs of the wrong memory. But I'm way past that point anyhow. I had them running at 2133 for most of the testing, and only with a single stick at the end, and the failures persisted. Also, I only have 2 sticks, not 4.
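For background, the logic behind tables like that one is ranks per channel: a dual-rank DIMM loads its memory channel roughly like two single-rank DIMMs, and the controller derates its rated speed as that load grows. A toy sketch of the lookup; the DDR4-2666 figure for four dual-rank DIMMs comes from the ASRock table quoted above, while the other speeds are placeholder assumptions, not Gigabyte's published limits:

    #include <stdio.h>

    /* Toy model of rank-per-channel speed derating on a dual-channel
     * board with two slots per channel. The 2666 figure for two
     * dual-rank DIMMs per channel matches ASRock's table; the other
     * speeds are placeholders for illustration only. */
    static int max_supported_speed(int dimms_per_channel, int ranks_per_dimm)
    {
        int ranks_per_channel = dimms_per_channel * ranks_per_dimm;

        if (ranks_per_channel <= 1) return 3200; /* 1x single rank */
        if (ranks_per_channel == 2) return 2933; /* 1x DR or 2x SR */
        return 2666;                             /* 2x DR per channel */
    }

    int main(void)
    {
        /* 2 sticks of dual-rank = 1 DIMM per channel */
        printf("2x DR DIMMs: DDR4-%d\n", max_supported_speed(1, 2));
        /* 4 sticks of dual-rank = 2 DIMMs per channel */
        printf("4x DR DIMMs: DDR4-%d\n", max_supported_speed(2, 2));
        return 0;
    }

With only two dual-rank sticks (one per channel) the derating is mild, and the failures persisting even at 2133 and with a single stick point away from simple rank derating as the cause here.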
> Certainly sounds like a bad memory controller on the CPU, or a bad motherboard (power delivery or DIMM related). Glad to see the Vcore adjustment appears to be working.

Vcore doesn't change the memory voltage. It changes the CPU core voltage. I don't get the connection you're making.
> Vcore is used by the memory controller in the CPU, does it not?

No, there's a separate voltage control for the memory controller.
> Aw, crap, first the hardware problems, and now Windows OS problems. Did you have Task Manager running "full-time"?

What do you mean running "full-time"? It was sitting open in the bottom corner of the desktop from about the time I booted up the computer. Typically closing it and re-opening it is possible, but it doesn't update. This morning it wouldn't close (it hung, not responding). I've never seen this sort of behavior on another Windows 10 system, even ones that have been up for weeks under a heavy load.
> Yeah, that's what I meant. I usually start it only when I want to do some "spot-checking" (with <Ctrl><Shift><Esc>), then exit it again (<Esc>).

I don't think that should matter. It sits open for weeks on my Xeon-based system with no similar issue.