9950X3D/4090 Watercooled Build

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
Parts arrived today, so I started the build. So far I'm about 6 hours into assembly.

AMD 9950X3D, delidded by Thermal Grizzly with a 2 year warranty
Thermal Grizzly Direct Die Waterblock
ASRock X870E Taichi Lite (DOA; replaced with ASUS PRIME X870-P WIFI)
96GB Corsair Vengeance DDR5 6000
1TB Crucial T705 Gen5 SSD
MSI 4090 Suprim with Alphacool Waterblock
Seasonic Prime 1200 PSU
Silverstone RM52 6U Rackmount chassis
2x Alphacool 360 Radiators
2x Alphacool 120 Reservoirs
2x Alphacool D5 clone pumps
7x Noctua NF-A12x25 PWM Fans
Dynamat

It has been a while since I've done an over-the-top build, and I couldn't help myself. The plan is separate water cooling loops for the CPU and GPU, letting them settle at different temps and making future changes easier. This is probably 3x the cost of a system that would perform within 5%, but it should be very quiet and satisfying. Not going for looks, just performance and cool factor (in my eyes at least).

I picked this case for its front I/O and because the radiators fit inside. The rest of the features I stripped out, as they aren't needed for this build.

Treating the chassis for vibration first was important, as it is just steel and vibrated all over. I added the Dynamat in chunks, working around the protrusions on the panels, until tapping or banging it anywhere just made a "thud". This did involve sticking some panels together, but they aren't ones needed for service.

Unfortunately, Alphacool seems to have forgotten one of the plugs on each of the reservoirs. I've put in a service ticket and hope to have them in soon. In the meantime I may use the parts I have to make the CPU loop work and run the GPU aircooled. Currently pressure testing the CPU loop overnight.
 

Attachments

  • 20250408_210746.jpg (661 KB)
  • 20250408_154235.jpg (628.5 KB)
  • rn_image_picker_lib_temp_2dbb513a-8490-4d24-aaef-3378df6422f1.jpg (601.9 KB)

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
Got the build far enough along that I was able to flash the BIOS to the latest version and power it on. The POST code on the motherboard never leaves "00", so either the board or the chip is bad. There have been rumours of ASRock having issues with AM5, so I've ordered an ASUS PRIME X870-P that should be here tomorrow, letting me continue.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,375
Location
Gold Coast Hinterland, Australia
How long did you wait for the POST screen?
IIRC, AM5 processors can take a long time to POST the first time due to memory training, especially with large memory modules (10-15 minutes has been reported on Reddit with some higher-end boards like the Taichi).
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
Just in case, I gave it another 20-minute shot just now. Even 96GB should train faster than that, and the "00" code on the motherboard implies it never even got to the memory.

Replacement board should arrive today.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,520
Location
I am omnipresent
The more RAM you have and the higher it's clocked, the longer it takes. It EASILY took a half hour before 96GB of DDR5-6400 posted on my system.
Also I wound up running my EXPO RAM at DDR5-6000 anyway because I was seeing a bluescreen every few weeks and clocking the RAM down seemed to fix it.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
The ASUS board POSTed immediately and took me to the BIOS within 5 minutes. I set the memory to EXPO II (DDR5-6000 with tighter timings) and will start stability testing the RAM soon.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
Only the CPU water cooling loop is currently running, and I have an old 2080Ti with a broken fan just to give video out.

Factory Clock:
Ambient is 27C
Idle is 32C
Cinebench 2024 CPU Multi-Core (full load) is 56C at 5.2GHz, giving a score of 2340.

Compared to a friend's non-delidded 9950X3D on a 360 AIO, my full-load temp is 15C cooler.

Overclocking may have to wait until I get back from a road trip to Italy.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,520
Location
I am omnipresent
Every AM5 other than the 7500F has at least enough iGPU to give video out, so no reason to involve an iffy discrete card if you don't want to.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
Now I'm back from vacation and the rest of the parts have arrived. Still bleeding the loops, and I might do some cable management. For some reason the highest-performance water cooling parts all have RGB; I'll at least set them to white when I can.

1000008801.jpg
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
I totally get that some people care about the looks; I just don't. The thing I care about most is that it's silent unless it's doing something hard, and that under load it keeps temps optimal while squeezing the most from the hardware.

Apparently I also don't care about reboots. This is the OS drive from my last desktop.
1745447474117.png
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
The biggest takeaway at this point is that running as much memory as the board will take at 6000 has been zero effort. Even after upping the fabric clock and taking the ratio to 1:1, still no issues.
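
For anyone curious how that ratio works out, here's the arithmetic (a rough sketch; the 2000MHz fabric clock is just a commonly used figure for DDR5-6000, not necessarily what this board is set to):

```python
# Rough sketch of the AM5 memory clocks at DDR5-6000.
# The FCLK figure below is an assumption (a commonly used value),
# not read from this particular board.

ddr_rate = 6000              # MT/s, the "DDR5-6000" number
memclk = ddr_rate // 2       # DDR transfers twice per clock -> 3000 MHz MEMCLK
uclk = memclk                # 1:1 UCLK:MEMCLK keeps the memory controller in sync
fclk = 2000                  # MHz, Infinity Fabric clock (assumed typical setting)

print(f"MEMCLK {memclk} MHz, UCLK {uclk} MHz (1:1), FCLK {fclk} MHz")
```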
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
They aren't actually behind each other. The front 3 are all the way at the bottom of the chassis, and the back 3 are all the way at the top. There are a few cm of overlap, but the chassis itself has an air guide that separates the two.

So far, with everything overclocked (5.9GHz CPU and 3GHz GPU) and many stress tests running in parallel, the CPU maxes out at 72C and the GPU at 52C. That is while the system draws 800W.
 

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,954
Location
Somewhere in time.
Super!
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,935
Location
Eglin AFB Area
Website
sedrosken.xyz
Those are some eye-watering power bills in your future, but I can't argue with those numbers. I might consume a quarter of the load power you do, but I also probably get about a quarter of the performance. I guess it's a good thing for me, then, that that's enough. :)

It sounds like you're happy with it and that's the most important thing. Maybe it's time to redo your signature? I say that with full knowledge I need to update mine too.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,835
Location
Horsens, Denmark
This actually draws nearly 300W less total power than the rig still in my signature. That said, games these days let you limit your framerate, and my display only does 120Hz, so at 120Hz (4K) with VSync most games I play draw much less power. 300-500W is probably about right, with idle down around 100W and a very effective sleep mode.
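
For a rough sense of what that means on the power bill, here's a back-of-the-envelope estimate; the daily hours and the electricity price are pure assumptions for illustration, not my actual usage or tariff:

```python
# Back-of-the-envelope energy cost estimate. Every input here is an
# assumption for illustration, not a measured figure.

gaming_watts = 400            # midpoint of the 300-500W gaming range above
idle_watts = 100              # approximate idle draw
gaming_hours_per_day = 2      # assumed
idle_hours_per_day = 6        # assumed; the machine sleeps the rest of the day
price_per_kwh = 0.30          # assumed electricity price, EUR/kWh

daily_kwh = (gaming_watts * gaming_hours_per_day
             + idle_watts * idle_hours_per_day) / 1000
yearly_cost = daily_kwh * 365 * price_per_kwh
print(f"~{daily_kwh:.1f} kWh/day, roughly {yearly_cost:.0f} EUR/year")
```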

The performance is excellent, but I mainly just wanted to play with a de-lidded AM5 CPU and custom water-cooling. Now that I don't build rigs for others I have to be my own customer.

I'm planning on updating the signature once I get the server built out, probably another week or so.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,374
DD:
96 gigs of ram enough?
Possible to run a Raid 0 with two SSDs?
Either worth considering? Any speed gain a human would notice?
g
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,520
Location
I am omnipresent
Possible to run a Raid 0 with two SSDs?


RAID0 is most helpful when you are doing large file loads and need increased drive transfer speeds.
This is largely mitigated by NVMe drives, especially PCIe gen 4 or 5 models. Even if you have a PCIe gen 4 drive, you'll almost never see 7GB/sec transfer rates on a regular desktop PC because there's simply nothing you can do that will create a load that large. And if you DO surpass PCIe gen 4, well, you can get a PCIe gen 5 drive and you're still better off than messing with RAID0 since you aren't taking the risk of losing all your data when a drive fails. RAID0 also isn't terribly helpful when your workload is lots of tiny files, where the definition of "tiny" is probably anything less than tens of megabytes per file, which describes most user-created data absent videos. Little files architecturally can't take advantage of high transfer rates because, well, they don't take long enough to transfer.
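
A toy model of that last point, if it helps; the latency and throughput numbers are assumptions for illustration, not benchmarks of any particular drive:

```python
# Toy model of why striping helps large transfers but not small files.
# Per-request latency and per-drive throughput are assumed values.

per_request_latency_s = 0.0001     # ~100 microseconds of fixed overhead per I/O
drive_throughput_gb_s = 7.0        # GB/s per NVMe drive (assumed)

def transfer_time(size_gb, drives):
    # Striping divides the streaming portion across drives,
    # but the fixed per-request overhead does not shrink.
    return per_request_latency_s + size_gb / (drive_throughput_gb_s * drives)

for size_gb, label in ((0.00005, "50 KB file"), (50, "50 GB file")):
    single = transfer_time(size_gb, 1)
    striped = transfer_time(size_gb, 2)
    print(f"{label}: 1 drive {single:.4f}s, 2-drive RAID0 {striped:.4f}s "
          f"({single / striped:.2f}x faster)")
```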

RAID0 does give you the capacity of all the drives in the array and that is nifty, but even 7.6TB SSDs are relatively affordable now. I actually bought a couple Micron 15.3TB drives a couple weeks ago for around $1100 each. Other than bragging rights, is there a reason you need more NVMe storage than that in one volume?

I can go on to say that there are differences in philosophy between various SSDs, where a lot of drives are designed for very high speed "bursty" performance that are probably typical of desktop user needs. These drives might have relatively small but high speed DRAM or SLC cache and yes, they can hit that 14GB/sec peak for some short amount of time (e.g. 10 seconds) but will more typically operate at a sedate 3GB/sec. You can also buy drives that don't have the insane peak IO per sec rating but can actually sustain 5 or 6GB/sec for hours on end. These will be your Kioxia/Micron/Solidigm enterprise drives, which you can liken to the 10krpm SCSI drives of ages past. Don't get hung up on those, either. You flat-out aren't ever going to need anything like that. I have database servers with 10k active users and developers who never met an INNER JOIN they didn't like and the only time drive IO on any of those systems spikes above 1GB/sec is when the databases get dumped to a backup drive.

There's a third type of drive, where they're just flat-out lying about specs. Try not to buy those. The guys running Storagereview.com are still very helpful with figuring that out.

RAID1, mirroring, is most helpful for continuity of operations. Lose a drive and stay up and running. Cool. Some controllers actually can load balance IO requests to see performance gains for read operations over RAID1, but on desktop PCs you're very rarely going to have multiple equally high speed interfaces to do that with NVMe; AM5 boards most often have one PCIe 5.0 M.2 slot, one PCIe 4.0 slot, and several more that use bridge chips to turn four lanes of PCIe 4.0 into two PCIe 3.0 M.2 slots. It's very nice if you have the hardware to devote to RAID1, but if you have your user data backed up, it takes about 10 minutes to reload Windows or Linux to get back to what you need to be doing. It's not like you're losing thousands of dollars per second of down time like a business might.

In Enterprise-land, a lot of system operators have moved away from hardware RAID. The RAID on your Intel or AMD motherboard is actually offloaded to the CPU, under the very reasonable assumption that there's probably a 4GHz+ core that is perfectly happy to manage your parity calculation needs, and that 4GHz core is better than whatever ARM chip they would've stuck on a RAID controller. If you're doing one of the striped parity modes, though, you really, really want ECC RAM as well. DDR5 has kinda-sorta ECC, but only on-die within each chip; full ECC also protects the data on its way to and from the memory controller. That extra checking is also part of why ECC RAM is clocked slower than desktop RAM. Anyway, if you're doing RAID5 or 6, or RAIDZ of some flavor, you're most likely doing it in software, and there's still a potential for issues if you haven't paired it with ECC RAM, because there's a non-zero chance a memory error has let you write bad data, and nothing will catch that when you're rebuilding the array.
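
To make the parity part concrete, this is the basic XOR trick that RAID5-style parity rests on (a minimal conceptual sketch, not how any particular controller or md/ZFS implementation actually lays out data):

```python
# Minimal illustration of striped parity: the parity block is the XOR of
# the data blocks, so any single lost block can be rebuilt from the rest.
# If a memory error corrupted a block before parity was computed, the
# rebuild faithfully reproduces the bad data -- hence the ECC advice above.
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equally sized blocks
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# three equally sized data blocks, as if striped across three drives
data = [b"11112222", b"33334444", b"55556666"]
parity = xor_blocks(data)                        # stored on a fourth "drive"

# simulate losing the second drive and rebuilding it from the survivors + parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```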

What I am saying is that you probably don't need to be messing with RAID unless it's just for fun. If you're doing it for fun, go nuts and do whatever you want.
 

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,954
Location
Somewhere in time.
Yikes! Tons of real world experience speaking! Thanks Merc.
 