Video Cards

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
From these guys: https://www.serversimply.com/blog/amd-and-nvidia-in-the-professional-sphere-a-comparative-analysis

In the specialized domain of data center operations and machine learning, two major contenders stand out: AMD's Radeon Instinct MI200 and Nvidia's A100 or H100, representing the pinnacle of their respective generations. Both come with their own set of strengths and weaknesses, designed with cutting-edge technologies to accelerate a variety of high-performance tasks.

The AMD Radeon Instinct MI200, specifically the MI250 variant, offers remarkable raw computational capabilities. It boasts peak performance metrics of 47.9 TFLOPS for FP64 and FP32 operations, and a staggering 383 TFLOPS for FP16/BF16 operations. This positions it as a powerhouse capable of handling a variety of high-performance tasks.

Nvidia's A100, on the other hand, doesn't match these raw numbers. But, Nvidia has its ace: the specialized Tensor Cores. The Tensor Cores in A100 offer up to 312 TFLOPS, dedicated primarily for accelerating machine learning tasks. This makes A100 akin to a specialized tool honed for specific tasks, delivering them faster and more efficiently.

The MI250 flexes its muscles with 128GB HBM2e memory and an impressive bandwidth of 3.2TB/s. Such specs make it highly attractive for tasks requiring vast data loads. The Nvidia A100, holding 80GB HBM2e with a bandwidth of 1.935TB/s, seems less commanding in this direct comparison, but is still formidable.

In terms of power consumption, Nvidia appears more restrained, boasting a max TDP of 400W as opposed to the 560W of the MI250. This could play a significant role in long-term operational cost savings in data centers. While both manufacturers offer diverse cooling solutions, the choice of form factor might be pivotal depending on the specific server configuration.

Nvidia brings to the table a more mature software ecosystem, supporting CUDA and a variety of libraries. This could be invaluable for entities already invested in the Nvidia platform. AMD is not far behind, however, supporting up to 8 Infinity Fabric links and compatibility with the open-source ROCm platform.

... So yeah, AMD doesn't look so good compared to nVidia by any metric with its datacenter products.
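For what it's worth, a quick back-of-envelope pass over the figures quoted above (spec-sheet peaks only, nothing measured) shows each card winning a different efficiency metric, which is roughly the trade-off the article describes:

Code:
# Peak spec-sheet numbers quoted in the article above; not measured results.
specs = {
    "AMD MI250":   {"fp16_tflops": 383.0, "bw_gbs": 3200.0, "tdp_w": 560},
    "Nvidia A100": {"fp16_tflops": 312.0, "bw_gbs": 1935.0, "tdp_w": 400},
}

for name, s in specs.items():
    fp16_per_w = s["fp16_tflops"] / s["tdp_w"]   # compute per watt
    bw_per_w = s["bw_gbs"] / s["tdp_w"]          # memory bandwidth per watt
    print(f"{name}: {fp16_per_w:.2f} FP16 TFLOPS/W, {bw_per_w:.2f} GB/s per W")

The A100 comes out ahead on FP16 throughput per watt, the MI250 on bandwidth per watt.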
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I'm just waiting to see what can replace my 4070 Ti in 2025.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I'm just waiting to see what can replace my 4070 Ti in 2025.

A bunch of usual suspect Twitterers are saying that the Blackwell 5070 will be inflicted with 12GB GDDR7 over a 192 bit bus for an effective 15% gain in memory bandwidth and probably a much higher than $600 launch price. I'm sure you all can join me now in a bout of laughter out loud.
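For anyone who wants to sanity-check that 15% number: peak GDDR bandwidth is just bus width times per-pin data rate. The 4070 Ti's 192-bit / 21 Gbps baseline is published spec; the GDDR7 rate implied below is only arithmetic on the rumor, not a confirmed 5070 figure.

Code:
# 4070 Ti baseline is published spec; everything else is arithmetic on the rumor.
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak GDDR bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

baseline = mem_bandwidth_gbs(192, 21.0)   # 504 GB/s on the 4070 Ti
rumored = baseline * 1.15                 # ~580 GB/s if the +15% rumor holds
implied_rate = rumored * 8 / 192          # ~24 Gbps GDDR7 to get there on 192 bits

print(f"4070 Ti: {baseline:.0f} GB/s; +15%: {rumored:.0f} GB/s "
      f"(implies ~{implied_rate:.0f} Gbps GDDR7)")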
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
This will continue to be the new world as there's less and less competition for Nvidia. At least there's no immediate crypto boom on the horizon sucking up all the inventory (AI aside).
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
A bunch of usual suspect Twitterers are saying that the Blackwell 5070 will be inflicted with 12GB GDDR7 over a 192 bit bus for an effective 15% gain in memory bandwidth and probably a much higher than $600 launch price. I'm sure you all can join me now in a bout of laughter out loud.
Will I get 1/3 faster speeds for the same size/power, for example? I only care about size and power. If a 5000-series card at 3.0 slots and 329W isn't a third faster, it's probably not worthwhile. Cost is of limited relevance, but a $600 card is probably too weak to waste time on.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
The best performance/power ratio will probably come through what I'm doing: getting the fastest card and then turning down the power consumption. What benchmark would you like to see? I'll run it.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
But the fastest 2022-technology cards need liquid cooling to fit in 3 slots. The 4070 Ti was the largest I could fit in early '23. Maybe there's a Super in '24, but that's not much of an upgrade. I'm hoping for a 5070 Ti that's the same size as the 4070 Ti but with substantially improved compute performance.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
My instinct is that it'll be around 15% in general, but your use case is unusual enough that I wouldn't be surprised if it was significantly different based on architecture changes. The memory speeds might really help you.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
Leaked pricing suggests 5090 north of $2200, 5080 16GB at $1300+, and 5070 12GB at $600 or $700. No big changes in relative rasterization performance (fill rates etc.) but much better ray tracing capabilities. Per Moore's Law Is Dead.

Usually we can expect the 80-series card to catch up to the previous generation's 90, but there's no change in memory capacity or bus width in the 70 hardware, so the differences will come down to getting more or fewer GPU cores and faster RAM. The 70 card is usually the part that interests me the most as well, but not as a 12GB model. That's budget-model territory.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I expected that each CUDA core would be more productive and also faster.
Isn't AI supposedly a huge push for applications even on client computers, so nVidia would put more computing oomph into them?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
nVidia doesn't have to play in that space. It is apparently working with MediaTek on a new consumer ARM SoC, but it doesn't currently make an x86 CPU and thus isn't part of the NPU game right now. Presumably, anyone with a recent RTX is already in the right ballpark anyway.

Part of the deal with NPUs is that they are "free" as part of a CPU's thermal profile, much like integrated graphics cores. Even high end GPUs have a low power secondary chip that can run until there's a reason to wake up the real GPU.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
The 40 series works very well, better than anything that's not a Mac, and in the highest grade even better.
So why would the 50 series not be even better than the 40 series?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
The 40 series works very well, better than anything that's not a Mac, and in the highest grade even better.
So why would the 50 series not be even better than the 40 series?

M silicon usually lines up with xx60 nVidia hardware until you get into the workstation products and workloads where the unified memory makes having huge amounts of RAM worthwhile. It's... Whelming at best for configs with modest amounts of RAM and core counts.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
There must be a significant synergy with the Arm CPU, because some of the image-processing results are very impressive in the mid-priced systems (~$5K). I'm sure the gamers don't care for it.

I will await the 50 series with some hopes in x86-64 systems. Can't nVidia make a $1200 card that fits in exactly 3 slots and has really good performance with air cooling? The giant CPUs take away the slots I need and I don't want the noise and risks of liquid cooling.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
There must be a significant synergy with the Arm CPU

ARM has unified memory built into the SoC that's about four times faster than DDR sitting in a DIMM slot, and that bandwidth can be directed to either CPU or GPU cores, since they're on the same physical component. Quad-channel DDR can get to similar-ish levels of performance, but I don't think the motivation exists to optimize heavily for that the way it does for the standard target of "whatever the fuck Apple is doing", and of course there's the variability of nVidia/AMD/Intel/Adreno graphics in the world of Windows. I really do think Adobe and Apple probably put an army of developers on the task of making sure Photoshop and Premiere are a first-class, well-optimized experience on Macs.
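To put rough numbers behind the "about four times" comparison (the DDR math is exact; the ~400 GB/s unified-memory figure is Apple's published number for the Max-tier chips, included only for scale):

Code:
# Peak DDR bandwidth: transfers/s * 8 bytes per 64-bit channel * channel count.
def ddr_bandwidth_gbs(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000

print(f"DDR5-5600, dual channel: {ddr_bandwidth_gbs(5600, 2):.0f} GB/s")  # ~90 GB/s
print(f"DDR5-5600, quad channel: {ddr_bandwidth_gbs(5600, 4):.0f} GB/s")  # ~179 GB/s
print("Apple Mx Max unified memory: ~400 GB/s (published spec, for scale)")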

I don't know how much better M-series is for non-Adobe content creation software. Adobe is kind of all anyone talks about, and to me it's a non-issue since I'm not going to go out of my way to get a Mac as a primary workstation, nor run the Adobe suite for my primary content creation needs.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
My RTX 2080 died. I bought it basically launch day and other than having stupidly loud fans when it was actively running, it was a good guy for the six years I've been using it and it was FAR better behaved than any of the nVidia cards I had before it. It's not my oldest card, just the one that's been in continuous use the longest.

I can't decide whether it deserves a viking funeral or a place of honor on the wall by my workbench.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
I bought a pair of 2080s right at launch, and one of them is still going in my daughter's PC. The other died about 3 years ago. Great cards.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
I gifted my 2080 Super to a friend and as far as I know it's still in operation. Was a good card, just didn't cut it for 4K gaming for me.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
In 2018 I would have still been using the Pascal Quadro. I missed out on Turing entirely and proceeded to Ampere. I never had a GPU die, but none of mine worked very hard like you guys' do with heavy gaming and video.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I swapped my Ryzen 5900X, 32GB RAM and motherboard for a Sapphire RX 6900XT.
1. This thing needs its own ZIP code
2. It'll run Resolve really well
3. Normal PSU connectors
4. It is surprisingly quiet.
5. The guy I got it from was only looking for $350. Even given the low regard AMD cards have among gamers, I feel like I got it for a steal. This card is in the same ballpark as a 4070 for 60% of the money.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I do see a difference between the RX 6900 and the other video cards I've had: every once in a while, it'll stop updating video playback if I have the card doing something resource-intensive, by which I mean video editing. The audio keeps going, but YouTube/VLC/Kodi just gets stuck on a particular frame. It's like the card forgets about the other task. If I close and reopen the tab or application, it works fine. I don't remember the RX 6650 doing that, but I pay a lot more attention to my workstation than I ever did to the computer where I had the old card. Neither the Arc 580/770 nor the 2080 did that.

I haven't tested to see if it happens in games but I do see that it happens in both Resolve Studio and Magix Vegas.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I've definitely put it through its paces with 100% GPU and RAM utilization. Maybe the problem only happens when I'm using hardware encoders? It doesn't seem to happen when I fire up a game though.

I'm willing to chalk it up to something stupid about its software stack. I can swap in my A770 or some kind of Battlemage card if it really bugs me.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Are the Battlemages in production now? I thought they were some kind of vaporware or scam.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,359
Location
Gold Coast Hinterland, Australia
Are the Battlemages in production now? I thought they were some kind of vaporware or scam.
If you believe the rumours, preproduction silicon is available to ODMs, with initial drivers ready. However, based on Intel's financial status, there are some concerns that it'll be effectively dead on arrival: released, but with very little to no support, as Intel will EOL it just after release.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Are they designated for video cards, or for embedded systems like CPU/GPU SoCs?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
If you believe the rumours, preproduction silicon is available to ODMs, with initial drivers ready. However, based on Intel's financial status, there are some concerns that it'll be effectively dead on arrival: released, but with very little to no support, as Intel will EOL it just after release.


Given that contemporary Intel iGPUs are made of the same Xe cores as the dGPUs, I think it's safe for this upcoming generation at least. Scuttling Battlemage would also be killing software support for an important part of its CPUs and that does NOT make sense. Intel may be looking to get bought, but I think Congress would have thoughts on that since the likely candidates aren't US based.

I'm expecting to see Battlemage sometime this winter. Arc did launch in Asia first and it's reasonable to think that will continue.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
But if your newly acquired GPU is crapping out now, how can you plan to replace it with the Intel card that does not exist yet?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I don't think it's crapping out. It seems fine in games as far as I can tell. Even if I'm playing Cyberpunk 2077 in 4K (AFAIK, this is still the gold standard for gaming stress testing, and in fact it's the only contemporary game I have installed), it's able to keep up at playable framerates. It just seems to have what I assume to be a bug when I'm using it for content creation. Applications aren't crashing, nor are loggable events being generated.

If this card can make it to early next year, I probably will replace it with a guy I like better. The RX 6900 will still be a value add for a gaming machine by then, even with relatively high power requirements.

It does seem that GPU prices are getting much more reasonable right now, especially for AMD/Intel. I know it's because there's a new generation set to release sometime real soon now, but between the holiday season and possibly the last gasp of purchasing before some complete moron with an MBA from a purportedly accredited university gets his horrific misunderstanding of tariffs and international trade put into effect, I think it might still be a good time to get on the suddenly affordable GPU train.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
There are a bunch of basically 2022 cards that are not getting any better, so they should become relatively cheaper over time.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
The issue for the moment is that RTX 4080+ stock has been depleted in normal retail, while very fast (for everything but CUDA and ray tracing) RX 7800+ hardware is readily available and finally in the realm of sane pricing, where I'd argue it should've been all along. An awful lot of gamers are absolutely convinced they need an $850 4070 Ti Super, but the 7900 XT and XTX can be had for $150 less, and had AMD been willing to play ball with pricing before now, it might not be in the precarious position of minuscule market share that it's in at the moment.

CUDA isn't going to be terribly interesting to gamers, and I'd argue that the value of ray tracing support is pretty dubious. In spite of AMD's poor reputation for software, being able to say "We make the graphics that make consoles go" should really be a better argument than it seems to be among the PC crowd.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
CUDA was not so interesting to most users until all this AI software was introduced. Fortunately Apple has taken up some of the slack from the disorganized x86 suppliers, but until the NPUs are really strong (and they will never equate to a 400W GPU) we still need discrete GPUs for AI tasks like image processing. I'm not interested in games at all. And most of the youth seem to have those PlayStations connected to a display.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
Battlemage has had a couple of cards leak now and it looks like it will release in mid-December. The B580 actually has fewer execution units than the A580, but it's also a 12GB card that reportedly matches the performance of the A770 at lower peak power, and a couple of sellers had them listed on Amazon earlier in the week at $250. New A770s are cheaper than that now, but they're also kind of power hogs; some models idle at 40W.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
They work for my needs either way, and I do think that getting ~3060 levels of performance for around $200 has real appeal to a lot of people. "Budget gaming" is a thing that needs to make a comeback in a real way and that's not possible while nVidia and AMD want more for a GPU than a console costs. I'm a little sad that we won't see a B770 or 790 before the new year but I'm optimistic it's still happening.

On the other hand, Intel's CEO Pat Gelsinger resigned this morning and was replaced by a bean counter plus one of his EVPs as co-CEOs. This is super, super bad: co-CEOs suggest a lack of vision from Intel's senior management, and elevating the finance guy suggests they're also planning to sell off big chunks in some Jack Welch-ass moves to make the company look better on paper without actually doing anything. I'm also not sure who would want anything Intel has left that doesn't play into US trade protection policies. It's already spun off its fabrication and enterprise disk divisions. It still makes servers and networking kit, but it's not exactly dominant in either sector. Selling those off and then doing stock buybacks with the proceeds is the kind of move that would bring up the share price in the short term, but anyone familiar with the industry should know those tricks by now.

Netburst happened when an accountant was in charge of Intel.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
Agreed, I don't think I'll be buying an Intel product soon, nor do I see a valid case for them in the datacenter space compared to the new AMD chips.

I will be keeping my stock for a bit longer though ;)
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,303
Location
I am omnipresent
I have decided that my RX 6900 XT is too loud. I kind of hate when it spins up for just a minute while I send 50 pictures through Topaz Photo AI and is briefly the loudest thing in the room. I'm thinking I could wire-tie some 120mm fans to the existing heat sink, but I'm not sure what else I'd have to do to trick the card into thinking its fans are working normally, since I'd probably use relatively slow Scythe or Noctua models.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,747
Location
Horsens, Denmark
Do you think it is spinning up the fans as a result of actual temp increase, or preventatively based on load and some built-in table?

If the former, it might be easiest to just add some thermal mass to hold off the ramp? If the latter, I'd experiment with just disconnecting the fan and seeing if it complains. Many don't, and then you could just point a bigger fan at it (better yet, 3D print a shroud to direct a case fan at it).
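A toy sketch of the kind of lookup-table fan curve being discussed here (whether the input is temperature or load, the shape is similar); this isn't the card's actual firmware table, just an illustration of how a silence-biased custom curve could map temperature to fan speed, with a zero-RPM band at the bottom:

Code:
# Hypothetical fan curve: (temperature in C, fan duty in %). Not the real firmware table.
CURVE = [(0, 0), (55, 0), (65, 30), (75, 50), (90, 100)]

def fan_duty(temp_c: float) -> float:
    """Linearly interpolate fan duty cycle between the curve points."""
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]   # clamp above the last point

for t in (45, 60, 70, 85):
    print(f"{t} C -> {fan_duty(t):.0f}% fan")

Below 55 C the fans stay off entirely, which is the sort of behavior that would stop the brief spin-ups during short Topaz runs.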
 