Video Cards

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,263
Location
Gold Coast Hinterland, Australia
Sheesh. It's just model numbers. What matters are the performance, power, and cost.
So, when I look at a laptop with an RTX3060 GPU in it, can you tell just by the model number which one is actually included?
People who read reviews and look at benchmarks will know there are potentially 3+ variants, all called the 'RTX3060', that vary by speed, wattage, memory bus and included memory...

Same deal with the RTX4080: if the only publicly shown difference is the RAM size, the public perception will be that that's the only difference. However, learned folks will know there is a big difference between the 12GB and 16GB models, including speed, wattage, memory bus width, CUDA/RT core count, etc.

All this does is allow SIs (System Integrators) like HP, Dell, Lenovo, etc. to sell machines with an 'RTX4080' at a premium whilst including the not-so-premium version, and they haven't violated any advertising laws at all, but are taking advantage of the lack of knowledge on the consumer's part to make that sweet, sweet profit.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
You guys know all that stuff. Most people don't know. If they care they will seek the info. I would look for test results, such as benchmarks.

But Joe Blow at Walmart has no clue about the different generations and grades of Intel CPUs or NVidia GPUs.
Another time I was in the northern Arctic Ocean and some dude was asking about my ultralight laptop. I explained the make and model of the CPU, but he kept asking whether it was an i7 or an i5, as if that was all that mattered.
 
Last edited:

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
The biggest opportunity for improvement in GPU computing, as far as I can tell, is in video editing packages. I have Magix Vegas 19 and Resolve Studio, both of which work a lot faster at almost everything when there's a discrete GPU on hand, with the amount of "faster" definitely tied to the capability of the GPU available. As far as I can tell, both Capture One and the Adobe packages (PS and Lightroom Classic) are almost entirely CPU-bound. I don't have a way to do an apples-to-apples test at the moment; the big PC with the Adobe stuff on it has an AMD GPU and about half the processor of my workstation.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
IIRC Adobe products use the built-in encoding engines (QuickSync on Intel, NVENC on nVidia), and on nVidia they use CUDA to accelerate other operations...

So depending on how well Adobe has written its CUDA code, you could see improvements that scale linearly with the increase in CUDA core count, or performance may plateau and show no improvement past a certain core count. If the former, look for the card with the most CUDA cores; if the latter, find what the limit is and look for the card closest to that count.

Only a good set of independent benchmarks that includes your own workflow will be able to give you the answers you're after.
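To put rough numbers on the two cases, here's a quick back-of-the-envelope sketch (the CUDA core counts are the real desktop figures, but the saturation point is purely hypothetical, not measured from any Adobe product):

```python
# Rough sketch: estimated speedup if an app scales linearly with CUDA cores
# vs. plateauing past a saturation point. The saturation value is hypothetical.

BASELINE_CORES = 3584        # desktop RTX 3060
SATURATION_CORES = 6000      # hypothetical point where the app stops scaling

def linear_speedup(cores: int) -> float:
    """App benefits from every extra CUDA core."""
    return cores / BASELINE_CORES

def plateau_speedup(cores: int) -> float:
    """App stops benefiting past SATURATION_CORES."""
    return min(cores, SATURATION_CORES) / BASELINE_CORES

for name, cores in [("RTX 3060", 3584), ("RTX 3070", 5888), ("RTX 3090", 10496)]:
    print(f"{name}: linear {linear_speedup(cores):.2f}x, plateau {plateau_speedup(cores):.2f}x")
```

Under the plateau model the 3090's extra cores buy you nothing over the 3070 for that particular app, which is exactly why benchmarks of your own workflow matter more than the spec sheet.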
I'm pretty sure nobody does it like I do. :LOL: Canon does not accelerate, but rumours persist it will eventually. GUI does fairly well with NVidias. DXO does something, but could be faster.
Most likely I would keep the 3060Ti in the current main computer, which would then move to the backup role, with the new build replacing the guts of it. The Quadro P2000 is getting old.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
It's good to have more options. Not everyone needs the next gen of expensive flamethrowers.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
I would not go larger than 2.7 slots since I need two x8/x16 PCIe slots.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
There have been a bunch of overviews of the 4090 and it seems that the claims of performance roughly doubling over the 3090 are more or less true, but I haven't found a single discussion about them as general purpose computing devices. AV1 encoding is on my radar as an important feature to consider for this hardware generation, but right now no one is talking about anything but game frame rates.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,548
Location
USA
This may not be in your normal content path, but LTT did kind of cover the issue of getting decent AV1 hardware encoding without needing to spend a small fortune on a 4000-series card.

The tl;dr suggestion was to use an inexpensive Intel ARC A380 as a secondary GPU for AV1 encodes.
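For anyone who wants to try that route, FFmpeg already exposes the Arc hardware AV1 encoder as av1_qsv. A minimal sketch (assuming a QSV-enabled FFmpeg build; the file names are placeholders):

```python
# Minimal sketch: offload an AV1 encode to an Intel Arc card via Quick Sync.
# Assumes an FFmpeg build with QSV support; file names are placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input.mov",       # source clip (placeholder name)
    "-c:v", "av1_qsv",       # Intel hardware AV1 encoder
    "-b:v", "8M",            # target bitrate, adjust to taste
    "-c:a", "copy",          # leave the audio untouched
    "output_av1.mkv",
]
subprocess.run(cmd, check=True)
```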

 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
One of the things that makes the 4090 actually interesting in this regard is that it has dual hardware encoders. I'm not sure whether they work in parallel, and that's something I'd like to know. I will say that NVENC, in my experience, beats all comers for hardware video encoding on Windows, as long as your output format is something it supports.
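A quick-and-dirty way to find out whether the two encoders really run in parallel would be to time two NVENC jobs back to back and then again concurrently; if the concurrent run takes roughly half the time, the sessions are clearly landing on separate encoders. A rough sketch (placeholder clip names, assumes an FFmpeg build with NVENC):

```python
# Time two hevc_nvenc encodes sequentially, then concurrently.
# Clip names are placeholders; assumes FFmpeg with NVENC support.
import subprocess, time

def encode(src: str, dst: str) -> subprocess.Popen:
    return subprocess.Popen(
        ["ffmpeg", "-y", "-i", src, "-c:v", "hevc_nvenc", "-b:v", "20M", "-an", dst],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )

# Sequential baseline
t0 = time.time()
for i, src in enumerate(["clip_a.mov", "clip_b.mov"]):
    encode(src, f"seq_{i}.mp4").wait()
sequential = time.time() - t0

# Concurrent run
t0 = time.time()
jobs = [encode(src, f"par_{i}.mp4") for i, src in enumerate(["clip_a.mov", "clip_b.mov"])]
for job in jobs:
    job.wait()
concurrent = time.time() - t0

print(f"sequential: {sequential:.1f}s, concurrent: {concurrent:.1f}s")
# concurrent ~= sequential / 2 would suggest the two encode sessions are
# being spread across separate hardware encoders.
```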

LM, depending on what hardware you need to connect, remember that you can get PCIe extender cables if you need to run extra hardware.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
What is the GPU encoding, crypto stuff? I thought it was playing video games or something.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
What is the GPU encoding, crypto stuff? I thought it was playing video games or something.

There can be big improvements for video editing that come from having direct hardware codecs for particular file types. This is why ProRes 422 HQ is such a big thing for Macs as an intermediate editing format: it's much smaller than raw video formats, it preserves 10 bits of color, and it's fast and easy to work with. Getting hardware support for AV1 will hopefully bring Windows closer to parity in that respect. Or maybe it'll only be good as an output format. Right now, if I make something AV1, I have to use my CPU, which is a massive pain point, and I don't really wanna bother with it.

In a video editor it's an especially big deal because, for high-res video, it's often a lot easier to build a proxy file and work with that, then apply the edits done to the proxy to the source video. There should be a huge workflow improvement in the newest GPUs and the latest software.
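For anyone unfamiliar with the proxy side of that, it's basically just a batch transcode of the camera originals to a lighter codec at a lower resolution, which the editor then relinks to the full-res sources at render time. A rough sketch (placeholder paths, assumes FFmpeg with NVENC available):

```python
# Rough sketch of a proxy workflow: transcode camera originals into small
# editing proxies, cut against the proxies, then render from the originals.
# Paths and settings are placeholders; assumes FFmpeg with NVENC available.
import pathlib, subprocess

SOURCE_DIR = pathlib.Path("footage")        # camera originals (placeholder)
PROXY_DIR = pathlib.Path("footage/proxy")   # lightweight editing copies
PROXY_DIR.mkdir(parents=True, exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    proxy = PROXY_DIR / clip.with_suffix(".mp4").name
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(clip),
         "-vf", "scale=-2:1080",              # drop to 1080p for smooth scrubbing
         "-c:v", "h264_nvenc", "-b:v", "10M", # cheap hardware encode for the proxy
         "-c:a", "aac", "-b:a", "192k",
         str(proxy)],
        check=True,
    )
```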
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,263
Location
Gold Coast Hinterland, Australia
There have been a bunch of overviews of the 4090 and it seems that the claims of performance roughly doubling over the 3090 are more or less true, but I haven't found a single discussion about them as general purpose computing devices. AV1 encoding is on my radar as an important feature to consider for this hardware generation, but right now no one is talking about anything but game frame rates.
The only one I've come across is https://www.youtube.com/c/EposVox/videos
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
Puget Systems finally put up some analysis of the 4090 for content creation workloads. Here's the executive summary:

Overall, the new NVIDIA GeForce RTX 4090 24GB GPU represents a massive leap in GPU performance. The exact amount depends highly on the application, with the greater benefit of course being found when the GPU is the primary bottleneck to performance.

For video editing, the RTX 4090 can be as much as 40% faster than the previous generation RTX 3090 and 3090 Ti, or almost 2x faster than the older RTX 2080 Ti. The RTX 40 Series also brings about a small performance boost for those using the GPU for either hardware decoding or encoding of H.264 and HEVC media.

Unreal Engine sees an even greater performance gain, with the RTX 4090 giving us roughly an 85% increase in FPS over the RTX 3090 and 3090 Ti across all our tests. Depending on the exact use case (ArchViz, Virtual Production, etc.), that means either faster renders, smoother performance, or the capacity for increased detail.

Lastly, GPU rendering is really where you are going to get the most out of a more powerful GPU, and the RTX 4090 comes through in spades. GPU Rendering is often nearly twice as fast as the previous generation RTX 3090 or 3090 Ti, or four times faster than the older RTX 2080 Ti.

Unfortunately, it doesn't look to me like dual hardware encoders are THAT awesome for video encoding, or Resolve isn't set up to use more than one, because the gain there is only about 10% over a 2080TI.

And even more annoying, nVidia didn't bother to support HEVC 10-bit 4:2:2 in 4000-series hardware, which is the thing I was really hoping to see. I've only just found confirmation of that this morning. Intel Arc DOES support it, but apparently it works much, much better on an Intel platform than on AMD.
 
Last edited:

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,333
Location
Eglin AFB Area
Discovered by accident that nVidia artificially limits Integer Scaling to the RTX 20 series and newer, including the GTX 16 series. I'm beyond nuclear pissed about that since my R9 380 could do that.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,788
Location
I am omnipresent
Website
s-laker.org
AMD lifted the curtain on RX 7000 hardware, and the 7900 is apparently more than twice as fast in compute terms as the 6900, with a downright reasonable 300W TDP and $1000 MSRP.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,333
Location
Eglin AFB Area
Looks like I wasn't super far off the mark, then.

With all this stuff nVidia's pulling recently, some rumblings about bad drivers for Turing and older cards, and my still being salty about the lack of integer scaling support on my 1070, I find my eyes wandering a bit. The 1070 wouldn't go to waste, exactly -- I could use it as an encoding card in a new NAS build for Plex, and/or fold full-time on it. I got to about a million points and then put it down over energy cost concerns, but being totally honest, a 150W card like the 1070 running 24/7 won't impact my bill that much.

With prices dropping on the 6000 series day by day, and emboldened by my friends' success stories this time around, I may well wait for a 6700 XT from a decent AIB to hit $280-ish new and then buy one of those. It's a nice upgrade, it'll actually support integer scaling (this is actually important to me given how many of the games I play are older titles running upscaled), and it won't require me to rush out and buy a new power supply to adequately juice it, since it's only 80W more than my current card. My 380 was of similar wattage and I ran it with a much higher-wattage CPU, so I know my 12V rails can take it.

The RT performance is behind the 30 series. That's fine, at the prices they're set to be going for soon once AIBs are trying to clear stock -- AMD doesn't seem to be going the route nVidia is where they set the prices of the new stuff so ridiculously high as to be unobtainium while they wait for 30 series stock to dwindle. I've also yet to see more than one or two really compelling uses of RT outside demos that are made to make the hardware look good anyway.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,677
Location
USA
Are the AMD GPUs supported by all the image processing software now?
My understanding is that they were better accelerated by NVidia.
I care not one iota for the slaughter video games, money laundering cryptotics, etc.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,333
Location
Eglin AFB Area
You in particular are probably better off with an nVidia card. Everything down that aisle pretty much exclusively uses CUDA, which AMD will never support. I find it quite funny, actually, that the OpenCL standard gained almost zero traction here, but then, FireGL sales had been in decline for years and years by that point.

I don't use my hardware strictly for, ahem, productive purposes, so I'm okay if, for example, Photoshop or Premiere doesn't support my GPU very well. The only thing that really bites is when OBS can't make use of the hardware encoders, which for a little while it couldn't.
 