Video Cards

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,296
Location
Gold Coast Hinterland, Australia
Sheesh. It's just model numbers. What matters is performance, power, and cost.
So, when I look at a laptop with an 'RTX 3060' GPU in it, can you tell just from the model number which one is actually included?
People who read reviews and look at benchmarks will know there are potentially 3+ variants, all called the 'RTX 3060', and all varying in clock speed, wattage, memory bus width and included memory...

Same deal with the RTX 4080: if the only publicly shown difference is the RAM size, then from the public's perspective that will be the only difference. However, learned folks will know there is a big difference between the 12GB and 16GB models, including speed, wattage, memory bus width, CUDA/RT core count, etc.

All this does is allow SIs (System Integrators) like HP, Dell, Lenovo, etc. to sell machines with an 'RTX 4080' at a premium whilst including the not-so-premium version. They haven't violated any advertising laws at all, but they are taking advantage of the lack of knowledge on the consumer's part to make that sweet, sweet profit.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
You guys know all that stuff. Most people don't. If they care, they will seek out the info. I would look for test results, such as benchmarks.

But Joe Blow at the Walmart has no clue about the different generations and grades of Intel CPUs or NVidia GPUs.
Another time I was in the northern Arctic Ocean and some dude was asking about my ultralight laptop. I explained the make and model of the CPU, but he kept asking whether it was an i7 or an i5, as if that was all that mattered.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
As far as I can tell, the biggest place for improvement in GPU computing is video editing packages. I have Magix Vegas 19 and Resolve Studio, and both work a lot faster at almost everything when there's a discrete GPU on hand, with the amount of "fast" definitely tied to the capability of the GPU available. By contrast, both Capture One and the Adobe packages (PS and Lightroom Classic) appear almost entirely CPU-bound. I don't have a way to do an apples-to-apples test at the moment; the big PC with the Adobe stuff on it has an AMD GPU and about half the processor of my workstation.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
IIRC Adobe products use the built-in encoding engines (Quick Sync on Intel, NVENC on nVidia), and on nVidia they use CUDA to accelerate other operations...

So depending on how well-written Adobe's CUDA code is, you could see linear improvement with a linear increase in CUDA core count, or performance may peak and show no improvement past a certain core count. If the former, look for the card with the most CUDA cores; if the latter, find what the limit is and look for the card closest to that count.
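The two scaling regimes described here — linear gains vs. a plateau — can be sketched with a toy Amdahl's-law model. The 5% serial fraction and the core counts below are made-up illustration numbers, not measurements of any Adobe product:

```python
# Hypothetical model: throughput vs. CUDA core count when a fixed
# fraction of the workload ("serial") cannot use extra cores.
def speedup(cores, serial=0.05, baseline_cores=1024):
    parallel = 1.0 - serial
    # Time at the baseline core count is serial + parallel = 1.0;
    # adding cores shrinks only the parallel portion.
    scaled_time = serial + parallel * baseline_cores / cores
    return 1.0 / scaled_time

for cores in (1024, 2048, 4096, 8192, 16384):
    print(f"{cores:6d} CUDA cores -> {speedup(cores):.2f}x")
```

Doubling cores from 1024 to 2048 gives nearly 2x in this model, but by 16384 cores it is already flattening well below the theoretical 20x ceiling — which is exactly why benchmarking your own workload beats counting cores.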

Only a good set of independent benchmarks that includes your own workflow will give you the answers you're after.
I'm pretty sure nobody does it like I do. :LOL: Canon does not accelerate, but rumours persist that it will eventually. GUI does fairly well with NVidias. DXO does something, but could be faster.
Most likely I would keep the 3060 Ti in the current main computer, which would then move to the backup role, with the new build replacing its guts. The Quadro P2000 is getting old.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
It's good to have more options. Everyone doesn't need the next gen of expensive flamethrowers.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
I would not go larger than 2.7 slots, since I need two x8/x16 PCIe slots.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
There have been a bunch of overviews of the 4090, and it seems the claims of performance roughly doubling over the 3090 are more or less true, but I haven't found a single discussion of them as general-purpose computing devices. AV1 encoding is on my radar as an important feature to consider for this hardware generation, but right now no one is talking about anything but game frame rates.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,632
Location
USA
This may not be in your normal content path, but LTT did kind of cover the issue with getting decent AV1 hardware encoding without needing to spend a small fortune on a 4000 series.

The tl;dr suggestion was to use an inexpensive Intel ARC A380 as a secondary GPU for AV1 encodes.
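As a hedged sketch of what that secondary-GPU encode could look like: ffmpeg exposes Intel's hardware AV1 encoder as `av1_qsv`, which is the usual way to reach an Arc card's AV1 block. The filenames and bitrate here are placeholders, and picking the A380 specifically when it is the second GPU is system-dependent, so that part is deliberately left out:

```python
import shlex

# Build an ffmpeg command that offloads the video encode to Intel's
# Quick Sync AV1 encoder (av1_qsv) while passing audio through untouched.
def av1_qsv_cmd(src, dst, bitrate="8M"):
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "av1_qsv",   # Intel hardware AV1 encoder (Arc / QSV)
        "-b:v", bitrate,
        "-c:a", "copy",      # don't re-encode audio
        dst,
    ]

print(shlex.join(av1_qsv_cmd("master.mov", "out.webm")))
```

On a machine with both an iGPU and an Arc card you may also need ffmpeg's QSV device-selection options to steer the job to the right adapter; check `ffmpeg -h encoder=av1_qsv` on your own build.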

 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
One of the things that makes the 4090 actually interesting in this regard is that it has dual hardware encoders. I'm not sure if they work in parallel, and that's something I'd like to know. I will say that NVENC in my experience beats all comers for hardware video encoding on Windows, as long as your output format is something it supports.

LM, depending on what hardware you need to connect, remember that you can get PCIe extender cables to run extra hardware.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
What is the GPU encoding, crypto stuff? I thought it was playing video games or something.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
What is the GPU encoding, crypto stuff? I thought it was playing video games or something.

There can be big improvements for video editing that come from having direct hardware codecs for particular file types. This is why ProRes 422 HQ is such a big thing for Macs as an intermediate editing format: it's much smaller than raw video formats, preserves 10 bits of color, and is fast and easy to work with. Getting hardware support for AV1 will hopefully bring Windows closer to parity in that respect. Or maybe it'll only be good as an output format. Right now, if I make something AV1, I have to use my CPU, which is a massive pain point, and I don't really wanna bother with it.

In a video editor it's an especially big deal because, for high-res video, it's often a lot easier to build a proxy file, work with that, and then apply the edits done to the proxy to the source video. There should be a huge workflow improvement in the newest GPUs and the latest software.
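The payoff of proxy editing is mostly pixel arithmetic — per-frame work falls with the square of the scale factor. A quick sketch, using a 720p proxy for 4K UHD as one common (illustrative) choice:

```python
# Why proxies help: the editor decodes and renders far fewer pixels
# per frame when working against a downscaled copy of the source.
def pixels(w, h):
    return w * h

uhd = pixels(3840, 2160)     # 4K UHD source frame
proxy = pixels(1280, 720)    # 720p editing proxy

print(f"source: {uhd / 1e6:.1f} MP/frame, proxy: {proxy / 1e6:.1f} MP/frame")
print(f"{uhd // proxy}x less pixel data per frame")
```

A one-third scale proxy means 9x fewer pixels per frame, on top of whatever the proxy codec saves in decode cost, which is where the "a lot easier" comes from.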
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,296
Location
Gold Coast Hinterland, Australia
There have been a bunch of overviews of the 4090, and it seems the claims of performance roughly doubling over the 3090 are more or less true, but I haven't found a single discussion of them as general-purpose computing devices. AV1 encoding is on my radar as an important feature to consider for this hardware generation, but right now no one is talking about anything but game frame rates.
The only one I've come across is https://www.youtube.com/c/EposVox/videos
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
Puget Systems finally put up some analysis of the 4090 for content creation workloads. Here's the executive summary:

Overall, the new NVIDIA GeForce RTX 4090 24GB GPU represents a massive leap in GPU performance. The exact amount depends highly on the application, with the greater benefit of course being found when the GPU is the primary bottleneck to performance.

For video editing, the RTX 4090 can be as much as 40% faster than the previous generation RTX 3090 and 3090 Ti, or almost 2x faster than the older RTX 2080 Ti. The RTX 40 Series also brings about a small performance boost for those using the GPU for either hardware decoding or encoding of H.264 and HEVC media.

Unreal Engine sees an even greater performance gain, with the RTX 4090 giving us roughly an 85% increase in FPS over the RTX 3090 and 3090 Ti across all our tests. Depending on the exact use case (ArchViz, Virtual Production, etc.), that means either faster renders, smoother performance, or the capacity for increased detail.

Lastly, GPU rendering is really where you are going to get the most out of a more powerful GPU, and the RTX 4090 comes through in spades. GPU Rendering is often nearly twice as fast as the previous generation RTX 3090 or 3090 Ti, or four times faster than the older RTX 2080 Ti.

Unfortunately, it doesn't look to me like the dual hardware encoders are THAT awesome for video encoding, or Resolve isn't set up to use more than one, because the gain there is only about 10% over a 2080 Ti.

And even more annoying, nVidia didn't bother to support 10-bit HEVC 4:2:2 in the 4000-series hardware, which is the thing I was really hoping to see. I've only just found confirmation of that this morning. Intel Arc DOES support it, but apparently it works much, much better on an Intel platform than on AMD.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
Discovered by accident that nVidia artificially limits Integer Scaling to the RTX 20 series and newer, including the GTX 16 series. I'm beyond nuclear pissed about that since my R9 380 could do that.
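For reference, integer scaling is just nearest-neighbour upscaling locked to whole-number factors: every source pixel becomes an n×n block with no filtering, which is why pixel art stays sharp. A minimal pure-Python sketch of the idea:

```python
# Integer scaling: each pixel of a 2D image becomes an n-by-n block.
def integer_scale(image, n):
    """image: 2D list of pixel values; n: whole-number scale factor."""
    out = []
    for row in image:
        widened = [px for px in row for _ in range(n)]  # repeat each pixel n times
        out.extend(widened[:] for _ in range(n))        # repeat the row n times (copies)
    return out

tiny = [[1, 2],
        [3, 4]]
print(integer_scale(tiny, 2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

GPU drivers do the same thing in the scan-out/scaler hardware, which is why it is a driver feature rather than something games implement.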
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
AMD lifted the curtain on RX 7000 hardware, and the 7900 is apparently more than twice as fast in compute terms as the 6900, with a downright reasonable 300W TDP and $1000 MSRP.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
Looks like I wasn't super far off the mark, then.

With all this stuff nVidia's pulling recently, some rumblings about bad drivers for Turing and older cards, and my still being salty about the lack of integer scaling support on my 1070, I find my eyes wandering a bit. The 1070 wouldn't go to waste, exactly -- I could use it as an encoding card in a new NAS build for Plex, and/or fold full-time on it. I got to about a million points and then put it down over energy cost concerns, but being totally honest, a 150W card like the 1070 running 24/7 won't impact my bill that much.

With prices dropping on the 6000 series day by day, and emboldened by my friends' success stories this time around, I may well wait for a 6700 XT from a decent AIB to hit $280ish new and then buy one of those. It's a nice upgrade, it'll actually support integer scaling (important to me given how many of the games I play are older titles running upscaled), and it won't require me to rush out and buy a new power supply to adequately juice it, since it's only 80W more than my current card. My 380 was of similar wattage and I ran it with a much higher-wattage CPU, so I know my 12V rails can take it.

The RT performance is behind the 30 series. That's fine at the prices they're set to be going for soon, once AIBs are trying to clear stock -- AMD doesn't seem to be going the route nVidia is, setting the prices of the new stuff so ridiculously high as to be unobtainium while they wait for 30-series stock to dwindle. I've also yet to see more than one or two really compelling uses of RT outside demos made to make the hardware look good anyway.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
Are the AMD GPUs supported by all the image processing software now?
My understanding is that they were better accelerated by NVidia.
I care not one iota for the slaughter video games, money laundering cryptotics, etc.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
You in particular are probably better off with an nVidia card. Everything down that aisle pretty much exclusively uses CUDA, which AMD will never support. I find it quite funny actually that the OpenCL standard gained almost zero traction here, but then, FireGL sales had been in decline for years and years at this point.

I don't use my hardware for, ahem, productive purposes strictly, so I'm okay if for example Photoshop or Premiere doesn't support my GPU very well. The only thing that really bites is when OBS can't make use of the hardware encoders, which for a little while, it couldn't.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
Apparently, if you're doing video encoding work and you have an Intel CPU with graphics and one or more Arc GPUs, Intel's architecture allows all of the above to be used at the same time.

Arc GPUs have apparently had some pretty serious driver improvements since launch. I believe they still lag behind nVidia and AMD on older gaming APIs, but Intel is claiming 80% improvements vs. the release driver.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
Are they as good as or better than the RTX 3000 series, or more like the 4000 series?
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
So I should still target the 4070 in Q1, or whenever they are sold? It appears the 4070 Ti will be out first, but it's 285W and probably won't fit.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
Apparently the Compute/$ ratio on the 4070s is at least in the realm of sanity, but I still don't see myself getting an $800 video card that has no idea what to do with the native video output from my cameras.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
How is the camera relevant to the computer monitor resolution? Even sensors that are low-to-medium resolution by today's standards, like the 24MP D3X from 14 years ago, far exceed 4K or 5K displays. You need 8K for that, and then higher still for the 50+MP bodies.
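The arithmetic backs that up (the display figures are the standard UHD pixel counts; the D3X figure is its nominal 24MP):

```python
# Compare still-camera sensor resolution against common display resolutions.
def megapixels(w, h):
    return w * h / 1e6

d3x   = 24.0                      # Nikon D3X sensor, ~24MP
uhd4k = megapixels(3840, 2160)    # ~8.3 MP
fivek = megapixels(5120, 2880)    # ~14.7 MP
uhd8k = megapixels(7680, 4320)    # ~33.2 MP

print(f"4K: {uhd4k:.1f} MP, 5K: {fivek:.1f} MP, 8K: {uhd8k:.1f} MP")
# A 24MP still out-resolves both 4K and 5K; only 8K exceeds it.
```

So a 4K display shows less than half the pixels a 14-year-old 24MP sensor captures, and 50+MP bodies out-resolve even 8K.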
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
Native encoding and decoding of video has become the driver of my upgrade choices, particularly in knowing that "none of the above" is the strongest contender. There are a couple of Quadro RTX cards with support for H.265 4:2:2, but they also cost as much as a small car.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,074
Location
USA
That is not surprising. Why not buy a good 4K camera and stop messing with the goofy Canon MILS and oddball resolutions? Very few customers will pay for high quality 8K.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
Welp. The scandal surrounding cooler and driver issues on the 7000 series (albeit the latter aren't relevant to me since I don't do VR) kind of dashed any hopes I had of getting last-gen parts at a decent price any time soon. Intel's still having some issues with older DirectX versions on their Arc GPUs, I think, but I have entire machines just for running old games now, and that situation is improving all the time. Plus, I'm technical enough to figure out how to use DXVK if I need to. I'm very tempted by the encode block, and the pricing is very fair. And since they're having the aforementioned issues right now, stock isn't super difficult to come by.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,132
Location
I am omnipresent
Website
s-laker.org
That is not surprising. Why not buy a good 4K camera and stop messing with the goofy Canon MILS and oddball resolutions? Very few customers will pay for high quality 8K.

Canon, Sony and Panasonic all use 4:2:2 luminance/chrominance in the raw and standard video output formats of their mirrorless cameras. It's not a resolution issue but a color space issue. Keeping 4:2:2 rather than switching to 4:2:0 means I'm better able to do color grading, which is incredibly useful for getting output from one camera to match a different one, since the color matching between my R6 and R7 isn't perfect either. I'm not aware of any camera at sub-Arri (i.e. full-on movie studio camera) level that does 4:4:4, and something that natively outputs 4:2:0 is already a downgrade. Even most dedicated video cameras, the C70 or Z150 et al., also output HEVC or LOG 4:2:2.
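For anyone following along, the storage difference between those subsampling schemes is easy to work out: per frame, 4:2:2 carries two-thirds, and 4:2:0 half, the samples of 4:4:4.

```python
# Samples per frame for each chroma subsampling scheme.
# 4:2:2 halves chroma resolution horizontally; 4:2:0 halves it both ways.
def samples_per_frame(w, h, scheme):
    luma = w * h
    chroma = {"444": luma, "422": luma // 2, "420": luma // 4}[scheme]
    return luma + 2 * chroma  # one luma (Y) plane plus two chroma planes (Cb, Cr)

w, h = 3840, 2160
for s in ("444", "422", "420"):
    rel = samples_per_frame(w, h, s) / samples_per_frame(w, h, "444")
    print(f"4:{s[1]}:{s[2]} -> {rel:.2f}x the data of 4:4:4")
```

The grading argument follows directly: 4:2:2 keeps twice the color information of 4:2:0 for a modest size premium, which is what gives the grade more to work with.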

DaVinci Resolve says it supports HEVC 4:2:2 on anything that does Intel Quick Sync, but I have yet to see anyone actually confirm that, and the documents I can find on Intel's website about Arc's hardware encoders aren't detailed enough to tell me what they can or can't do beyond simple codec support, or whether they officially count as Quick Sync.

I am tempted to stick an Arc A380 in my workstation just to see what it does.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
When I saw an A770 LE on Newegg going for MSRP with a good software bundle (of particular note are Topaz Gigapixel and PowerDirector, as they're supposedly perpetual licenses, even if for one particular version), I pounced. I initially thought I might sell the 1070 to recoup some costs, but I actually don't need to -- when it's completely idle it uses maybe 5W at most, and apparently I can turn it off completely if I need to. I can finally go back to dailying Linux on my personal machine, with a Windows VM with GPU passthrough for the few things I actually really need Windows for (some games and software I own refuse to work in WINE/Proton for love or money). The Arc performance woes in DX9 and DX11 under Windows are moot when everything that isn't OpenGL or Vulkan has to be translated anyway, and I won't have to rely on Intel to continue supporting my GPU if they decide it's not worth remaining in the industry anymore.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,455
Location
Eglin AFB Area
Predictably, Newegg shipping is honking awful: it was supposed to arrive yesterday, and it only just got handed off to the courier service last night. UPS tracking pushed delivery out another 6 days; I'll be lucky if I receive it even then, I think. It used to be that if they missed a deadline they'd upgrade the package to Next Day Air -- when I worked for them, half the mail I was loading was 2- and 3-day packages they'd dallied around with for too long. Apparently either they don't do that anymore, or Newegg paid for some awfully bargain-basement shipping deals.

I have done a bit more planning of what my Linux system will look like -- usually I don't get this far; I play it far too much by ear. Right now the only feasible distro for me is Artix, since I don't like systemd but still need a modern kernel. In fact, I need kernel 6.2 as soon as it hits the repos, since it finally marks "DG2" as "officially supported" instead of experimental. With an experimental driver, you have to specify kernel parameters to get modesetting and other such necessary bits.

I'm planning to allocate the entire 256GB NVMe SSD to /boot, swap and /, with the first 1500GB of the SATA SSD serving as /home so I have local space to install Steam games to. I initially wanted to mount /home as an NFS share from my NAS, but I've convinced myself it'd be better to confine that to the actual data directories of my user, since Steam installs games to ~/.local/share ... The last 500GB of the SATA SSD will be earmarked for the Windows VM.

Thankfully DDR4 RAM is cheap now. I had the epiphany, after I bought the GPU, that if I was going to do a VM setup I'd better have plenty of RAM to give it for games, especially since I usually sit at around 8GB used just with my normal multitasking. The jury's still out on whether my R5 5500 will be enough for that setup, but it'll help if I can give the VM 4c/8t that the host OS won't try to use while they're allocated.
 