For any type of input signal fed into any type of TV/capture card, the signal flow and processes which must occur are pretty much the same, albeit with some variation depending upon the application (i.e. digital vs analog; software vs hardware). Making and understanding the distinctions between these processes goes a long way toward understanding the nature of all things video capture related. I will do my best to elucidate them and provide a general conceptual framework.
Let's consider an analog TV input signal
Analog TV transmissions are composite (YUV) signals placed on an RF carrier. In order to display the video signal on your computer, the card must first convert the analog signal into a digital bitstream. The following steps are performed by the analog TV tuner card/device:
- Step 1: The card's tuner picks up a selected broadcast channel's RF signal, performs Automatic Gain Control (AGC) and converts it into a standard intermediate frequency (IF) ... the video portion of the signal is denoted V-IF, and the audio portion S-IF.
.
- Step 2: Next, the IF is routed through the device's demodulator, where the composite signal is recovered from the carrier wave. Specifically, V-IF is input into the demodulator and CVBS (Composite Video Baseband Signal) is output from it. Similarly, the S-IF is converted to baseband left and right audio channels.
Note – A receiver = tuner + demodulator. The receiver portion of the device is also referred to as the frontend. Frontends are often entirely contained within a tin can, known as a NIM (network interface module), although the move to silicon components will soon see the tin can designs disappear. Likewise, ICs are becoming far more integrated (read: multi-functional), so a given capture device might not contain all the individual components named in my description. Nonetheless, it is the concepts of the description that are important, as the devices will still perform the same operation stages no matter whether one, two, three, or many ICs perform all the necessary tasks. So, in essence, what I am describing is the generalized design ... think of it as the block diagram.
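For the curious, the amplitude-demodulation idea behind Step 2 can be caricatured numerically: rectify the modulated carrier, then average over one carrier cycle to recover the slow envelope. This is a toy sketch of my own (pure Python, made-up toy frequencies), not how any real demodulator IC works:

```python
import math

def am_demodulate(samples, carrier_period_samples):
    """Recover the baseband envelope from an AM signal:
    full-wave rectify, then average over one carrier cycle."""
    rectified = [abs(s) for s in samples]
    n = carrier_period_samples
    return [sum(rectified[i:i + n]) / n for i in range(len(rectified) - n)]

# Toy "IF": 100 kHz sample rate, 10 kHz carrier, 500 Hz baseband tone
fs, fc, fb = 100_000, 10_000, 500
t = [i / fs for i in range(2000)]
baseband = [0.5 + 0.4 * math.sin(2 * math.pi * fb * x) for x in t]
am = [b * math.sin(2 * math.pi * fc * x) for b, x in zip(baseband, t)]

recovered = am_demodulate(am, fs // fc)
# 'recovered' tracks the slow 500 Hz envelope, not the 10 kHz carrier
```

(Real TV demodulation is vestigial-sideband, not plain AM, so treat this strictly as a concept demo.)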
.
- Step 3 - This CVBS is then routed to the card's decoder chip. The decoder performs analog-to-digital conversion (ADC), whereby the composite (YUV) signal is converted into an uncompressed digital bitstream (either RGB or YCbCr) ... similarly, the audio signals are digitized by a decoder ... if a single decoder IC is used for the ADC of both audio and video, then it is referred to as (surprise, surprise) an A/V decoder.
Note - in the computer world, YCbCr is commonly, yet incorrectly, referred to as YUV when spoken of in terms of codecs, etc.
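Incidentally, the colourspace side of what the decoder emits is just a linear transform. Here's an illustrative sketch of the standard BT.601 "studio swing" RGB-to-YCbCr conversion (the coefficients are the published BT.601 8-bit ones; the helper function itself is mine):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 'studio swing': 8-bit R'G'B' in; Y' lands in [16, 235]
    and Cb/Cr in [16, 240]."""
    y  =  16 + ( 65.738 * r + 129.057 * g +  25.064 * b) / 256
    cb = 128 + (-37.945 * r -  74.494 * g + 112.439 * b) / 256
    cr = 128 + (112.439 * r -  94.154 * g -  18.285 * b) / 256
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (235, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (16, 128, 128)
```

Note how neutral greys map to Cb = Cr = 128: the chroma channels carry only colour difference, which is exactly why 4:2:2/4:2:0 subsampling of Cb/Cr works so well.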
Note – here's the subtle key about what constitutes “encoding” and “decoding”:
- Decoding transforms a signal into a “raw” bitstream that can be natively rendered to the system's display device (i.e. an uncompressed signal in the RGB or YCbCr colourspace)
- Encoding transforms a signal into a form which cannot be natively displayed on the system's display device ... generally speaking, compression formats like mpeg2, etc.
So, as in the above description, since the analog YUV signal is converted into an uncompressed digital (RGB or YCbCr) bitstream, it is referred to as a decoding operation, which is why the particular dedicated hardware IC on the capture card for this task is called a decoder.
Conversely, with a video card that has TV-out, an encoding operation is performed whereby an analog YUV (composite), Y/C (s-video) or YPbPr (component) signal is created from an uncompressed RGB bitstream; hence why you find a TV-encoder (a DAC) on the card (if such functionality is not natively incorporated into the video card's GPU).
..... note that an encoding operation kept strictly in the digital domain was referred to just above (i.e. converting the uncompressed RGB or YCbCr bitstream into a compressed bitstream format, like mpeg2, etc.)
.
- Step 4 - As mentioned above, with respect to the video signal portion, the resultant output from the decoder IC is a raw component bitstream, and it can be sent over the system bus (PCI, USB ... depending upon the bus the capture card/device sits on) and piped to:
a) the video graphics device for immediate display, or
b) Alternatively, one can save the digitized stream to a file on disk. Saving the file to disk obviously allows for later playback (either much later, or perhaps soon after the capture, à la PVR-style timeshifting functionality, etc). Unfortunately, even at standard definition resolutions*, the raw, uncompressed digitized bitstream tends to be quite large. For ease of storage and playback, the stream is generally saved in a compressed format (such as MPEG2), necessitating an encoder
Note - In PCI designs, it is the A/V decoder that functions as the bridge device to the PCI bus. In USB designs, a USB bridge IC will usually be employed, as A/V decoders currently (generally) lack such interfacing capabilities
Note - * standard definition refers to digital formats of specified resolutions ... in this case, it implies that the analog signal has been converted to a digital file format ... quite often you will see individuals refer to analog as sdtv, but analog is not sdtv (analog is, er, analog!). But you most certainly can capture an analog signal and convert it to a digital standard definition format (compressed or uncompressed).
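To put numbers on "quite large", here's a back-of-envelope calculation. The figures are my own illustrative assumptions (NTSC-resolution SD, 4:2:2 sampling, a typical ~6 Mbit/s MPEG2 capture rate), not tied to any particular card:

```python
# Raw bitrate for uncompressed SD video (illustrative figures)
width, height = 720, 480        # NTSC SD frame
fps = 29.97                     # NTSC frame rate
bits_per_pixel = 16             # 4:2:2 YCbCr (8-bit Y + shared Cb/Cr)

raw_bps = width * height * bits_per_pixel * fps
print(f"raw 4:2:2 SD: {raw_bps / 1e6:.0f} Mbit/s")        # ~166 Mbit/s
print(f"one hour:     {raw_bps * 3600 / 8 / 1e9:.0f} GB")  # ~75 GB

# versus a typical MPEG2 capture at ~6 Mbit/s (~2.7 GB/hour)
mpeg2_bps = 6e6
print(f"compression factor: ~{raw_bps / mpeg2_bps:.0f}x")
```

Roughly a 28x size reduction, which is why nearly every capture workflow that targets disk storage runs the stream through an encoder first.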
.
- The encoder can be either software based (i.e. the host cpu performs all encoding tasks based on a particular software encoder's algorithm) or come in the form of a dedicated hardware IC contained on the capture device. In the case of a hardware based solution, the signal flow is from the A/V decoder to the encoder IC, which will have some associated RAM for buffering and calculation operations. After the encoding operation, the compressed bitstream can then be routed across the appropriate system bus (so either the bridge work is handled by the encoder, or the encoder routes the processed signal back through the A/V decoder as a conduit to the system bus).
Note - Those who want PVR functionality tend to choose capture cards with an onboard hardware MPEG2 encoder (there are a large number of such cards). Hardware based encoding cards greatly assist timeshifting capability (and, as well, most viewing apps don't support timeshifting unless a hardware encoding card/device is used). As the host cpu does not have to perform the mpeg2 encoding, it is of course free to perform other duties.
Note - Not all encoders are created equal. In particular, this is true of MPEG2, because the encoding process is defined by a framework rather than by strict standardization. This is true of both hardware and software solutions. Having a hardware solution does not guarantee a superior quality capture over a software solution --- I say this only because far too often the PVR community throws out blanket statements like "hardware cards produce better quality than software cards", as if having hardware somehow (because of some law of physics) magically performs better. Dismissing software based cards as incapable of producing quality captures is simply preposterous. What should, more properly, be noted is that hardware solutions, like the Hauppauge style devices, are capable of producing excellent quality captures and are ideal for PVR applications. Regardless, real-time encoding is necessarily one-pass, so even greater image quality can be obtained by capturing raw uncompressed streams and then doing a multi-pass encode.
A (somewhat) brief note on playback if Step 4b is performed
When it comes to the highly compressed file formats that you saved to disk (i.e. "captured"), the compressed bitstream needs to be decoded (uncompressed) if you are going to play/view it on the system's display device (i.e. you have to turn it back into what the system can natively handle -- that raw, uncompressed YCbCr or RGB bitstream). This is the job of a decoder.
Note - no ADC is involved in this case (unlike with an A/V decoder) as all operations remain in the digital domain.
The decoder can be either software based (i.e. performed by the host cpu following some software algorithm) or hardware based (i.e. a discrete decoder IC that contains the necessary algorithm logic) ... or a bit of both (i.e. when offloading parts of the decoding routines onto the video card's GPU is supported by the software decoder, video card, video card drivers and video playback app).
Very few hardware decoding based solutions exist. In terms of TV capture cards, if they possess a hardware decoder IC, it invariably means it's an MPEG2 decoder. Offhand I can only think of one or two (analog) TV capture cards that also have a hardware mpeg2 decoder (e.g. the Hauppauge PVR 350), plus a couple of other specialized cards like the Hollywood+, etc. Note – one or two devices do have ICs that support decoding of standard definition mpeg4 ASP bitstreams, but they are most certainly the exception (e.g. Plextor capture devices).
Anyway, in the case of a device with an onboard hardware based mpeg2 decoder, for playback, the compressed mpeg2 format bitstream is sent from the disk across the system bus to the card/device containing the hardware mpeg2 decoder. After the stream is decoded, it is routed straight
from the capture card/device to the display device*. You don't send decoded video back across the system bus to your video card for output handling because the signal is now back to an uncompressed state, and uncompressed equals big badass bitrates (even for SD resolutions) that gum up the system bus if you're already simultaneously streaming the compressed file from disk to the card for decoding! ... let alone running anything else on the bus (like networking functions etc etc).
Note – * if your intended viewing device is an ordinary TV, then digital-to-analog conversion (DAC) is necessary and would be performed by a TV-out encoder IC
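As a rough sanity check on the bus argument above, here's the arithmetic. The figures are my own illustrative assumptions (classic 32-bit/33 MHz PCI, NTSC SD at 4:2:2, ~6 Mbit/s MPEG2):

```python
# Why decoded video stays off the shared PCI bus (illustrative figures)
pci_theoretical = 133.0                         # MB/s, 32-bit/33 MHz PCI;
                                                # real sustained throughput
                                                # is lower and shared by
                                                # every device on the bus
compressed_in  = 6e6 / 8 / 1e6                  # ~0.75 MB/s MPEG2 from disk
decoded_out    = 720 * 480 * 2 * 29.97 / 1e6    # ~20.7 MB/s raw 4:2:2 SD

# Shipping decoded video back to the graphics card would put the raw
# stream on the bus *on top of* the compressed stream coming in:
bus_load = compressed_in + decoded_out
print(f"{bus_load:.1f} MB/s of a shared {pci_theoretical:.0f} MB/s bus")
```

Twenty-odd MB/s sounds survivable against the theoretical ceiling, but remember that disk DMA, network traffic, etc. all contend for the same shared bus, so keeping the fattest stream local to the card is the sane design.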
It might be prudent to note that while hardware decoding might sound attractive, the reason why MPEG2 decoding at standard definition resolutions is usually software based is because host cpus are behemoths nowadays and scoff at a task that brought their not-so-long-ago predecessors to their knees ... okay, maybe it is still somewhat challenging, but other improvements have also been introduced into the decoding chain. Specifically, and despite their name, a lot of software decoders actually unburden the host processor by shifting much of the decoding work over to the dedicated hardware of your system's graphics adaptor. I don't think there is a video card sold today that isn't capable of providing some sort of mpeg2 decode acceleration, and most cards' drivers for Windows OSes will support that capability (a completely different story under Linux ... stupid Nvidia (support limited to > GF4), ATI (no support), VIA (badly limited support), and SIS (no support) ... but props to Intel for full support of its IGPs!). So most decoders (and playback programs) will leverage those video card mpeg2 decode acceleration capabilities, which of course has the desired effect of much reducing processor utilization.
Of course, if you originally captured to a compression format other than mpeg2, there is now video card playback acceleration support (ranging from good to rudimentary) for a few other compression formats too (if you're using a Windows OS platform). But I digress.
Perhaps the only other noteworthy tidbit is that not all MPEG2 software decoders (or even hardware ones, for that matter) are created equal -- there is leeway for interpretation in the decoding process, and as a result, some decoders produce better picture quality than others, while perhaps doing so more or less efficiently (as measured by cpu utilization) than their counterparts.*2
*2 - this is not the case with AVC (MPEG4 AVC). Its decoding process is bit exact, so there are no PQ differences attributable to the different decoders -- PQ differences can only be attributable to any post-processing involved. The only thing that differs between decoders is their level of efficiency ... which can vary wildly
So, to recap the mechanics and dynamics of the analog TV-in signal path
An analog capture card is typified by the following stages:
RF input signal -> [tuner] -> IF signal -> [demodulator] -> composite video baseband signal -> [decoder] -> uncompressed digital component bitstream signal -> which from this point can be either routed to:
1) display device for immediate viewing or to
2) either
- i) the host processor (as in the case of "software" encoding cards) or
- ii) an onboard encoder (as in Hauppauge et al "hardware MPEG2" type encoding cards)
for encoding the stream to a suitable compression format, after which it is routed off to the hard disk (from which it can be played back either much later or in the very near-term, in the case of PVR-type time shifting)
3) When the pathway for point 2 was followed (i.e. encoding occurred), it necessitates that a decoder be employed in order to achieve playback.
[For support of this purpose, the capture device itself may optionally include an onboard hardware decoder IC and a TV-out encoder]
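The recap's block diagram can be caricatured in a few lines of Python. The stage names and string labels here are purely illustrative inventions of mine (no real driver API looks like this); the point is just the ordering of the stages:

```python
def tuner(signal):
    assert signal == "RF"
    return "IF"                  # AGC + downconversion to V-IF/S-IF

def demodulator(signal):
    assert signal == "IF"
    return "CVBS"                # composite baseband off the carrier

def av_decoder(signal):
    assert signal == "CVBS"
    return "raw YCbCr"           # ADC -> uncompressed digital bitstream

def encoder(signal):
    assert signal == "raw YCbCr"
    return "MPEG2"               # compression (host cpu or onboard IC)

signal = "RF"
for stage in (tuner, demodulator, av_decoder, encoder):
    signal = stage(signal)
print(signal)                    # "MPEG2" -- what lands on disk in path 2
```

Dropping the final encoder stage gives you path 1 (raw bitstream straight to the display); playback from disk is the same chain run conceptually in reverse from "MPEG2" back to "raw YCbCr".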
Let's consider an analog A/V input signal (fed into either a TV capture or a strictly-capture device)
Step 1 – When you plug an analog input source - i.e. the video portion is one of YUV (composite), Y/C (S-Video) or YPbPr (component) - into your capture device, the signal is routed straight to the A/V decoder for ADC operations. For the remainder of the story, refer back to Step 3 above for a recount of what this entails and proceed forward, as absolutely nothing else is different from then on.
Note – I can't think of any TV capture cards offhand with component inputs, although pretty well all A/V decoder ICs can accept such inputs (in fact A/V decoders have a number of different input combos ... just have a glance at a datasheet for one and you'll see how many they can indeed accept! ... which explains why even older BT878A chips work quite well for multi-camera security/surveillance). Anyway, despite the lack of component input jacks on TV capture cards/devices, you can still record component sources through an extremely convoluted method (convoluted and extreme being the key terms, though). There are, however, a number of strictly-capture devices that have component inputs ... for the most part, they tend to be USB based devices.
***************
To follow in the near future (hopefully tonight):
- How DTV (OTA, Digital Cable, Sat) cards/devices work
- Capturing uncompressed Hi-Def signals
- and even more unsolicited ranting!