Colour TV

[Image: colour bars on an old TV]

It is traditional to give facetious translations for the colour system acronyms:

  • NTSC: “Never Twice the Same Color”.
  • PAL: “Pictures At Last” (refers to the amount of time it took for Europe to get colour TV).
  • PAL: “Pay for Added Luxury”.
  • SECAM: “System Essentially (Entirely?) Contrary to the American Method”.

NTSC

All analog colour TV systems are based on the NTSC (National Television System Committee) system, which, for its day, was a brilliant feat of engineering. The idea behind it was that to transmit colour TV, it wasn’t necessary to transmit separate channels of Red, Green, and Blue. Three channels are necessary, because our eyes have three types of colour receptor, but these can be made up of combinations of R, G, and B, such that one channel corresponds to a monochrome picture, and the other channels tell the TV how to deviate from monochrome in order to recreate colour. We take it for granted now that a black and white TV can tune to a colour signal, but the systems proposed before this invention were not black and white compatible.

The signals used are called Luminance (Y), where Y = 0.30R + 0.59G + 0.11B (the combination which simulates white light), and the colour differences R-Y and B-Y. The colour difference signals together are known as Chrominance or Chroma. The Y-channel is simply modulated onto a monochrome TV waveform, to provide a black and white compatible TV signal, and the chroma has to go somewhere else. The obvious thing to do with the chroma is to put it into a couple of spare TV channels next to the monochrome one, but a few clever tricks make such a bandwidth-wasteful approach unnecessary. The first observation is that the eye is much less sensitive to fine detail in colour than it is in brightness, so the two colour difference signals can be reduced in bandwidth so that they each need only half the width of a TV channel (or less). So now we’ve fitted the colour signal into only two monochrome TV channels, but better still, two signals can be squashed into the space of one by using a scheme called Quadrature Amplitude Modulation (QAM) – the method used nowadays to get large(ish) amounts of data to travel through telephone lines. So now we’re down to one and a half TV channels, but then the ghost of old Jean-Baptiste Fourier comes to show how to get it all into one.
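
As a rough illustration of the matrixing step just described, here is a minimal Python sketch (not from the original article) of the luminance and colour-difference calculation, using the standard weights:

```python
# Sketch of the "matrixing" step: one luminance channel plus two colour
# difference channels, computed from R, G, B.  Weights are the standard ones
# for these systems (Y = 0.30R + 0.59G + 0.11B).

def rgb_to_ybr(r, g, b):
    """Convert normalised R, G, B (0..1) to luminance and colour difference."""
    y = 0.30 * r + 0.59 * g + 0.11 * b   # the black-and-white compatible signal
    return y, r - y, b - y               # (Y, R-Y, B-Y)

# White (R = G = B = 1) gives Y = 1 and zero colour difference, which is why
# a monochrome set can ignore the chroma entirely.
print(rgb_to_ybr(1.0, 1.0, 1.0))   # ~ (1.0, 0.0, 0.0)
print(rgb_to_ybr(1.0, 0.0, 0.0))   # pure red: ~ (0.3, 0.7, -0.3)
```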

If you look at the spectrum of a TV signal, you find that it is not continuous, but is made up of spikes at multiples of the line scanning frequency. Each of the spikes has sidebands on it, spaced off at multiples of the frame rate, but the frame sidebands are weak compared to the line harmonics; and so the spectrum essentially has hundreds of large gaps in it, each easily big enough to fit an AM radio station. Chroma signals are also TV signals, so they have the same type of spectrum, and so the trick is to modulate the chroma onto a subcarrier which sits exactly half way between two of the spikes of the luminance spectrum. In this way, the chroma signal can be placed right in the video band, and all of the spikes of its spectrum fit between the luminance spikes. This technique is called interleaving and, in combination with interlacing, results in a TV waveform in which the subcarrier phase gets back to its starting point after four fields (2 frames). Modern video-tape recording was in the process of being invented at the time by one A. M. Poniatoff (par EXcellence), with the financial backing of Bing Crosby. The RCA engineers may not have known it, but NTSC would turn out to be ‘electronic editing friendly’.
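
To put numbers on the interleaving trick, here is a quick Python check using the published NTSC figures (the constants below are standard values, not something quoted in the article itself):

```python
# The NTSC colour subcarrier is an odd multiple of half the line frequency
# (455/2 = 227.5 subcarrier cycles per line), so every spike of the chroma
# spectrum falls exactly half way between two luminance spikes.

line_freq = 15_734.2657            # published NTSC colour line rate, Hz
subcarrier = 455 / 2 * line_freq   # ~3.579545 MHz

print(f"subcarrier = {subcarrier / 1e6:.6f} MHz")
print(f"offset from the nearest line harmonic = {subcarrier % line_freq:.1f} Hz")
print(f"half the line frequency               = {line_freq / 2:.1f} Hz")
```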

To get the chrominance signals back out of the TV signal, the system designers resorted to a trick called synchronous demodulation – a method used by spectroscopists and others to recover signals buried in noise. There was one small problem however, which was that the subcarrier was visible as tiny dots on the screen, and while colour tubes of the day were too crude to reproduce them, they could be seen on black and white sets. The solution was to use a trick developed for short-wave radio communication, which was to use suppressed-carrier amplitude modulation for the chroma. It may sound surprising, but the carrier signal of an AM radio transmission carries no information. A lot of transmitter power can be saved by leaving it out, as long as it is re-inserted in the receiver prior to demodulation. A short-wave radio has a carrier insertion oscillator or beat-frequency oscillator (BFO) for this purpose. The operator simply tweaks the tuning to get the oscillator in the right place, et voilà – the signal becomes intelligible. For a QAM signal however, and for synchronous detection, both the frequency and phase of the carrier must be re-established, so in NTSC, a small reference burst of carrier is sent just before the beginning of each line. Using suppressed-carrier QAM was a brilliant idea, because it meant that in areas of the picture where there was no colour, there was no colour signal either. The dot interference was thus greatly reduced, and effectively confined to areas of high colour saturation only.
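
The following toy Python sketch (invented numbers, not a broadcast specification) shows suppressed-carrier QAM and synchronous demodulation in miniature: two signals share one carrier, and are pulled apart again by multiplying by locally regenerated sine and cosine carriers and averaging away the double-frequency terms:

```python
# Suppressed-carrier QAM and synchronous demodulation, in miniature.
# Numbers are arbitrary; this is only a demonstration of the principle.

import numpy as np

fs = 1_000_000            # sample rate, Hz
f_sc = 100_000            # "subcarrier", Hz (arbitrary for the demo)
t = np.arange(0, 0.01, 1 / fs)

u = 0.3 * np.ones_like(t)             # a slowly varying B-Y (constant here)
v = -0.2 * np.ones_like(t)            # a slowly varying R-Y

# Suppressed-carrier QAM: no carrier term, just the two modulated components.
chroma = u * np.cos(2 * np.pi * f_sc * t) + v * np.sin(2 * np.pi * f_sc * t)

# Synchronous demodulation: multiply by the regenerated carrier (correct
# frequency AND phase, as supplied by the burst) and average out the 2*f_sc term.
u_rec = np.mean(2 * chroma * np.cos(2 * np.pi * f_sc * t))
v_rec = np.mean(2 * chroma * np.sin(2 * np.pi * f_sc * t))
print(f"recovered U = {u_rec:.3f}, V = {v_rec:.3f}")   # ~0.300 and ~-0.200
```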

NTSC achieved a respectable 3:1 bandwidth compression, in an age when valves (vacuum tubes) were the dominant technology, and no one had yet made an integrated circuit, let alone a DSP chip. It was also very daring, using every analog signal processing trick in the book; and to cap it all, it worked. It is not perfect however, and suffers from two noticeable defects:

  1. When the video signal is rich in frequencies which lie in the colour channel, luminance leaks into the colour decoder and gives rise to psychedelic effects. For this reason, checks and pin-stripes will always be out of fashion in NTSC TV studios (and PAL is inherently worse). The effect is called ‘color fire’ or ‘cross color’, and can be eliminated by modern signal processing techniques. The TV set has a ‘color killer’ circuit to prevent cross-colour from appearing on monochrome pictures, although nowadays TV companies tend to sabotage black and white films by leaving the colour burst switched on.
  2. When the composite NTSC signal suffers from distortion in the transmission chain, the QAM signal is skewed in phase, and hue shifts occur. An NTSC TV needs to have a Hue control, to get flesh tones looking right, but even this cannot fix the brightness-dependent differential phase distortion which sometimes occurs. The NTSC knew about this, and an alternative scheme called Chroma Phase Alternation (CPA) was suggested as a solution. CPA was based on the observation that if one of the colour difference signals (e.g., R-Y) was inverted on alternate fields, then any hue errors on alternate lines of an interlaced frame would be equal and opposite, and if you stood back from the screen, pairs of incorrectly coloured lines would average to the correct colour. The problem was that, if phase errors were bad, they gave the picture a flickering ‘Venetian blind’ effect, which could look a lot nastier than a straightforward hue error. The NTSC decided that the marginal benefit of CPA did not warrant the added complexity.

PAL

As the American NTSC system reached the marketplace, other countries, notably in Europe, were working on their own systems. In particular, a team at Telefunken, under the direction of Dr Walter Bruch, was working on an ingenious modification to the NTSC system which involved inverting the phase of one of the colour difference signals on alternate lines. They called the system PAL, which stood for Phase Alternating Line, or something like that. The problem with the PAL method was that, if the chroma phase errors were bad, they gave the picture a revolting ‘Venetian blind’ effect, which they called ‘Hanover Bars’, after the town in which the effect was ‘first’ discovered (the NTSC almost certainly considered both line and field CPA – but would have rejected the line version on the grounds that, over a whole frame, it exacerbates the Venetian blind effect by producing a pair of lines of one hue followed by a pair of lines of another). The solution was to average the hue errors electronically, by taking the TV line coming in off air and combining it with the previous line stored in an analog memory (une mémoire). The original memory was a device called a ‘delay line’ (line as in wire or cable, not TV line, even though it stored almost exactly one TV line), a cumbersome and lossy collection of coils and capacitors designed to simulate the time delay of a very long cable. This was soon replaced by a small block of glass with two piezo-transducers glued to it – an ultrasonic delay line.
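
A small complex-phasor sketch in Python (illustrative values only) shows why averaging a line with the stored, re-inverted previous line cancels the hue error, at the cost of a little saturation:

```python
# Why averaging a "+V" line with the previous, V-inverted line cancels hue
# errors: a transmission phase error rotates the chroma phasor one way on
# normal lines and, after the receiver re-inverts, the other way on the
# inverted lines.  Values here are invented for illustration.

import cmath

U, V = 0.3, -0.2                  # the true colour-difference pair (B-Y, R-Y)
phase_error = cmath.pi / 12       # say, 15 degrees of differential phase

line_plus  = (U + 1j * V) * cmath.exp(1j * phase_error)    # normal line
line_minus = (U - 1j * V) * cmath.exp(1j * phase_error)    # V-inverted line

# Receiver re-inverts V on the "minus" line (complex conjugate), then averages
# it with the stored previous line, as the delay-line decoder does.
average = (line_plus + line_minus.conjugate()) / 2
print(average)                                   # correct hue, slightly desaturated
print((U + 1j * V) * cmath.cos(phase_error))     # same value: saturation * cos(error)
```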

The PAL variant of NTSC needed a few tweaks to turn it into a viable standard. In particular, the dot interference with a half-line colour subcarrier offset was exacerbated by the phase inversion process, which caused the dots to line up vertically. The solution was to move the subcarrier to a position a quarter of the line frequency away from one of the line harmonics (actually 15625 × 283.75 + 25 Hz = 4.43361875 MHz). This is something of a compromise, because the interleaving is not so good. This reduces the signal to noise ratio of the synchronous demodulation process, exacerbates colour fire, and gives highly saturated parts of the picture a crawling appearance. The quarter-line offset, with interlacing, also results in a subcarrier which returns to its original phase after 8 fields (4 frames), which precludes precise electronic editing. This was a small price to pay however, for the opportunity to take out patents on top of the NTSC system and use them to control the European marketplace. The point was not to patent the transmission standard however, which was in any case just NTSC-CPA-H, but to patent the technology used in the receiver.
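
Spelling out that subcarrier arithmetic, as a quick check of the figures quoted above:

```python
# The PAL subcarrier arithmetic quoted above.  The extra 25 Hz is a further
# one-cycle-per-frame offset added on top of the quarter-line shift.

line_freq = 15_625                      # 625-line system line rate, Hz
f_sc = line_freq * 283.75 + 25          # = 4_433_618.75 Hz
print(f"PAL subcarrier = {f_sc:,.2f} Hz = {f_sc / 1e6:.8f} MHz")
print(f"offset below the 284th line harmonic = {284 * line_freq - f_sc:,.2f} Hz")
print(f"a quarter of the line frequency      = {line_freq / 4:,.2f} Hz")
```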

The Telefunken team described three decoding methods for HCPA (sorry, PAL), which they called PAL-S, PAL-D, and PAL-N (the N in this case stands for ‘new’ and is nothing to do with TV system N used in South America). PAL-S (simple PAL), was the “let them stand back until the Hanover bars aren’t noticeable” approach, which couldn’t be patented because of the NTSC prior art. PAL-D was the basic delay-line method, and PAL-N or ‘Chrominance Lock’, was a more sophisticated delay-line method which could track and cancel differential phase distortion, without the loss of colour saturation which occurs with the basic D method. Telefunken patented the delay-line methods, and used these patents vigorously in an attempt to exclude Japanese TV manufacturers from the European marketplace. Consequently, until the PAL patents expired in the mid 1970s, all Japanese TV sets in Europe either used the disgusting PAL-S, or were made by Sony.

In the early 1970s, Sony introduced a range of PAL Trinitron TV sets, which had a Hue control like an NTSC set. These were a breath of fresh air in comparison to the dreadful Shadow-Mask TVs of the day, and it was quite a status symbol to own one. The colour decoder contained a delay line. Telefunken sued – and lost. The dreaded Japanese had hit upon a third delay-line method, which was so devilishly simple that only someone whose brain was not saturated with pro-PAL propaganda could see it. Sony used the mémoire to store a line so that it could throw away alternate lines and treat the signal as though it was NTSC. If NTSC was as bad as it was claimed to be, Sony should have been inundated with complaints; but as it was, if you owned a Trinitron set in those days, people came round to your house to watch it with you, and the TV companies adopted the video-monitor versions as studio monitors (despite the EIA tube phosphors – it was the brightness they wanted). The irony was that the most discerning TV owners were watching PAL as NTSC. Sony changed to the PAL-D method when the Telefunken patents expired, and felt obliged to devise a Hue control for that, to keep up the tradition. The control didn’t do anything useful; it basically gave the user the choice of whether or not to have Hanover bars, and they dropped the idea fairly quickly.

SECAM

The French system results from a highly pertinent observation by Henri de France, its inventor: that if you’re going to use an expensive mémoire to decode the signal, then you might as well dispense with the troublesome QAM and simply send R-Y and B-Y on alternate lines. He thus came close to a scheme which might have given a pronounced improvement in any environment (studio, editing, and transmission), but the devil is always in the details. There were two technically feasible methods, at the time, for extracting signals buried beneath other signals: one was synchronous demodulation, and the other was the FM capture effect. It is well known that FM radio stations are immune to impulse interference, and the idea was to use this trick to make the colour channel immune to the luminance channel. So much for cross colour, but unfortunately, the immunity is not reciprocal. You can’t suppress an FM carrier, so an FM-SECAM system has dots in parts of the picture where there is no colour, and the dots are not related to the line structure. Consequently, a SECAM signal makes for very poor viewing on a black-and-white TV set (some would say flatly that it is not black-and-white compatible), and there are further problems in processing the signal in the studio.

Studios working in NTSC or PAL can lock all their cameras to a subcarrier reference. PAL studios also need to lock their cameras to a line identification reference, so that they all produce +(R-Y) or -(R-Y) lines at the same time. When this is done, it is possible to cross-fade between different sources almost as easily as if they were monochrome. This is fundamental studio practice, but it can’t be done if the subcarriers are FM. If you mix two FM signals together, you get horrible interference. The obvious solution was to work with a separate baseband chrominance channel (you only need one with SECAM), but the pragmatic solution adopted by many TV companies was to buy PAL equipment, and transcode to SECAM for final distribution. This is not the cop-out that it might seem however, because SECAM signals are very robust in transmission. (Many TV companies, of course, now use digital systems internally.)

Both the PAL and SECAM systems need to transmit a line identification reference, to tell the TV what type of chrominance information is coming next. In the PAL case, this is done by shifting the phase of the subcarrier reference burst. In the SECAM case, this is done by sending a reference burst in the vertical blanking interval (SECAM-V) or in the horizontal blanking interval (SECAM-H). SECAM-V is the older of the two systems, and the signal can carry both types of line ident for transitional compatibility with older sets. The V-ident signal has to go however, if the TV station wants to transmit subtitles or Teletext.

S-Video

The point about all of the colour TV standards is that they were actually conceived as transmission standards. When you add the colour information to the TV signal, it always degrades the quality of the basic monochrome picture, so there is really no need to do it unless you have to send the signal by radio. It took the video equipment manufacturers a while to grasp this point, but when they did, they came up with S-Video. Prior to that, we had to work with composite CVBS (Chroma, Video, Blanking, and Sync), or separate RGB. S-Video (Separated) is just the C and the VBS in separate cables, but otherwise exactly as they would have been in composite form. If you want to use a monochrome video monitor with a colour camera, feed it with the VBS part of the S-Video, rather than composite, and you will get a picture free from subcarrier dots.

VHS Video Recording

If you are exchanging domestic-format videotapes with people in other countries, the platform of interest is almost certainly VHS. The following points are therefore pertinent:

  1. All 525 line NTSC machines use the same recording format.
  2. All 625 line PAL machines use the same recording format.
  3. 525 line and 625 line VHS machines use the same scanning geometry, they just rotate the heads and feed the tape at different speeds; so they can be made to play back alien tapes if the manufacturer decides to include the facility. This has led to the development of special hybrid colour signals (see next section) which can fool a TV into working at the wrong line standard.
  4. Not all SECAM recordings are the same (see below).

Video recorders convert the luminance signal into FM, and record it as diagonal stripes on the tape, one field at a time. The amount of tape which is fed forward as each stripe is written depends on the thickness of the head and the speed at which the drum rotates, which is why 625-line (E) and 525-line (T) cassettes have different lengths of tape for a given time duration. The chrominance is separated off for recording, and is moved to a sub-band below the luminance, at around 650 kHz. PAL and NTSC recorders use a straightforward heterodyning system to shift the chroma down, and shift it back on playback by using a fast VCO (voltage controlled oscillator), which is adjusted by comparing the burst signals against a local 3.58 or 4.43 MHz reference. The VCO system thus gives timebase correction to the chroma, and protects the delicate phase information against the vagaries of a mechanical system (i.e., wow and flutter). There is usually no corresponding timebase correction for the luminance however, and so diagonal recordings always have slightly wobbly edges on any verticals in the picture. This problem can be cured by feeding the video signal through a box called a Timebase Corrector (TBC). Some up-market S-VHS players have a TBC built in.
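
As a toy illustration of the colour-under idea just described (the frequencies are chosen for the demo; real VHS colour-under frequencies differ slightly between formats), here is a Python sketch of mixing the chroma down to a sub-band and back up again:

```python
# "Colour-under" heterodyning in miniature: mix the chroma against a local
# oscillator to shift it down to a sub-band for recording, then mix it back up
# (against a reference-locked oscillator) on playback.  Illustrative only.

import numpy as np

fs = 40_000_000                      # sample rate, Hz
t = np.arange(0, 0.001, 1 / fs)

f_chroma = 4_433_618.75              # PAL chroma subcarrier, Hz
f_under = 650_000                    # the sub-band figure quoted in the text
f_lo = f_chroma + f_under            # local oscillator used both ways

chroma = np.cos(2 * np.pi * f_chroma * t)

def lowpass(x, cutoff):
    """Crude FFT brick-wall low-pass filter; good enough for a demo."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum[freqs > cutoff] = 0
    return np.fft.irfft(spectrum, len(x))

# Record: mix down and keep only the difference frequency (~650 kHz).
colour_under = lowpass(2 * chroma * np.cos(2 * np.pi * f_lo * t), 2_000_000)

# Playback: mix back up and keep the component at the original subcarrier.
recovered = lowpass(2 * colour_under * np.cos(2 * np.pi * f_lo * t), 5_000_000)

peak_bin = np.argmax(np.abs(np.fft.rfft(recovered)))
peak = np.fft.rfftfreq(len(t), 1 / fs)[peak_bin]
print(f"recovered chroma is centred near {peak / 1e6:.3f} MHz")
```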

SECAM can be recorded onto standard VHS in one of two ways. It can either be heterodyned down and back; or since it is FM, it can be treated as a string of pulses, divided by four to get it down to the sub-band, and multiplied by four to get it back. The divide-by-four method is most common. The heterodyne method is called MESECAM (which I think stands for ‘Middle-East’). S-VHS recorders don’t use either of these methods however; they transcode to PAL for recording, and transcode back to SECAM for playback; which means that S-VHS is compatible across all 625-line PAL and SECAM countries (but unfortunately not well established as a domestic VTR format).
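
A minimal Python sketch of the divide-by-four idea (the input frequency here is purely illustrative, not a SECAM specification):

```python
# Treat the FM chroma as a string of pulses and let only every fourth cycle
# through (two divide-by-two stages in effect); multiplying by four on
# playback reverses the process.

import numpy as np

fs = 50_000_000
t = np.arange(0, 0.0002, 1 / fs)
f_in = 4_250_000                                  # an FM chroma carrier, say
pulses = np.sign(np.sin(2 * np.pi * f_in * t))    # the "string of pulses"

# Find the rising edges, and toggle the output level on every second one,
# which halves the frequency twice over, i.e. divides it by four.
rising = np.flatnonzero((pulses[1:] > 0) & (pulses[:-1] <= 0)) + 1
out = np.empty_like(t)
level, last = 1.0, 0
for i, edge in enumerate(rising):
    out[last:edge] = level
    if i % 2 == 1:
        level = -level
    last = edge
out[last:] = level

out_cycles = np.count_nonzero((out[1:] > 0) & (out[:-1] <= 0))
print(f"input  ~{f_in / 1e6:.3f} MHz")
print(f"output ~{out_cycles / t[-1] / 1e6:.3f} MHz  (roughly a quarter)")
```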

Hybrid Playback Standards

NTSC-4.43, PAL-525, and NTSC-625

These are not transmission standards, although they do come out of RF modulators. They are used to enable some VCRs to play back tapes with the wrong line standard. They all exploit the fact that the 625 and 525 line systems have similar line frequencies (15625 vs 15734 Hz). Thus a monitor or TV can usually sync to either, with a small tweak of the vertical hold to make up the difference between 50 and 59.94 Hz. The purpose of the hybrid standard is to get the colour to work as well.
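
The numbers behind the “similar line frequencies” claim:

```python
# Both systems scan at roughly 15.7 thousand lines per second, so the
# horizontal circuits cope with either; only the field rate (50 vs 59.94 Hz)
# needs the tweak of the vertical hold.

line_625 = 625 * 25              # 15 625 Hz  (625 lines, 25 frames/s)
line_525 = 525 * 30 / 1.001      # ~15 734 Hz (525 lines, 29.97 frames/s)
print(f"625-line system: {line_625} Hz line rate, 50 Hz field rate")
print(f"525-line system: {line_525:.1f} Hz line rate, {2 * 30 / 1.001:.2f} Hz field rate")
print(f"line rates differ by only {abs(line_525 - line_625) / line_625 * 100:.1f} %")
```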

NTSC-4.43 appeared in the 1970s, as a way of enabling Sony U-Matic PAL VCRs to play back 525 line NTSC tapes. The reproduction quality is excellent, but the system requires a special type of monitor.

PAL-525 (Mitsubishi and others) involves recoding the NTSC signal as PAL, on a 4.43 MHz subcarrier. This works with almost any 625 line monitor or TV, but the decoder delay line is 0.44 microseconds longer than the actual lines, and this causes decoding errors at colour boundaries in the picture. The results are generally acceptable nonetheless.
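
For the curious, the 0.44 microsecond figure follows directly from the two line periods (a quick Python check):

```python
# The PAL decoder's delay line is (very nearly) one 625-line line period long,
# but a 525-line signal has a slightly shorter line period, hence the mismatch.

period_625 = 1 / 15_625                  # 64.000 microseconds
period_525 = 1.001 / (525 * 30)          # ~63.556 microseconds
print(f"mismatch = {(period_625 - period_525) * 1e6:.2f} microseconds")   # ~0.44
```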

NTSC-625 is a simple matter of unscrambling the PAL signals and re-coding as NTSC-3.58. There are no inherent problems other than that the chroma interleaving is not optimal – which doesn’t matter at all provided that S-Video is used for the link between the VTR and the monitor.

Source: http://www.camerasunderwater.info/engineering/tv_stds/colortv.html
