Re: Blurry picture

From: Justin <>
Date: Thu, 17 Nov 2011 08:29:00 -0500
Message-Id: <>
On Nov 17, 2011, at 05:54 , Hársfalvi Levente wrote:

> Hi!,
> On 2011-11-17 00:24, Rhialto wrote:
>> Now it turns out that the resolution was so good, that the colour signal
>> is visible in the luminance. And some bright people have worked on
>> restoring the colour information from that. Unfortunately, the
>> information I could find about it is a few years old and it seems no
>> more recent developments happened. See .
> That's very neat indeed! (I've just read the article. One would think,
> an attempt like that would need a film scanner of very high resolution
> at the first place, in order to oversample the image so that adjacent
> rows and geometry could be restored with as little hassle as possible...
> Seems like they just had to fight with the additional obstacle of not
> having such digitizer, which must have made things much more difficult).
I have some experience with the capture devices used in the film industry for old films, from being around the scanners that Warner Bros used to scan their back archive about ten years ago.  They were more or less capable of resolving individual film grain on each frame.  For recent masters of crap movies they just whizzed them through the machine and would happily reverse to recapture a section that didn't scan well (I think Encino Man was the example I heard about), but for classics (Casablanca) they methodically stepped through them one frame at a time, and only ever ran the film through the scanner in one direction to minimize mechanical load on the film itself.

Each frame was captured into a massive Targa file on a large storage array, then archived using a Petasite tape robot, with tapes moved to offsite cold secure storage periodically.  Individual frames were concatenated and compressed into the target formats - MPEG-2 VBR at the appropriate data rate for DVD, with the bit rate set to fill the capacity depending on whether the movie was good enough to be pressed on dual layer vs. single layer.  By capturing and archiving them uncompressed, they could pull the tapes later and re-run the compressions - for things like a later Blu-ray release - without having to pull the film from the cold storage vaults again.

You can see the benefit of this effort if you look at the very first HD movie releases - Casablanca was an early release on HD-DVD and has incredible detail; you can see tiny creases in Bogart's dinner jacket etc.  I don't imagine this 10+ year old technology is beyond the BBC; in fact I'd bet that they have done exactly this with their entire film archive at this point.
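As a back-of-envelope illustration of fitting the bit rate to the disc (the capacities and the audio/overhead allowance below are my own illustrative assumptions, not WB's actual numbers):

```python
# Rough sketch: pick an average MPEG-2 video bit rate so a movie of a
# given runtime fills a DVD.  Disc capacities in decimal bytes, as
# discs are rated; the fixed audio allowance is an assumption.
DVD5 = 4.7e9   # single layer
DVD9 = 8.5e9   # dual layer

def avg_video_bitrate(runtime_min, capacity_bytes, audio_mbps=0.448):
    """Average video bit rate (Mbit/s) that fills the disc, after
    reserving a fixed audio/subtitle/overhead allowance."""
    seconds = runtime_min * 60
    total_mbps = capacity_bytes * 8 / seconds / 1e6
    return total_mbps - audio_mbps

# A 102-minute film (Casablanca's runtime) on each disc type:
print(round(avg_video_bitrate(102, DVD5), 2))  # about 5.7 Mbit/s
print(round(avg_video_bitrate(102, DVD9), 2))  # about 10.7 Mbit/s
```

(The dual-layer figure would be clipped to the DVD spec's 9.8 Mbit/s video ceiling in practice - another reason VBR encoding, which spends bits where the picture needs them, was the sensible choice.)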

My guess is that his problem is one of access, not technology, and that the project wiki is dead because the BBC is doing this on their own now.  The kind of re-colorization he is attempting could likely be performed in real time on a video stream with modern computing resources, once you figure out the algorithm for decoding color from the artifacts.  This problem was just barely solvable 16 years ago when I was doing real-time edge detection and overlay on NTSC video using a massively parallel supercomputer (a Princeton Engine configured with 1024 processors).  My algorithms from back then can easily run in real time without really stressing a modern processor, so I think this problem is solvable with relatively low compute resources.

I think his single-frame approach is lossy because the change in his chroma artifacts from frame to frame is probably usable data that he is ignoring (i.e. I'd bet the frame-to-frame deltas in the artifacts are noisy but usable data for color tracking purposes).  I would attempt to re-calculate the color from the artifacts by combining his single-frame calculation with what a compressor would see as motion artifacts, to build a probabilistic model of the chroma across frames.  This could be fully automated (and with a very limited AI it could be trained or learned - possibly at lower developer cost than understanding and building the perfect algorithm), or each scene could be manually colorized (or tweaked) at the beginning and the algorithm could then "follow", calculating color changes across the scene as things move around, using the existing algorithmic approach of detecting motion of luminance blocks for compression purposes.  With an "uncompressed" target output, the motion detection in the compression algorithm could be tuned for nothing more than moving the chroma overlay around, generating an uncompressed but colorized output for each frame (having masked the chroma artifacts of course).
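A minimal sketch of the idea - all function names and parameters here are my own illustration, not anything from the project: estimate per-block motion from the luma (as an MPEG-style compressor does), warp the previous frame's chroma estimate along those vectors, and blend it with the noisy single-frame decode:

```python
import numpy as np

def block_motion(prev_luma, cur_luma, block=8, search=4):
    """Exhaustive block matching on luma (float arrays), MPEG-style.
    Returns a per-block (dy, dx) vector pointing at where each block
    of the current frame came from in the previous frame."""
    h, w = cur_luma.shape
    vecs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            cur = cur_luma[y0:y0 + block, x0:x0 + block]
            best_sad, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = y0 + dy, x0 + dx
                    if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                        continue  # candidate window falls off the frame
                    ref = prev_luma[sy:sy + block, sx:sx + block]
                    sad = np.abs(cur - ref).sum()  # sum of absolute differences
                    if sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vecs[by, bx] = best_v
    return vecs

def fuse_chroma(prev_chroma, vecs, single_frame_chroma, alpha=0.3, block=8):
    """Warp the previous frame's chroma estimate along the motion
    vectors, then blend it with the noisy single-frame decode.
    alpha weights the current frame's (noisy) measurement."""
    warped = np.empty_like(prev_chroma)
    for by in range(vecs.shape[0]):
        for bx in range(vecs.shape[1]):
            dy, dx = vecs[by, bx]
            y0, x0 = by * block, bx * block
            warped[y0:y0 + block, x0:x0 + block] = \
                prev_chroma[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block]
    return alpha * single_frame_chroma + (1 - alpha) * warped
```

Run recursively over the frame sequence, this is effectively a crude temporal filter on the chroma; a proper probabilistic model would also weight each block by how good its luma match was, so the single-frame decode dominates wherever the motion estimate is unreliable.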

> People who use C64s with decent CRT displays and either RF or composite
> connection have for long noticed the color artifacts that generally
> appear around sharp luma edges. This is a consequence of above - more
> specifically, the consequence of luma's overlapping to the color
> signal's band at the point of sharp luma transitions (that are then
> decoded as color transitions in the display).
> Also, if someone creates a pattern of black and white stripes on the
> screen, one pixel of width each, he'll see a color gradient on top of
> the stripes on composite displays... As explanation, the pixel clock of
> the PAL C64 is 16/9 the color subcarrier frequency. A series of black
> and white pixels is therefore a square signal whose base frequency is
> half that, in other words 8/9 of the color subcarrier frequency - very
> close, almost equal. This signal will definitely be caught by
> chroma separator in the display, and get displayed as color. As this
> signal'd be constantly shifting in reference to the PAL burst (since it
> is "slower" than that of PAL's nominal frequency), the result is a
> gradient of constantly changing color. From the proportion, we could
> also conclude that the gradient is periodic for 16 pixels.
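The arithmetic in that explanation checks out; a few lines with the nominal PAL subcarrier frequency reproduce the 16-pixel period:

```python
# Check the quoted ratios: C64 PAL dot clock is 16/9 of the colour
# subcarrier, so 1-pixel stripes make a square wave at 8/9 of it.
F_SC = 4433618.75                 # nominal PAL subcarrier, Hz
pixel_clock = F_SC * 16 / 9       # C64 PAL dot clock
stripe_freq = pixel_clock / 2     # base frequency of 1-px b/w stripes

beat = F_SC - stripe_freq         # drift rate against the PAL burst
period_px = pixel_clock / beat    # beat period expressed in pixels

print(stripe_freq / F_SC)         # 8/9 of the subcarrier, as stated
print(period_px)                  # 16.0 -> gradient repeats every 16 px
```

The beat works out to exactly F_SC/9, and one beat period spans (16/9)/(1/9) = 16 pixel clocks - hence the 16-pixel colour gradient.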

On a more Commodore-related track, I suggest anyone who hasn't should try a small LCD TV with a good video processor in it, using split composite (separate luma/chroma).  Even the high-end processors are filtering down into cheaper small sets that can be used as monitors.  If you have them in all-singing, all-dancing mode (i.e. de-interlace, pixel interpolation for scaling, filtering etc.), they can do amazing things to bring an almost surreal crispness to text mode and graphics on the 64.  This is not always a good thing - a little bit of bloom around sprites on an old CRT seems to let the visual cortex imagine in some detail that is not there - but it can be beautiful for some applications.  I have only played with this on NTSC, so I can't say how good the chips are at doing the necessary filtering on PAL, but I imagine they are pretty good.
       Message was sent through the cbm-hackers mailing list
Received on 2011-11-17 14:00:03

Archive generated by hypermail 2.2.0.