Pixel counts and reality

I wish that the Foveon sensor was available for moving images.

Unlike CFA Bayer-pattern sensors it gives you a full RGB signal at every photosite: if the sensor is 4K × 2K you get a full 4K × 2K in each of the R, G and B layers, whereas a 4K × 2K Bayer-pattern sensor gives you only 2K × 2K of G and 2K × 1K each of R and B.
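To put numbers on that, here's a quick sketch of the real samples per channel for a hypothetical 4096 × 2048 sensor (the resolution is just an illustration, not any particular camera):

```python
# Rough per-channel sample counts for a hypothetical 4096 x 2048 sensor.
width, height = 4096, 2048
photosites = width * height

# Foveon-style stacked sensor: every photosite measures R, G and B.
foveon = {"R": photosites, "G": photosites, "B": photosites}

# Bayer CFA: each photosite measures only ONE channel.
# The pattern is 50% green, 25% red, 25% blue.
bayer = {"R": photosites // 4, "G": photosites // 2, "B": photosites // 4}

for ch in "RGB":
    print(ch, "Foveon:", foveon[ch], "Bayer:", bayer[ch])
```

Everything the Bayer sensor is missing — three quarters of the red and blue, half the green — has to be made up afterwards.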

The marketing games that are played with Bayer pattern sensors are quite amazing.

Yeah yeah, you can calculate the missing information from what is around it, but that's the bloody point! You calculate it; it's not real!
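To make that concrete, here's a minimal sketch of the kind of calculation involved — a simple bilinear-style interpolation of one missing green value, which is an illustration of the principle, not any real camera's demosaic algorithm:

```python
# A tiny patch of raw Bayer data (made-up numbers, purely illustrative).
# In an RGGB layout the centre photosite (row 2, col 2) measured RED;
# its four immediate neighbours measured GREEN.
raw = [
    [10, 20, 12, 22, 11],
    [21, 30, 23, 31, 20],
    [13, 24, 100, 25, 14],  # centre photosite: a measured red value
    [22, 32, 26, 33, 21],
    [11, 23, 12, 24, 10],
]

# No green was ever measured at the centre photosite, so it is
# *calculated* by averaging the four surrounding green photosites.
g_estimate = (raw[1][2] + raw[3][2] + raw[2][1] + raw[2][3]) / 4
print(g_estimate)  # prints 24.5 -- a calculated number, not a measured one
```

Real demosaic algorithms are far cleverer than this, but the principle is the same: two of the three colour values at every photosite are estimates.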

Monitoring on set

When I shot film we had video assist on set and everyone, well nearly everyone, knew it wasn’t going to look like that.

There was a problem when we went from mono video assist to colour: clients would then query the colour of their product. They had never worried about it when we had mono video assist, they trusted the cinematographer, but now we had colour that trust was undermined.

With HD video cameras what you saw was very much what you got, and people started to rely on the monitors; lots of people decided they had the right and the skills to “help” the cinematographer.

I may not have liked this too much but they were commenting about a “real” picture. Probably in less than ideal monitoring conditions on a less than ideal monitor but…

Then we got digital cameras that recorded RAW images, images that needed processing to see what was actually there. The HD video output of these cameras is a guide, but a guide only.

Of course people continued to make decisions based on the output of the camera monitoring systems, something that was now pretty much video assist again.

They would attach waveforms to the output and make exposure and colour decisions based on that and what they saw on a monitor.

This of course ignored the simple fact that what they were monitoring had very little relationship to what they were recording, hey! we’re back to colour video assist with film!

Just try thinking for a second: if a RAW image needs rendering before you can use it in post, and if that render runs slower than real time on a powerful computing system, just WTF makes you think the tiny amount of processing in a camera is going to give you the same result?

Please, engage your brains for just a second.

Close-up close-up close-up

I’ve just watched what would otherwise have been a great documentary about Roy Lichtenstein, but unfortunately the only time we actually saw a full view of any of his images was the 4 or 5 in the end credits.

I don’t blame Anna Boyle, no relation, the Cinematographer, it’s the director who makes the decisions.

I know what a bloody dot looks like, I’d love to have seen how they were used to create a complete image.

Nearly every BBC documentary I see now is ruined by one of two things: either the obsessive use of the close-up with no establishing shots, or the excessive use of footage of the presenter. I want to see the subject, not some nonentity talking about it.

A recent series on Royal palaces could have been wonderful, the information the programmes contained was fascinating, but all we saw were shots of Fiona Bruce: her feet, her ears, her hair, her silhouette, wide shots, close-ups, walking shots, sitting shots. Apparently she was in some kind of palace; you’d never have guessed.