I can see how the chip/algorithm can affect the answer, but I don't think the input specs matter much. Let me try to explain the question better.
Say I am using a set-top DVD recorder to record a letterbox movie from a 4x3 source on any over-the-air TV channel. I hope there is a standard bitrate/resolution; if not, pick any one that is commonly used.
The DVD recorder is producing 720x480 frames. With a full-screen 4x3 movie, the whole frame will have "picture" in it. With a 16x9 letterbox movie, only 3/4 of each frame has "picture" in it. I realize the DVD recorder doesn't know it's letterboxed. As far as it knows, it's a full-screen picture that just happens to contain a black band at the top and bottom of each frame.
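(For what it's worth, the 3/4 figure falls straight out of the aspect-ratio math; here's a minimal sketch of the geometry, with no assumptions about the recorder itself:)

```python
# A 16:9 image letterboxed inside a 4:3 frame fills (4/3) / (16/9) = 3/4
# of the frame height; the rest is black bars.

frame_height = 480
active_fraction = (4 / 3) / (16 / 9)           # 0.75
active_lines = frame_height * active_fraction  # 360 lines of picture
bar_lines = frame_height - active_lines        # 120 lines of black bars
print(f"{active_lines:.0f} picture lines, {bar_lines:.0f} lines of black bars")
```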
The set-top recorder clearly uses a fixed number of bytes (call it x) to store each frame on a DVD. I'm basing that on the observed fact that no matter what I record, it will run out of space on a DVD after a precise number of seconds of video; in other words, it's encoding at a constant bitrate (CBR).
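(As a sanity check on that observation, here's some back-of-the-envelope Python. The 4.7 GB capacity, 6 Mbit/s video rate, and 256 kbit/s audio rate are illustrative assumptions, not readings from any particular recorder:)

```python
# Back-of-the-envelope check of the "fixed seconds per disc" observation,
# assuming a single-layer DVD and a CBR encoding setting.

disc_bytes = 4.7e9          # single-layer DVD capacity (marketing gigabytes)
video_bitrate = 6_000_000   # bits per second; an assumed "SP-like" CBR setting
audio_bitrate = 256_000     # bits per second, e.g. Dolby Digital 2.0

total_bps = video_bitrate + audio_bitrate
seconds = disc_bytes * 8 / total_bps
print(f"{seconds:.0f} s (~{seconds / 60:.0f} minutes per disc)")
# ~6000 s, about 100 minutes: a fixed runtime regardless of content,
# which is exactly the CBR behavior described above.
```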
Since 1/4 of each input frame is large, contiguous black bands, that portion should compress like crazy, leaving more space for the 3/4 of the frame that contains the actual "picture". So the algorithm doesn't have to compress the "picture" part of each input frame as much to meet the output frame size target. Less compression loss should mean a better picture.
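(To make that reallocation idea concrete, here's a toy model of how a CBR encoder could redistribute the bits saved on black macroblocks. The per-frame budget and the assumption that a flat black macroblock costs about 1% of an average one are made-up illustrative numbers, not measurements of any real chipset:)

```python
# Toy model of CBR bit allocation across 16x16 macroblocks, assuming the
# encoder spends next to nothing on flat black blocks and pours the
# savings into the active picture area.

mb_per_row = 720 // 16            # 45 macroblocks across
mb_rows = 480 // 16               # 30 macroblock rows
total_mbs = mb_per_row * mb_rows  # 1350 macroblocks per frame

frame_bits = 6_000_000 // 30      # CBR budget per frame at 6 Mbit/s, ~30 fps

# Full-screen 4:3: every macroblock shares the budget equally.
full_bits_per_mb = frame_bits / total_mbs

# Letterbox: bars cover ~1/4 of the height; assume a flat black macroblock
# costs ~1% of an average one after DCT and quantization (an assumption).
black_mbs = total_mbs // 4
active_mbs = total_mbs - black_mbs
black_cost = black_mbs * full_bits_per_mb * 0.01
letterbox_bits_per_mb = (frame_bits - black_cost) / active_mbs

print(f"full screen: {full_bits_per_mb:.0f} bits/macroblock")
print(f"letterbox:   {letterbox_bits_per_mb:.0f} bits/macroblock "
      f"({letterbox_bits_per_mb / full_bits_per_mb - 1:+.0%})")
# Roughly a third more bits per active macroblock: the "less compression
# loss" predicted above, if the rate control actually behaves this way.
```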
I am positive that a letterbox movie COULD have a better picture if the algorithms were smart enough. Are they? I don't know whether the MPEG-2 spec dictates this or whether different chipsets may handle it differently.
And I see the same situation with B&W input. I am assuming that after decoding the input signal, a B&W frame is essentially one where all three color values are identical for every pixel. As such, it seems it would also compress much more easily, with less compression loss when meeting the output byte-size target. That should result in a better picture than a video where the three color values differ from pixel to pixel. Again, the question is: is this true in real life, or just in theory?
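(The "identical color values" intuition translates cleanly into the YCbCr space that MPEG-2 actually codes in: any gray pixel converts to neutral chroma, so the two chroma planes of a B&W frame are perfectly flat. A small sketch using the standard BT.601 conversion; the gray levels tested are arbitrary:)

```python
# Why B&W input should compress well: with the BT.601 RGB -> YCbCr matrix,
# any pixel with R == G == B lands on the neutral chroma value, so the
# Cb/Cr planes of a B&W frame are constant and cost almost nothing to code.

def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

for gray in (0, 64, 128, 255):
    y, cb, cr = rgb_to_ycbcr(gray, gray, gray)
    print(f"gray={gray:3d} -> Y={y:6.1f}  Cb={cb:5.1f}  Cr={cr:5.1f}")
# Every gray level maps to Cb = Cr = 128.0: the chroma planes carry no
# information, leaving more of the bit budget for luma detail.
```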