Charge-Coupled Device (CCD)

History and Explanation of the CCD

An explanation of the history of the CCD: who invented it and when, how it works, and why it is important to the modern world.

Episode #4-22 released on February 16, 2014


The modern world owes much to a small group of very important inventions. Much of the internet would still be boring without pictures, so the invention I will be talking about today is the CCD, or charge-coupled device.

The charge-coupled device is a very important part of the modern world, as it enables many of the advancements we benefit from today. Let's start from the beginning.

Who created the CCD, and when?

The charge-coupled device was invented by Willard Boyle and George E. Smith at AT&T Bell Labs in 1969.

What does the CCD do, and how does it work?

In the CCD used specifically to capture images, there is a photosensitive area, an epitaxial layer of silicon, and a transmission region made up of a shift register (the CCD, properly speaking). An image is projected with the aid of a lens onto the capacitor array in the photosensitive region, causing each individual capacitor to accumulate a charge proportional to the amount of light falling on it.
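
If it helps to see those two steps in code, here is a minimal sketch in Python with NumPy (the light pattern and dimensions are made up for illustration): charge accumulates in a grid of capacitor "buckets", and each row is then shifted out through a serial register, one packet at a time.

```python
import numpy as np

# Each "pixel" is modeled as a charge bucket that fills in proportion
# to the light falling on it during the exposure.
rng = np.random.default_rng(0)
light = rng.random((4, 6))            # hypothetical scene, values 0..1
exposure_time = 10.0                  # arbitrary units
charge = light * exposure_time        # charge accumulated per capacitor

# Readout: each row is shifted into a serial register, and the register
# then shifts its packets out one pixel at a time to the output amplifier.
readout = []
for row in range(charge.shape[0]):            # vertical shift: next row down
    serial_register = charge[row].copy()      # row transfers into the register
    for col in range(serial_register.size):   # horizontal shift: one packet out
        readout.append(serial_register[col])

image = np.array(readout).reshape(charge.shape)
print(image.round(2))
```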

Where is the CCD used, and why is it used in those ways?

CCD arrays come in one-dimensional and two-dimensional forms, and both can be used to capture still images; for photography and film, however, two-dimensional arrays are used.

Scanners can read still pictures one line at a time. Having a one-dimensional array allows the scanner to scan the photograph progressively.

Still cameras and video cameras have to capture the entire scene in a single pass, so they require a two-dimensional CCD array to capture action in motion.
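
To make the one-line-at-a-time idea concrete, here is a small Python/NumPy sketch (the document array and read_line function are hypothetical) that rebuilds a full two-dimensional image by stacking successive reads from a linear sensor as the scan head advances.

```python
import numpy as np

def read_line(document, y):
    """Hypothetical one-dimensional CCD: captures a single row of the document."""
    return document[y, :]

# A made-up "document" that the scan head moves across, one row per step.
document = np.random.default_rng(1).random((300, 200))

# The linear CCD captures one line per step of the scan head; stacking
# the lines reconstructs the full two-dimensional image.
scanned = np.stack([read_line(document, y) for y in range(document.shape[0])])
assert scanned.shape == document.shape
```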

Which is better: higher resolution or a bigger CCD sensor?

The war of pixel density has been raging for years. The more pixels, the larger the image, the more detail is supposedly captured, and the better the image should look. However, this is not the whole story. A bigger sensor, which allows each individual capacitor in the CCD to be bigger, lets the pixels capture more light and record it more accurately. That is why a photo from a much higher-megapixel phone camera may be larger and better looking than one from older camera phones, yet still pale in comparison to the 8-megapixel camera in the iPhone 5S, whose larger pixels gather more light.
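
A bit of back-of-the-envelope arithmetic shows why. The sketch below assumes a made-up sensor size purely for illustration: squeezing more pixels onto the same area leaves each pixel less room, and therefore less light.

```python
# Assumed sensor dimensions (roughly a small phone-camera sensor).
sensor_width_mm, sensor_height_mm = 4.8, 3.6
sensor_area_um2 = (sensor_width_mm * 1000) * (sensor_height_mm * 1000)

for megapixels in (8, 28):
    pixels = megapixels * 1_000_000
    pixel_area_um2 = sensor_area_um2 / pixels   # area available to each pixel
    print(f"{megapixels} MP -> {pixel_area_um2:.2f} square microns per pixel")

# Fewer pixels on the same sensor means each pixel is larger, gathers more
# photons during the exposure, and so delivers a cleaner, more accurate signal.
```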

What is the difference between CCD and 3CCD?

A single CCD captures all the light information on one sensor. While that is good enough for many uses, it is not especially accurate. Modern scanners, for example, have already moved to a 3CCD arrangement to capture the colors in still pictures more accurately, using the 3CCD in a one-dimensional array.

In a 3CCD arrangement, the red, green, and blue light is separated and projected onto three separate CCD sensor arrays, which allows for more accurate color capture. A proprietary algorithm then recombines the image data from the three sensors to recreate a more accurate rendering of the scene, whether a still image or video.
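
As a rough illustration of that recombination step (the channel data here is random, and the merge is only the simplest possible one; a real camera would also align, white-balance and color-correct the channels), here is a Python/NumPy sketch that stacks three separate channel readouts into a single RGB image.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical outputs of the three sensors, one per color channel, as if
# the incoming light had been split onto them.
red, green, blue = (rng.random((120, 160)) for _ in range(3))

# The simplest possible recombination: stack the three channels into one
# RGB image along a new last axis.
rgb = np.dstack([red, green, blue])
print(rgb.shape)   # (120, 160, 3)
```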

Now, why does this all matter?

Without the CCD, we would have no scanners, no video cameras, and no mobile phone pictures; the internet wouldn't have any pictures, desktops wouldn't have backgrounds, YouTube wouldn't exist, and home videos and photography would return to film negatives. Basically, everything we take for granted today wouldn't be around.

Host : Steve Smith | Music : Jonny Lee Hart | Editor : Steve Smith | Producer : Zed Axis Productions

