Digital Imaging - Pixels and Sensors
Our best attempt at an easy way for you all to truly understand digital resolution...
One of the questions we hear most often from customers buying their first digital camera, new or used, is “How many megapixels?”, usually followed by “Is that a lot?”, “What does that mean?”, or “How big a print can I make?” The basic gist of that second set of questions is really “How many is enough?” (1 megapixel = 1 million pixels.) The pixel count of an image is usually referred to as its “resolution.”
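The megapixel arithmetic is nothing more than width times height. A minimal sketch (the function name `megapixels` is just for illustration):

```python
def megapixels(width_px, height_px):
    # "Megapixels" is simply the total pixel count, in millions.
    return width_px * height_px / 1_000_000

print(megapixels(6000, 4000))  # 24.0 — a 24-megapixel image
print(megapixels(800, 600))    # 0.48 — under half a megapixel
```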
Since the term “pixel” comes up in any discussion of digital imaging (cameras, printing, scanning, etc.), a quick primer is in order. A pixel (PICture ELement) is the basic building block of any digital image. All digital images are made up of pixels: cameras capture images on their sensors as pixels; printers string pixels together with ink to form an image. The logical equivalent of a pixel in an image is an individual letter on a page of text: string letters together and you can make words, sentences, and paragraphs. The sensor in a digital camera captures the light image cast on it by the lens and converts that light into pixels of information, each pixel representing color and brightness. The computer chip in the camera then combines the pixel information and writes a file onto the media card. In practice, there may or may not be a 1-to-1 correspondence between a pixel of information and a light-sensitive site on the sensor; some sensors combine multiple sensing sites to produce one pixel of information. On the output side (printers or monitors), a few drops of ink or a cluster of tiny LED lights may be combined to display one pixel of information. So, regardless of the technology involved, pixels are the basic building blocks of all digital images, from point of capture to output, and once captured, they can be manipulated, modified, and deleted. But keep in mind that information, once captured, can only be destroyed, not created: you cannot create more information for the collection of pixels you captured, but you can throw some away (aka “cropping”).
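The “you can only throw pixels away” point can be sketched in a few lines of code. This is an illustration with made-up dimensions, treating an image as a plain grid of RGB values:

```python
# A toy 600 x 400 "image": a grid of (red, green, blue) values.
width, height = 600, 400
image = [[(x % 256, y % 256, 0) for x in range(width)] for y in range(height)]

def crop(img, left, top, right, bottom):
    # Cropping keeps a sub-rectangle; the discarded pixels are gone for good.
    return [row[left:right] for row in img[top:bottom]]

cropped = crop(image, 100, 50, 500, 350)
print(len(cropped[0]), len(cropped))  # 400 300 — fewer pixels than we started with
```

Nothing in `crop` can manufacture new pixels; every operation only selects from what was recorded, which is exactly why starting with more pixels gives you more cropping room.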
With this background knowledge, the obvious answer to the question “how many pixels is enough” is a resounding “as many as possible.” But in practice, how many do we really need?
The answer depends on what you want to do with your images, because of the practical limits of the technology used to present them. The older CRT (tube) monitors could display about 70 pixels per inch (ppi), whereas typical modern flat-panel (LED/LCD) displays are around 90 ppi. If you have an Apple product, you are spoiled: their displays run well above 120 ppi, with Retina iMac screens in the 200+ ppi range. More important, though, is that when images are viewed on a screen, no one expects to see a huge picture, and we all view our screens from further away than we view printed material, so the pixel count required for displaying images is far less demanding than for printed output. The rule-of-thumb (“back of the envelope”) estimate is 100 ppi for images on screen. So, if you display an 800 x 600 pixel picture on a monitor, it will show up as approximately an 8” by 6” picture when viewed at 100%. If you view the same image on a 200 ppi screen, the 800 x 600 pixel image will be 4” by 3”. In reality, we seldom view images on screen at 100%, and when we don’t, the computer will “squish” the image for us, using various algorithms to throw away unwanted pixels and produce a good-looking, smooth image. Then what is the advantage of capturing an image of 6000 x 4000 pixels (24 megapixels)? Having more information means that you can crop into the picture without dropping below the threshold of an acceptable size for viewing. Think of cropping as throwing away recorded information: the more you crop into your image to highlight or compose your picture, the more information you are throwing away. So, if you start out with more information, you can afford to throw more away.
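The on-screen size arithmetic above is just pixels divided by pixels per inch. A quick sketch (function name ours, ppi values taken from the rule of thumb above):

```python
def displayed_inches(width_px, height_px, ppi):
    # At 100% zoom, one image pixel maps to one screen pixel,
    # so physical size is pixel dimensions divided by the screen's ppi.
    return (width_px / ppi, height_px / ppi)

print(displayed_inches(800, 600, 100))  # (8.0, 6.0) — roughly 8" x 6"
print(displayed_inches(800, 600, 200))  # (4.0, 3.0) — half the size on a denser screen
```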
Now we get to the art of photography and the importance of composition. If you compose your picture properly, you don’t have to crop as much, so you are not throwing away the information you captured. If you are sloppy in your camera technique, you will have to “compose” in post-production, aka “crop the heck out of the image to make it look good!” That’s why you should read up on the art of photography and on composition, and invest in the appropriate lenses to get the most out of your photography: compose correctly in the camera and you won’t have to throw so many pixels away in post-production.
One very important distinction to note when discussing resolution is that PPI (Pixels Per Inch) is NOT the same as DPI (Dots Per Inch), even though these terms are often, and wrongly, used interchangeably. A pixel is the smallest logical unit of an image. A dot refers to the actual drop of ink a printer can lay down or the smallest light source a monitor can display. Epson’s newer photo printers, for example, boast the ability to print at 2880 dpi: they can put out such fine drops of ink, and move the paper and print head in such minute steps, that they can lay down 2,880 drops of ink per inch of paper. That’s not pixels; it means the printer uses multiple drops of ink to construct one pixel of the image. The best Epson photo inkjet printers print at only 360 ppi. The same goes for display monitors: there are millions of LED/LCD dots on your monitor screen, and a cluster of them lights up to display one pixel of your image. A historical note is in order here: the original HP LaserJet, one of the very first desktop laser printers, did use exactly one dot of toner per pixel, so for that printer PPI was the same as DPI, and thus the origin of the confusion. Today, of all the digital imaging devices, only scanners have a 1-to-1 DPI-to-PPI equivalence: if you scan your images on a scanner, the scanner’s DPI setting gives you the PPI resolution of the result.
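The PPI figure, not the DPI figure, is what limits print size. A back-of-the-envelope sketch, assuming the 360 ppi figure mentioned above (the function name is ours):

```python
def max_print_inches(width_px, height_px, printer_ppi=360):
    # Largest print the pixel count supports at the printer's native ppi,
    # before the driver has to interpolate (invent) pixels.
    return (width_px / printer_ppi, height_px / printer_ppi)

w, h = max_print_inches(6000, 4000)
print(round(w, 1), round(h, 1))  # 16.7 11.1 — a 24-megapixel image at 360 ppi
```

Note that the printer may still spray many 2880-dpi ink droplets per pixel; the droplet count changes the smoothness of the output, not the size of the image.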
Another digital camera specification we frequently have to explain to our customers is sensor size. The most popular “standard” sizes are “Full-Frame,” “Crop-Frame” (also known as APS-C), and Four Thirds (4/3). Many other cameras, especially the Point & Shoot class, have their own smaller sensors. The Full-Frame sensor (Nikon calls these FX) is the same size as the 35mm film frame: 36mm wide by 24mm high. The Crop-Frame or APS-C sensor (Nikon calls these DX) is about two-thirds the linear size of Full-Frame, roughly 24mm wide by 16mm high, with slight variations among brands. Four Thirds sensors are about half the linear size of Full-Frame, roughly 17.3mm wide by 13mm high, again with slight variations among brands.
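These size relationships are usually expressed as a “crop factor”: the ratio of the full-frame diagonal to the smaller sensor’s diagonal. A sketch using commonly quoted dimensions (exact figures vary slightly by brand):

```python
import math

def crop_factor(width_mm, height_mm):
    # Ratio of the full-frame diagonal (~43.3 mm) to this sensor's diagonal.
    full_frame_diag = math.hypot(36, 24)
    return full_frame_diag / math.hypot(width_mm, height_mm)

print(round(crop_factor(36, 24), 2))      # 1.0  — full-frame
print(round(crop_factor(23.6, 15.7), 2))  # 1.53 — the familiar ~1.5x APS-C crop
print(round(crop_factor(17.3, 13), 2))    # 2.0  — Four Thirds
```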
So, what is the effect of imaging with different sensor sizes? The image sensor in a digital camera consists of a large array (millions) of light-sensitive sites that collect light. Thanks to Albert (Einstein, that is), we can think of light as particles called photons. Photons hitting the sensor cause electrical impulses to be generated, and by recording and organizing these impulses, your camera’s computer assembles the image for you. The array of light-sensing sites on the sensor is organized into clusters, and the number of clusters in your camera determines the total number of pixels your camera can create for an image. Obviously, a larger sensor can hold more sensing sites than a smaller one. The sensor array is really the eyeball of your camera, and it is the most expensive and most challenging part of the camera to produce. In the early days of digital cameras, it was very expensive and a major engineering challenge to produce large sensor arrays at a commercially viable price, so early digital cameras had small sensor arrays. As the technology progressed and costs came down, producing full 36mm x 24mm (35mm film size) sensor arrays became economically viable, and the race for more pixels began.
The race for more and more pixels came to the attention of photographers when the Sony A7 mirrorless camera with 24 megapixels (mega = million) was followed by the A7R with 36 megapixels, and then by the A7S with only 12 megapixels. What’s going on here? Well, obviously quantity is not everything!
All else being equal, Sony can use larger sensing sites to pack 12 megapixels onto the same surface area that would otherwise hold 36 megapixels. The advantage of larger sensing sites is higher sensitivity, and here’s why. Again, thanks to Albert, who figured out that the amount of electrical current generated is proportional to the number of photons hitting the sensing site, collecting more photons translates into a higher-level electrical signal. (He was awarded the Nobel Prize for formulating this “photoelectric effect,” not for his more famous Theory of Relativity!) The analogy is catching rain with a 12” bucket versus a 5” coffee can: leave them both out in the rain for the same amount of time, and the 12” bucket will have collected more water than the 5” coffee can. Why is this important? Because the stronger the signal a sensor generates, the less the circuitry needs to amplify it, and the less amplification, the less “noise.” Electrical noise translates into graininess. It’s like the volume control on your radio or hi-fi: turn up the volume and the amplifier amplifies both the signal and the noise; you notice the noisy “hiss” only when you crank up the volume. A stronger initial signal means less amplification, and thus less noise. When you crank up your ISO setting, you are turning up that “volume” control. The A7S, because of its large individual sensing sites, can famously shoot at very high ISO, up to 409,600, with minimal noise, because it starts out with a strong signal. The bottom line is that the larger the sensing sites, the more light each can capture, which gives the camera better performance in low-light situations when it is set to a high ISO. There are also advantages associated with field of view and depth of field, which we will cover in subsequent articles.
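You can see the bucket-versus-coffee-can effect in the numbers. A rough sketch comparing the size of one sensing site (“pixel pitch”) on two full-frame sensors; the pixel counts below are approximate figures for 12 MP and 36 MP full-frame cameras, used purely for illustration:

```python
def pixel_pitch_microns(sensor_width_mm, horizontal_pixels):
    # Approximate width of one sensing site, assuming square pixels
    # packed edge to edge across the 36 mm width of a full-frame sensor.
    return sensor_width_mm * 1000 / horizontal_pixels

print(round(pixel_pitch_microns(36, 4240), 1))  # 8.5 — microns, ~12 MP full-frame
print(round(pixel_pitch_microns(36, 7360), 1))  # 4.9 — microns, ~36 MP full-frame
```

The 12-megapixel site is not just a little bigger: being wider in both dimensions, each site has roughly three times the light-gathering area, hence the stronger starting signal and lower noise at high ISO.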
Written by Yau-Man Chan