A Photon Saved is a Photon Earned

Now that we’ve passed the winter solstice and are starting to get some more daylight here in Wisconsin, it seems like a good time to talk about how valuable photons are. And not just big groups of photons like those emitted by lasers or LEDs – I’m talking about the importance of just one or two photons at a time!

Need another reason to care? How about the recent deployment of a SUPER AWESOME new space telescope? The folks working on the James Webb Space Telescope (JWST)*, as they look for tiny amounts of light from objects at mind-bending distances (in both time and space!), will also be treating photons as a very precious commodity. Need more reasons? One of the most powerful ways of interfacing with nanotechnology is by making measurements on literally one nanoparticle (or even one molecule!). That’s a part of my own research where photons are super precious!

Here’s a video of single fluorescent molecules acquired with an electron-multiplying charge-coupled device (EMCCD) camera. (Video from Ng et al. 2016, courtesy of open access)1

So, now that I’ve hopefully convinced you that photons are important, I thought it might be fun to talk a little about how scientists (and also non-scientists) detect them. The short answer is: we use super sensitive cameras. To understand how that works, let’s first define a camera as a two-dimensional array of pixels, which we’ll imagine as a large chessboard-like grid.

Pictured is an image of evenly spaced black squares arranged in a grid format.
A chessboard-style array of pixels representing a camera. This example has a 3-by-6 grid, for a total of 18 pixels. Its job is to detect photons (see next image).

The job of each pixel is to convert any impinging photons into some kind of measurable signal. Now, if all you needed was one pixel, that would be the only job you had to worry about, since everything could be optimized to just make sure you had a really accurate measurement of “yes, there’s a photon” or “no, there’s no photon.” On the other hand, with an array of pixels (i.e., a camera), you also have to worry about getting that signal out of each individual pixel in a way that preserves information about which pixel the photons came from. Otherwise, you wouldn’t be able to form an image.

Pictured is an image of evenly spaced black and white squares arranged in a grid format, with alternating squares labeled "photon" and "no photon".
Our simple “camera” has detected photons on some pixels but not on others, and indicates which ones are which.

Oh, and you also have to do this in a way that doesn’t create a lot of dead space between the pixels, because that will create holes in your image!

Pictured are alternating black and white squares, except one white square in the pattern has been replaced with a black square that says "No photon? Or spacing problem?".
Poor spacing between pixels can make the image hard to interpret.

Sounds like a hard problem, but we know there are some powerful solutions out there, because our phones have multiple cameras on them, and they seem to be doing a good job! For example, the new iPhone 12 has a 12-megapixel camera, which means there are approximately 12 million pixels. If this were a square array, each side would have about 3400 pixels (since 3400 x 3400 is roughly 12 million). Most cell phone cameras are rectangular rather than square, but you get the idea. That’s a lot of pixels to manage!
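If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope calculation in Python (the exact square root is closer to 3464, which is where the “about 3400” comes from):

```python
# Quick check: how many pixels per side would a square 12-megapixel sensor have?
import math

total_pixels = 12_000_000                # "12 megapixels"
side = math.sqrt(total_pixels)           # pixels per side if the array were square
print(f"about {side:.0f} x {side:.0f} pixels")   # about 3464 x 3464, i.e. roughly 3400 per side
```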

Many digital cameras use a type of sensor called a CCD, where CCD stands for charge-coupled device. (Most modern cell phone cameras actually use a close cousin called a CMOS sensor, which we’ll get to in a bit, but the CCD is the best place to start.)

Pictured is a computer chip smaller than a thumb.
A charge-coupled device (CCD) array (image by NASA)

Here, each pixel is a little photodiode. What is a photodiode? Without going too deep into the semiconductor physics, you can think of these photodiodes as little circuits that are missing their mobile “charge carrier” electrons. If a photon hits the photodiode, then wham, you promote a stationary electron into a mobile electron, and suddenly a tiny current flows. You can imagine this as an empty pipe sitting on a slope with a special bucket on top of it: whenever a photon hits the bucket, a drop of water is released, falls onto the slanted pipe, and a current of water flows! In a CCD, that current is used to charge up a little capacitor (essentially a bucket of electrons that in some ways acts like a battery, though the operating principles are different).
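To make that concrete, here’s a tiny toy model in Python of a single pixel. The numbers are made up purely for illustration: photons arrive at random, and each one has some probability (the “quantum efficiency”) of freeing an electron into the pixel’s bucket.

```python
# Toy model of one photodiode pixel (illustrative numbers only):
# photons arrive at random, and each photon has some probability
# ("quantum efficiency") of freeing an electron into the pixel's bucket.
import numpy as np

rng = np.random.default_rng(seed=1)

mean_photons = 5          # average photons hitting this pixel per exposure (made up)
quantum_efficiency = 0.9  # fraction of photons that free an electron (made up)

photons = rng.poisson(mean_photons)                    # photons that actually arrive
electrons = rng.binomial(photons, quantum_efficiency)  # electrons collected in the bucket
print(f"{photons} photons arrived, {electrons} electrons collected")
```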

So how does this photodiode technology work for an array of pixels?2 Turns out, the infrastructure needed to read out each pixel’s capacitor by converting its charge into a voltage can be pretty bulky. Basically, what happens is the charge gets shuffled around the chip, down rows and columns, in what is essentially a massive bucket brigade! Then, at the bottom of the CCD there is a single analog-to-digital converter (ADC; the burning house that all the buckets are headed to, in this analogy). Because the buckets arrive at the ADC in a fixed order, you always know which pixel each signal came from, so all of the spatial information is preserved.

A black and white picture with people passing buckets of water down a line.
Example of a bucket brigade. Imagine each person is a pixel, the buckets contain electrons, and the end of the line is the ADC! (public domain image from PICRYL)
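Here’s a minimal sketch of that bucket brigade in code, assuming a tiny 3-by-6 sensor like our chessboard example (the charge values are made up). The whole trick is that the buckets are emptied in a fixed order, so the single stream of numbers coming out of the ADC can be folded right back into an image.

```python
# Minimal sketch of a CCD-style "bucket brigade" readout: every pixel's charge
# is marched, in a fixed order, past a single ADC at the edge of the chip.
import numpy as np

# Pretend these are the electrons collected on a tiny 3 x 6 sensor (made-up values).
charge = np.array([
    [0, 3, 0, 1, 0, 2],
    [4, 0, 2, 0, 1, 0],
    [0, 1, 0, 5, 0, 0],
])

readout_stream = []
for row in charge:                 # shift one row at a time toward the readout register
    for bucket in row:             # then shift that row, bucket by bucket, into the ADC
        readout_stream.append(int(bucket))   # the ADC digitizes one bucket at a time

# Because the brigade's order is fixed, we can rebuild the image from the stream.
image = np.array(readout_stream).reshape(charge.shape)
print(np.array_equal(image, charge))   # True: spatial information preserved
```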

There’s a problem though: it turns out that the ADC, as it is gobbling up charge at the bottom of the chip, is a messy eater. A large amount of noise, called “read noise,” gets injected into the measurement here. This is a huge problem for very light-poor measurements, because that read noise may be bigger than the tiny charge you want to collect! Very sensitive cameras called electron-multiplying CCDs, or EMCCDs, have a clever way of beating this read noise: they insert another set of buckets into the bucket brigade, after the pixel array but before the ADC, called an electron-multiplying register. These new buckets are really special, because they give you GAIN: if you put a few electrons into one of these buckets, then when you go to empty the bucket out into the next bucket, you’ll find you have more than you put in! Isn’t that nice!? A row of a few hundred of these electron-multiplying buckets, and suddenly the signal at the ADC is a thousand times bigger than it was when it left the array. You’ve just made your signal so much bigger that the read noise doesn’t matter anymore! As you might imagine, this gain also adds its own noise, but for very light-poor subjects (like single molecules or distant stars), this is a very good deal. This is how super sensitive cameras for nanotechnology and visible-light astronomy work.
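Here’s a rough numerical sketch of why that gain matters, using made-up but representative numbers (a few signal electrons, ten electrons of read noise, and a gain of 1000). Real EM gain is itself noisy, as mentioned above, but this captures the main idea: multiply the signal before the noisy ADC, and the read noise becomes negligible.

```python
# Rough sketch: why multiplying the signal *before* the noisy ADC helps.
# All numbers are illustrative, not specs for any real camera.
import numpy as np

rng = np.random.default_rng(seed=2)

signal_electrons = 3     # a very light-poor pixel (say, one faint molecule)
read_noise_e = 10        # read noise added at the ADC, in electrons
em_gain = 1000           # gain of the electron-multiplying register

# Conventional CCD: the tiny signal meets the big read noise head-on.
ccd_reading = signal_electrons + rng.normal(0, read_noise_e)

# EMCCD: multiply the signal first, then add the same read noise.
emccd_reading = signal_electrons * em_gain + rng.normal(0, read_noise_e)

print(f"CCD reading:   {ccd_reading:8.1f} electrons (true signal was {signal_electrons})")
print(f"EMCCD reading: {emccd_reading:8.1f} electrons "
      f"(divide by the gain: {emccd_reading / em_gain:.3f} electrons)")
```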

Pictured is a large yellow device with metallic columns on each side.
Testing work on the JWST before launch (image credit Northrop Grumman)

It turns out that the camera on the JWST is not a CCD camera but a Complementary Metal Oxide Semiconductor (CMOS) camera. Hmmm, that’s not a particularly informative acronym, right? It refers more to how the camera is made than how it works. It turns out, both CCD and CMOS cameras are made using standard ways of manufacturing integrated circuits. So, what’s the difference? Functionally, CMOS cameras either don’t have the bucket brigades that CCD cameras have, or the bucket brigades only go along rows (instead of rows and columns). Remember when we said the readout machinery is bulky? In CMOS cameras it can be made less bulky – enough that you can put one readout circuit on each row (though still not on each pixel) without getting that unwanted dead space.

But what about read noise? It turns out that read noise goes down if you can slow down each readout step. So imagine our 12-megapixel iPhone array with one ADC per row: that’s about 3400 ADCs, each reading its own row in parallel, and each one can be run much more slowly, trading readout time for way less read noise. There’s no way people would want to wait 30 seconds for their iPhone to take a picture, but astronomers using a space telescope have more flexibility! The JWST has a CMOS camera that uses a mercury cadmium telluride (HgCdTe) material that lets it detect infrared light. This is much more expensive than the silicon used for visible-light detectors, but you need this other material to detect infrared photons!
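Here’s a back-of-the-envelope sketch of that bookkeeping. The pixel rate is made up purely for illustration; the point is just that splitting the work across one ADC per row means each ADC only handles its own row, so you can either read a frame much faster or run each ADC much more slowly (and more quietly) for the same frame time.

```python
# Back-of-the-envelope readout-time bookkeeping (the pixel rate is made up).
pixels_per_side = 3400
total_pixels = pixels_per_side ** 2      # roughly 12 megapixels

pixel_rate = 10_000_000                  # pixels per second one ADC can digitize (illustrative)

# One ADC for the whole chip (CCD-style): every pixel is digitized in series.
t_single_adc = total_pixels / pixel_rate

# One ADC per row (CMOS-style): each ADC only handles its own row, all in parallel.
t_per_row_adc = pixels_per_side / pixel_rate

print(f"Single ADC:      {t_single_adc:.2f} s per frame")
print(f"One ADC per row: {t_per_row_adc * 1000:.2f} ms per frame, "
      f"or run each ADC {pixels_per_side}x slower for the same frame time (less read noise)")
```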

CMOS cameras (often called scientific CMOS, or sCMOS) for visible light are also becoming very popular in nanotechnology for single-molecule imaging, and there is much discussion over which type of camera is better for various applications. That’s a healthy debate to have! As scientists use these sensitive cameras to monitor the highly variable behaviors of nanoparticles and molecules (the main actors in nanotechnology), they will encounter many different experimental conditions that might benefit from different approaches. For example, cells can emit a lot of photons of their own when excited with light (called autofluorescence), and different molecular behaviors can happen on different timescales and with different emitted colors. Different imaging technologies may perform better under different circumstances, so new imaging technologies are always in demand!

Pictured is a device with four panels on a metal plate which is connected to two metal cylinders.
The type of infrared CMOS detectors on the JWST (image by NASA and STScI)

Finally, one thing that should not be overlooked in all of this discussion of fancy cameras: human eyeballs are VERY fancy cameras! In fact, they manage to maintain spatial resolution while being sensitive to very small amounts of light – under the right conditions, the human eye can respond to a few dozen photons.3,4 That’s not quite as sensitive as EMCCD cameras, which can detect single photons, but it’s pretty dang close!

And so, I will end with one of my favorite verses from a poem by Kurt Vonnegut in Sun Moon Star:5

“But now, as a human infant,
It was also going to see–
and to do so imperfectly,
through two human eyes,
each a rubbery little camera.”

Kurt Vonnegut in Sun Moon Star

*Footnote: Although it’s not the topic of this blog post, it’s important when we’re talking about the JWST to acknowledge the controversy about what to name this amazing instrument. Thinking about who we choose to honor in science, and why and how those decisions get made, can be an opportunity to learn about complicated legacies and persistent inequities in science.


References

  1. Ng, J. D., et al. Single-molecule investigation of initiation dynamics of an organometallic catalyst. Journal of the American Chemical Society. 2016, 138(11): 3876-3883. DOI: 10.1021/jacs.6b00357.
  2. Moomaw, B. Camera technologies for low light imaging: overview and relative advantages. Methods in Cell Biology. 2013, 114: 243-283. DOI: 10.1016/B978-0-12-407761-4.00011-7.
  3. Hadhazy, Adam. “What are the limits of human vision?” 2015. Retrieved from https://www.bbc.com/future/article/20150727-what-are-the-limits-of-human-vision
  4. Hecht, S., Shlaer, S., and Pirenne, M. H. Energy, Quanta, and Vision. Journal of General Physiology. 1942, 25: 819-840. DOI: 10.1085/jgp.25.6.819.
  5. Vonnegut, Kurt. Sun Moon Star. Harper & Row: New York. 1980.