
For a long time, imaging was probably the most boring subject imaginable. Unless you were excited about comparing different mass-produced, brand-name lenses, there wasn’t much to talk about. That changed briefly with the invention of the laser, but the actual imaging technology was still…yes, boring.
However, in the last decade, things have really changed, partly due to new ways of thinking about what an image actually is. One of the many fascinating variations on traditional imaging is something called ghost imaging. The idea of ghost imaging was to use the quantum nature of light to image an object by detecting photons that never encountered the object at all. This astonishing idea has now been developed to the point where it can even be practical in some circumstances, especially now that you can get about 1,000 ghost images per second.
Am I seeing ghosts or am I using ghosts to see?
The original idea behind ghost imaging used something called quantum entanglement. Suppose I have a single photon that I split into two photons. Since the universe does not create or destroy things like energy, momentum, or angular momentum, the energies of the two photons must add up to the energy of the original photon.
However, that total energy can be divided between the two photons in any proportion. If this were classical physics, that would be the end of the story: two photons, each with a definite energy, summing to a fixed value. In quantum mechanics, though, we cannot know which photon has which energy. The result is that both photons behave as if they have all possible energies at the same time. The same goes for momentum and angular momentum.
The two are also entangled, which means that if I measure the energy of one photon, I get a single number, and the second photon immediately assumes the corresponding energy. From then on, it behaves like a photon with a single, definite energy. That’s what makes quantum entanglement special.
We can use two photons with such strongly correlated properties to create images. One photon goes straight to a camera, while the other bounces off the object. The photon that bounced off the object can then be recorded by a photodetector. The trick is this: every time the camera picks up a photon (remember, these aren’t the photons hitting the object) at the same moment the photodetector goes off, you keep the camera image. All the other camera frames are discarded. The saved frames add up to a complete image of the object, all built from light that never came near the object.
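If you like to think in code, the bookkeeping here is simple. This is a minimal sketch of the coincidence-gating logic, with random numbers standing in for real detector records (the pixel count, rates, and variable names are all made up for illustration):

    import numpy as np

    # A toy stand-in for the real experiment: each "frame" is one camera
    # photon landing on some pixel, and the photodetector behind the object
    # either fired in coincidence with it or didn't.
    rng = np.random.default_rng(0)
    n_frames = 100_000
    pixel_hits = rng.integers(0, 1024, size=n_frames)   # which pixel saw the photon
    coincident = rng.random(n_frames) < 0.01            # did the photodetector fire too?

    # Keep a frame only when both detectors fired together; discard the rest.
    image = np.zeros(1024)
    for pixel, keep in zip(pixel_hits, coincident):
        if keep:
            image[pixel] += 1    # this photon becomes part of the ghost image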
You might think this is a rather slow process, and you would be right. Imagine our entangled photon source emitting about a million photon pairs per second (that would make an excellent entangled photon source). Of the photons sent to the object, about one percent bounce back (the rest are lost); of that one percent, maybe one in a thousand bounces along a path that takes it to the photodetector. So we get about 10 usable camera frames per second, each one a single photon detected by a single pixel of the camera’s sensor. Since we need roughly one detected photon per pixel, a camera with a million pixels needs about 30 hours of data collection to build up a single image.
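Running those numbers explicitly (all the rates are the rough figures from the paragraph above):

    source_rate = 1e6        # entangled photon pairs per second
    reflectivity = 0.01      # ~1 percent of photons bounce back off the object
    collection = 1e-3        # ~1 in 1,000 reflected photons reaches the detector

    coincidences_per_second = source_rate * reflectivity * collection  # = 10.0

    # Roughly one detected photon per pixel to fill in a megapixel image:
    pixels = 1e6
    hours = pixels / coincidences_per_second / 3600
    print(hours)             # ~28 hours, i.e., the "about 30 hours" above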
That kind of sucks.
What’s in a name?
Later, researchers realized that you didn’t actually need single photons for this kind of imaging. The next idea is a bit abstract, but it is central to the work. Light always comes in something called a mode. For our purposes, a mode simply describes the spatial shape of the light – where the bright and dark spots are. Each image can be described as a sum of modes.
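As a concrete toy example (my choice of basis, not necessarily the modes the researchers used): the rows of a Hadamard matrix form a complete set of black-and-white patterns, and any image can be written as a weighted sum of them.

    import numpy as np
    from scipy.linalg import hadamard

    image = np.arange(16.0)      # a tiny 16-pixel "image", flattened

    H = hadamard(16)             # 16 patterns of +1/-1 values: our modes

    weights = H @ image / 16     # contribution of each mode to the image
    assert np.allclose(H.T @ weights, image)   # summing weighted modes rebuilds it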
What does this mean in practice? Instead of emitting pairs of photons, you can use an intense light source. That light is put into a single spatial mode and split so that it travels along two paths. In one path, the mode is detected directly, so its shape is recorded. In the second path, the light bounces off the object, and a photodetector records how bright the reflected mode is; that measurement needs only a single pixel.
A computer can then use the two signals to determine the contribution of that mode to the image. To build an image, you cycle through as many modes as you want and add up their contributions. Frankly, I don’t think this is really ghost imaging, because you already know the mode (you control the light source, after all), so you don’t need the detector that measures it.
That’s why the researchers removed that detector and called the technique computational ghost imaging. They take their knowledge of the mode sent by the light source and use the intensity measured by the single-pixel photodetector to determine how much that mode contributes to the image.
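Here’s a minimal sketch of that reconstruction, with a simulated object and random binary patterns standing in for the real hardware (the estimator is the standard correlation used in computational ghost imaging; the variable names and numbers are mine):

    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_modes = 64, 4096

    # A made-up one-dimensional object: the reflectivity of each pixel.
    obj = np.zeros(n_pixels)
    obj[20:30] = 1.0

    # Known illumination patterns -- known because we control the light source.
    patterns = rng.integers(0, 2, size=(n_modes, n_pixels)).astype(float)

    # The single-pixel detector only ever sees the total reflected brightness.
    bucket = patterns @ obj

    # Correlating the bucket signal's fluctuations with the known patterns
    # recovers the object, one weighted mode at a time.
    ghost = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / n_modes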
I still don’t think you can call this ghost imaging, no matter how many adjectives you put in front of it. The image is created directly from the photons reflected off the object, plus a calculation based on the spatial mode of the light hitting the sample. But whatever you call it, it’s pretty cool.
Bright flashing lights
The advantage of using modes is that each mode can be very bright. That means you don’t have to wait around for individual photons to trickle back from the object. However, you still have to cycle through many modes, one at a time, to build up the image. That slows things down, but it’s still a huge improvement, reaching speeds of up to around 10 frames per second (fps).
The delay comes from the fact that each mode has to be created individually, which is usually done with something like the micromirror device found in a projector. A projector’s mirror can create about 22,000 modes per second, while a 1,024-pixel image needs about 2,048 modes (two per pixel) to ensure accuracy.
To get to 1,000 fps, the researchers ditched the projector-style mirror and simply used an array of 1,024 LEDs. Each LED can be switched in a few nanoseconds, allowing a potentially much higher frame rate. The LED grid was driven by a custom controller capable of producing 500,000 modes per second, giving the researchers a base frame rate of about 250 fps.
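The frame-rate arithmetic is simple enough to check (numbers from the text; the paper may round differently):

    modes_per_image = 2048            # ~2 patterns per pixel for a 1,024-pixel image

    print(22_000 / modes_per_image)   # ~10.7 fps: the projector-mirror limit
    print(500_000 / modes_per_image)  # ~244 fps: the LED array's ~250 fps base rate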
But once you know something about the object you’re imaging, you can figure out which modes are important and which are not. The researchers implemented this with an evolutionary algorithm: it keeps the modes that were most dominant in the previous image and adds a random sample of other modes, so it quickly converges on a good image. This let them reduce the number of modes for a 1,024-pixel image from 2,048 to 512, pushing the frame rate to an impressive 1,000 fps.
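The paper’s exact algorithm will differ in its details, but the spirit is something like this sketch: carry over the modes that mattered most in the previous frame, then mutate in a random sample of others (the keep fraction, the dominance measure, and the names here are my guesses):

    import numpy as np

    rng = np.random.default_rng(2)
    n_total, n_used = 2048, 512     # full pattern set vs. modes shown per frame

    def pick_modes(prev_weights, keep_frac=0.75):
        # Keep the modes that dominated the previous image ...
        n_keep = int(n_used * keep_frac)
        dominant = np.argsort(np.abs(prev_weights))[-n_keep:]
        # ... and top up with a random sample of the rest (the "mutation").
        rest = np.setdiff1d(np.arange(n_total), dominant)
        fresh = rng.choice(rest, size=n_used - n_keep, replace=False)
        return np.concatenate([dominant, fresh])

    # Example: weights for every mode from a previous full measurement.
    modes_next = pick_modes(rng.normal(size=n_total))   # 512 mode indices
    # At 500,000 modes/second, 512 modes per frame is ~977 fps -- call it 1,000.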
For static images, of course, this is not very impressive. So the researchers also captured moving scenes. There, at 1,000 fps, the camera significantly outperformed the slower frame-rate settings (as expected).
The researchers also made a rather unfair comparison with a normal camera. It’s a bad comparison because the normal camera couldn’t operate at 1,000 fps, and at its normal frame rate (50 fps) it couldn’t use a shutter speed equivalent to 1,000 fps. So the images it obtained were, naturally, well exposed but really blurry.
But that doesn’t change the overall result. Yes, there are cameras with higher frame rates, and there are cameras with higher resolution. But this kind of imaging system can probably be pushed to higher frame rates still. And it is particularly suited to certain types of microscopy that currently have quite low frame rates and would benefit from this sort of technique. So yes, this is the kind of imaging system that will find its place in the pantheon of cameras – even if it’s not really ghost imaging anymore.
Optics Express, 2018. DOI: 10.1364/OE.26.002427