New Ways of Seeing at Wistar
How Wistar’s Imaging Core has embraced the computational era to see more — and better
A contemporary of Isaac Newton, Robert Hooke was a pioneer in the field of microscopy. Hooke coined the word “cell” based on his study of cork under the microscope. Like Newton’s, Hooke’s body of work shaped scientific inquiry, and some aspects of his research and findings remain relevant to this day, particularly the notion that understanding the building blocks of life at their microscopic scale will lead to a better understanding of life at the human scale.
Even in the 21st century, biomedical researchers continue to rely on microscopy to observe and collect data. Wistar’s dedicated Imaging Core Facility is stocked with state-of-the-science equipment that can generate images beyond Robert Hooke’s wildest dreams, like cancer cells so clear that you can see exactly how their mitochondria are broken. The more a scientist can see, the better able they are to test and confirm their hypotheses.
Computer technology now plays an increasingly valuable role in helping researchers see what they previously could not.
James Hayden, RBP, FBCA, managing director of Wistar’s Imaging Core Facility, has watched the field move from “nothing but analog” to a new era of digitally enhanced microscopy. With the help of skilled instructors, he presided over a special workshop designed to get people thinking about microscopes in computational, rather than strictly photographic, terms: “They’re here to help teach us how to think computationally. We’re in a new paradigm.”
The workshop covered the how and why of using computers to adjust for something called the “point spread function” of light, which refers to the fundamental uncertainty inherent in where light “is,” spatially: because light travels as a wave, it diffracts and interferes with itself as it passes through a microscope’s optics, smearing even a perfect point of light into a small blurred region. If two signals fall within that region together, they blur into one fuzzy shape.
In the analog days of microscopy, cranking up light intensity or increasing the duration or frequency of a sample’s exposure to light might resolve blurriness on the margins. But those methods are not only unreliable at refining resolution; they can also degrade data quality and destroy precious biological samples. Many cells are susceptible to phototoxicity and will die if exposed to light for too long or at too high an intensity.
However, with computers, scientists can now use the point spread function against itself. A microscope concentrates light to create a magnified image, and the light does interfere with itself along the way; but the shape of that interference also depends on known properties of the instrument, such as lens curvature and aperture size. This means the blurred region created by the point spread function looks different on every microscope, and that, in turn, means computer algorithms can systematically adjust for those differences.
Imagine you have three small dots, all very close to each other, that fall within the resolution limit of a microscope’s point spread function. If you put those three small dots under a microscope, you won’t see the dots individually because they’ll all be blurred together into one indistinct shape; the image is said to be “convolved” with the point spread function.
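This blurring can be modeled mathematically as a convolution of the true signal with the microscope’s point spread function. A minimal sketch in Python using NumPy, with a toy 1-D signal and an assumed Gaussian PSF (real PSFs depend on the actual optics):

```python
import numpy as np

# Toy 1-D "image": three point sources spaced closer together than
# the width of the point spread function (PSF).
signal = np.zeros(64)
signal[[28, 31, 34]] = 1.0

# Model the PSF as a normalized Gaussian. This is an illustrative
# assumption; a real microscope's PSF depends on its optics.
x = np.arange(-10, 11)
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()

# Convolving the signal with the PSF blurs the three separate dots
# into a single indistinct hump.
blurred = np.convolve(signal, psf, mode="same")

# Count strict local maxima: the three dots have merged into one peak.
interior = blurred[1:-1]
peaks = np.sum((interior > blurred[:-2]) & (interior > blurred[2:]))
print(peaks)  # → 1 (one merged peak instead of three)
```

The three sources sit 3 pixels apart while the PSF is about 3 pixels wide, so their blurred images overlap completely and the convolved result shows only one peak.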
This new technology, through a specially designed algorithm, works backwards to disentangle the image’s blended signals. When you tell the algorithm which microscope you’re using and with what settings, it can model that instrument’s point spread function and computationally remove whatever blur it can account for with certainty.
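The article doesn’t name the specific algorithm used, but one widely used family of deconvolution methods is Richardson-Lucy, which repeatedly blurs its current guess with the known PSF, compares the result against the observed image, and corrects the guess. A hedged sketch in Python (1-D, NumPy only; the Gaussian PSF and iteration count are illustrative assumptions):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """Iteratively estimate the unblurred signal from the observed
    image and the microscope's known PSF (Richardson-Lucy scheme)."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, 0.5)  # flat initial guess
    for _ in range(iterations):
        # Forward model: what would the current estimate look like blurred?
        predicted = np.convolve(estimate, psf, mode="same")
        # Correct the estimate by how far the prediction misses the data.
        ratio = blurred / np.maximum(predicted, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# A single point source, blurred by an assumed Gaussian PSF...
x = np.arange(-10, 11)
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()
truth = np.zeros(64)
truth[31] = 1.0
observed = np.convolve(truth, psf, mode="same")

# ...is pulled back toward a sharp point by deconvolution.
restored = richardson_lucy(observed, psf)
print(restored.max() > observed.max())  # → True: the restored peak is sharper
```

Production tools work in 2-D or 3-D and handle noise more carefully, but the core loop is the same: predict, compare, correct, repeat.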
Ever since the earliest days of microscopy, scientists have been trying to see more of the microscopic world in order to better understand how it works. As Jamie Hayden says, “The data is there, in images. It’s just a question of pulling it out of the background noise.” At the end of the workshop, Jamie compared the process of deconvolution to sifting away the silt when you pan for gold.
“By thinking computationally rather than like photographers, we can get rid of some of that noise and see what’s underneath. We’ll get clear, beautiful images, but we can also get even higher quality data — and that’s really what we’re after.”