solutions and language. It brought researchers from academia, industry, and government together into 14 Interdisciplinary Research (IDR) teams to develop creative thought outside the confines of any individual area of expertise.
IDR teams 1A&B grappled with how to integrate brain images from tools like MRI, PET, EEG, and microscopy, each of which operates on different time and length scales. Some can capture signaling molecules that are just a few hundred nanometers across, others map neurons that are tens of micrometers wide, while still others track the electrical impulses coursing through our brains. But there is no framework for combining this grab-bag of techniques to say how signaling molecules relate to gray matter, or how an MRI scan showing shrinkage in an Alzheimer’s disease–riddled brain corresponds to the lower oxygen usage shown on a PET scan. Some members quickly realized that to integrate data from the tiny to the large, you need to perform imaging with many devices at once. They proposed running a panel of imaging tests on animals and humans, then developing models of how those images relate to brain function and to each other.
Teams answering challenge 2 discussed whether it was possible to create overall metrics to evaluate an imaging system’s performance. One team determined that no metric will be useful unless it can account for, and adjust to, the person interpreting an image. They developed the idea of a system tailored to an individual reader’s biases and preferences. They also emphasized that tasks like picking out a tiny tumor in an X-ray rely on key contextual information that isn’t available in the images themselves, and that good metrics need to account for this information. For instance, radiologists use context like the patient’s history and symptoms to home in on the areas to scan.
Researchers in team 3 aimed to detect meaningful changes between two images. Some tasks, like mapping deforestation, rely on grainy satellite images that are often obscured by cloud cover, rain, or snow. Although there are many powerful algorithmic tools available, most researchers develop ad hoc solutions for these tasks and seldom share their approaches with others. One group decided that a web-based tutorial inspired by the much-loved Numerical Recipes textbook could be combined with a grand challenge competition to help standardize the toolsets researchers use in image processing. Another group decided that tracking a sequence of images over time, rather than just two images, would allow them to identify more meaningful trends in the data.
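To make the change-detection task concrete: its simplest form is a per-pixel difference between two co-registered images, thresholded to flag changed regions. The sketch below is illustrative only (the function name, threshold, and synthetic "deforestation" data are assumptions, not from the teams' proposals), and real pipelines must also handle the noise and occlusion issues described above.

```python
import numpy as np

def change_mask(before, after, threshold=0.2):
    """Flag pixels whose absolute intensity change exceeds a threshold.

    before, after: 2-D arrays of intensities in [0, 1], already co-registered.
    Returns a boolean mask marking changed pixels.
    """
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Synthetic example: a cleared patch appears in the second image.
before = np.full((8, 8), 0.8)   # uniform bright canopy
after = before.copy()
after[2:5, 2:5] = 0.3           # a 3x3 patch is cleared

mask = change_mask(before, after)
print(mask.sum())  # 9 changed pixels
```

A sequence-based approach, as the second group suggested, would extend this by fitting a trend to each pixel's intensity over many dates, which is far more robust to a single cloudy or snowy acquisition than any two-image comparison.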