This image could hang in a gallery, but it was originally part of a woman’s brain. In 2014, a patient undergoing epilepsy surgery had a tiny piece of her cerebral cortex removed. That cubic millimetre of tissue has allowed researchers at Harvard and Google to produce the most detailed wiring diagram of the human brain ever created.
Biologists and machine-learning experts spent 10 years building an interactive map of the brain tissue, which contains about 57,000 cells and 150 million synapses. It shows cells that wrap around themselves, pairs of cells that appear mirrored, and egg-shaped “objects” that, according to the researchers, defy categorization. The extraordinarily complex diagram is expected to drive scientific research, from understanding human neural circuits to developing potential treatments for disorders.
“If we map things at a very high resolution, see all the connections between different neurons and analyse them on a large scale, we can identify connection rules,” says Daniel Berger, one of the project’s principal investigators and a specialist in connectomics, the science of how individual neurons connect to form functional networks. “From this, we can build models that mechanistically explain how thought works or how memory is stored.”
Jeff Lichtman, a professor of molecular and cellular biology at Harvard, explains that researchers in his lab, led by Alex Shapson-Coe, created the brain map by taking subcellular-resolution images of the tissue using electron microscopy. The 45-year-old woman’s brain tissue was stained with heavy metals, which bind to the lipid membranes of cells. This was done so that the cells would be visible under an electron microscope, since heavy metals scatter electrons.
The tissue was then embedded in resin so it could be cut into very thin slices, just 34 nanometres thick (by comparison, a typical sheet of paper is about 100,000 nanometres thick). This, Berger says, made mapping easier by transforming a 3D problem into a 2D one. The team then took electron microscope images of each 2D slice, which amounted to a whopping 1.4 petabytes of data.
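The figures quoted above give a sense of the scale involved. A back-of-envelope sketch, using only the numbers in the article (1 mm of tissue, 34 nm sections, 1.4 PB total) and decimal petabytes as an assumption, works out roughly how many sections that implies and how much image data each one carries:

```python
# Back-of-envelope estimates from the figures quoted in the article
# (1 mm^3 sample, 34 nm sections, 1.4 PB total). Illustrative only.

SLICE_NM = 34               # section thickness in nanometres
SAMPLE_NM = 1_000_000       # 1 mm expressed in nanometres
TOTAL_BYTES = 1.4e15        # 1.4 petabytes, assuming decimal units

num_slices = SAMPLE_NM / SLICE_NM           # sections needed to cut through 1 mm
bytes_per_slice = TOTAL_BYTES / num_slices  # average data per imaged section

print(f"~{num_slices:,.0f} sections, ~{bytes_per_slice / 1e9:.0f} GB each")
```

On these assumptions, cutting through a full millimetre at 34 nm yields on the order of 29,000 sections, each averaging tens of gigabytes of imagery, which is why the raw dataset reaches petabyte scale.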
Once the Harvard researchers had these images, they did what many of us do when faced with a problem: they turned to Google. A team at the tech giant led by Viren Jain aligned the 2D images using machine-learning algorithms to produce 3D reconstructions with automatic segmentation, in which components within an image (different types of cells, for example) are automatically distinguished and categorized. Some of the segmentation required what Lichtman called “ground truth data,” which involved Berger (who worked closely with the Google team) manually redrawing some of the tissue to better inform the algorithms.
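The segmentation Jain’s team ran uses deep neural networks far beyond a short example, but the core idea, assigning every pixel of an image to a labelled object, can be illustrated with a much simpler classical technique: connected-component labelling on a binary image. This toy sketch (not the project’s actual pipeline) flood-fills each separate foreground region and gives it its own integer label, much as a segmented micrograph gives each cell its own ID:

```python
from collections import deque

def label_components(mask):
    """Toy segmentation: label 4-connected foreground regions.

    `mask` is a list of lists of 0/1 values; returns a grid in which
    every connected region of 1s receives its own label (1, 2, ...).
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1                      # start a new object
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:                      # flood-fill the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

# Two separate "cells" in a tiny binary image:
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
print(label_components(img))  # the two regions get labels 1 and 2
```

The real pipeline replaces the binary mask with learned per-pixel predictions, and the hand-drawn “ground truth data” mentioned above is exactly what such networks are trained against.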
Digital technology, Berger explains, allowed him to see all the cells in this tissue sample and color them differently depending on their size. Traditional methods of imaging neurons, such as staining samples with a chemical known as Golgi stain, which has been used for more than a century, leave some elements of nervous tissue hidden.