Sat. Jan 28th, 2023

<b>NES Launch: <em>Donkey Kong</em> (1983)</b> This port of the original arcade game, one of three titles launched alongside the Famicom in 1983, was far from a perfect recreation, but it still represented a quantum leap over the competing versions on Atari and Coleco systems.

In 2014, the US announced a new attempt to understand the brain. Soon we would be mapping every single connection in the brain, tracking the activity of individual neurons, and starting to piece together some of the fundamental units of biological cognition. The program was named BRAIN (for Brain Research through Advancing Innovative Neurotechnologies), and it stated that we were about to achieve these breakthroughs because both imaging and analysis hardware were finally powerful enough to produce the data we needed, and we had the software and processing power to understand it.

But this week, PLoS Computational Biology has published a warning suggesting we may be getting ahead of ourselves. Part experiment, part polemic, the paper describes what happened when a computer scientist and a biologist applied the latest neurobiological approaches to a system we understand far better than the brain: a processor that runs the games Donkey Kong and Space Invaders. The results were about as awkward as you’d expect, and they helped the researchers make their bigger point: we may not understand the brain well enough to understand the brain.

On the surface, this might sound a bit ridiculous. But it touches on something fundamental to the nature of science. Science is based on having models with which predictions can be made. You can test those models and use the results to refine them. And you have to understand a system at least on some level to build those models in the first place.

To give an example, imagine trying to figure out the strange, quantum behavior of an electron if we hadn’t already done a detailed characterization of electrons, understood wave mechanics, and debated for centuries over whether light is a wave or a particle. Basic facts and an intellectual framework must be in place before you can start building models and use them to tell you what other data you need. Are we at that point with the brain? If we could map every functional unit and connection in the brain and track their activity, would we have the tools to understand what we’ve discovered?

That’s where Donkey Kong comes in.

Games on early Atari systems were powered by the 6502 processor, which was also found in the Apple I and Commodore 64. The two authors of the new paper (Eric Jonas and Konrad Paul Kording) decided to apply current neuroscientific techniques to this relatively simple processor, tracking its activity while it ran these games. The 6502 is a good test case because we can understand everything about the processor and use that knowledge to see how well the results match. And, as they put it, “most scientists have at least some behavioral level experience with these classic video game systems.”

So they built on the work of the Visual 6502 project, which got their hands on a batch of 6502s, decapped them, and mapped the circuits inside. This allowed the project to build an exact software simulator with which to test neuroscientific techniques. But it also enabled the researchers to conduct a test in the field of “connectomics,” which attempts to understand the brain by mapping all the connections of the cells within it.
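As a toy illustration of what connectomics attempts (not the project’s actual pipeline), the sketch below treats a hypothetical wiring list as an undirected graph and recovers structural “modules” as connected components. The transistor names and edge list are invented:

```python
from collections import defaultdict, deque

def connected_modules(edges):
    """Group nodes of an undirected wiring graph into connected components.

    A crude stand-in for one clustering step in connectomics: given only
    the list of connections, try to recover structural modules.
    """
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, modules = set(), []
    for start in graph:
        if start in seen:
            continue
        queue, module = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            module.add(node)
            queue.extend(graph[node] - seen)
        modules.append(module)
    return modules

# Toy netlist: two clusters of "transistors" with no wire between them.
edges = [("t1", "t2"), ("t2", "t3"), ("t4", "t5")]
print(connected_modules(edges))  # → [{'t1', 't2', 't3'}, {'t4', 't5'}]
```

As the authors found, steps like this reveal structure but say nothing by themselves about what the circuit computes.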

To some extent, the fact that their simulator worked confirms the approach. But at the same time, the chip is incredibly simple: there is only one type of transistor, unlike the brain’s countless specialized cell types. And the algorithms used to analyze the connections only got the team so far; a lot of human intervention was also required. “Even with the whole-brain connectome,” Jonas and Kording conclude, “it is incredibly difficult to extract hierarchical organization and understand the nature of the underlying computation.”

They then used the simulator to try out different approaches that have been used in neurobiology. The first is called a lesion analysis, where they turned off individual transistors and watched what happened. While this was great for identifying which transistors were essential for which game, it told them little about how the processor actually worked. In fact, the results were largely artifacts: while they were able to identify transistors that were essential to one game or the other, “a particular transistor is clearly not specialized for Donkey Kong or Space Invaders.”
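The logic of a lesion study can be sketched in a few lines. The `run_game` hook and the transistor names below are invented stand-ins for the authors’ simulator, not their actual code:

```python
def lesion_study(run_game, transistors):
    """Switch off one element at a time and record which ones are essential.

    `run_game(disabled)` is a hypothetical simulator hook that returns True
    if the game still runs with the given set of disabled transistors.
    """
    essential = []
    for t in transistors:
        if not run_game({t}):
            essential.append(t)
    return essential

# Toy stand-in for the simulator: the "game" fails without t2 or t7.
def fake_run_game(disabled):
    return not ({"t2", "t7"} & disabled)

print(lesion_study(fake_run_game, ["t1", "t2", "t3", "t7"]))  # → ['t2', 't7']
```

Note what the result gives you: a list of elements the behavior depends on, which is not the same as an explanation of what those elements do.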

In other words, at least when applied to processors, the approach produced results that depended almost entirely on a particular game’s implementation.

They then turned to spike analysis. Instead of switching between on and off states, neurons transfer information through trains of activity pulses, or spikes. The authors treated each transistor’s on-off transitions as spikes and subjected them to the same kind of analysis we would apply to neurons. In testing, they were able to find correlations between the spiking of some transistors and the brightness of the most recently drawn pixel. But without a detailed understanding of the software, guessing what that correlation might mean was impossible.
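The correlation step can be sketched as below. The spike train and brightness values are made-up data, not measurements from the paper:

```python
def correlate(spikes, signal):
    """Pearson correlation between a binary spike train and an analog signal."""
    n = len(spikes)
    mx = sum(spikes) / n
    my = sum(signal) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(spikes, signal))
    sx = sum((x - mx) ** 2 for x in spikes) ** 0.5
    sy = sum((y - my) ** 2 for y in signal) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

# A "transistor" whose transitions happen to track pixel brightness
# correlates strongly -- without that telling us why.
spikes = [0, 1, 0, 1, 1, 0, 1, 0]
brightness = [0.1, 0.9, 0.2, 0.8, 0.7, 0.1, 0.9, 0.3]
print(round(correlate(spikes, brightness), 2))
```

A high correlation here is exactly the kind of “interesting-looking” result the authors warn about: it identifies a statistical relationship without explaining the computation behind it.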

(Frankly, this test wasn’t all that convincing. There really aren’t many parallels between a transistor switching state and an individual neural spike, so you wouldn’t expect the analysis to tell you anything in the first place.)

The team then analyzed activity in larger regions of the processor and showed that the averaged activities of these regions yielded data similar to what is collected in functional MRI scans of the brain. But again, much of this was just an artifact of the software implementation rather than telling us anything about the flow of information within the processor. They were also able to detect synchronized activity across regions – exactly what you’d expect in a clock-driven processor. We see similar synchronized oscillations in the brain, where we’re not sure whether they’re central to its activity or just a by-product of whatever processes neurons use.
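A crude version of this regional averaging (loosely analogous to an fMRI voxel signal) might look like the sketch below. The element names, region assignments, and activity traces are invented:

```python
def regional_average(activity, regions):
    """Average per-timestep activity over groups of elements.

    `activity` maps element name -> list of 0/1 states per timestep;
    `regions` maps region name -> list of member elements.
    A rough analog of coarse-grained signals like fMRI voxels.
    """
    steps = len(next(iter(activity.values())))
    return {
        name: [sum(activity[m][t] for m in members) / len(members)
               for t in range(steps)]
        for name, members in regions.items()
    }

# Two toy regions driven by the same clock show lock-stepped averages --
# synchrony that reflects the clock, not any information flow between them.
activity = {
    "a1": [1, 0, 1, 0], "a2": [1, 0, 1, 0],
    "b1": [1, 0, 1, 0], "b2": [0, 0, 1, 0],
}
avg = regional_average(activity, {"A": ["a1", "a2"], "B": ["b1", "b2"]})
print(avg)
```

The averaged traces rise and fall together, but as the article notes, in a clocked circuit that synchrony is guaranteed by design rather than evidence of communication.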

Overall, the authors generally felt that neurobiological approaches yielded data that looked interesting, but actually told them nothing. “We found that the standard data analysis techniques yield results that are surprisingly similar to those found on real brains,” they conclude. “In the case of the processor, however, we know its function and structure, and our results fell far short of what we would call a satisfactory understanding.”

On some level, this is all trivial. Brains and computers are different, so you wouldn’t expect the tools designed to understand one to work when applied to the other.

But it also shows how much work we have to do to make our models more advanced. We can fully understand transistors, processors and software because we created them. And even then, understanding what’s going on in a processor when a simple game is loaded is hard work. In contrast, there are gaps in our understanding at every level of neurobiology, from how individual neurons function, through how small groups of neurons interact, and all the way to how information flows within the brain.

Given that situation, the authors argue, it’s not clear whether all the data pouring in from the BRAIN project will help us as much as we’d like.

PLoS Computational Biology, 2016. DOI: 10.1371/journal.pcbi.1005268 (About DOIs)

Correction: noted the role of the Visual 6502 project.

By akfire1
