Understanding ‘Understanding’: Comments on “Could a neuroscientist understand a microprocessor?”

Eric Wong • February 3, 2017

The 6502 processor evaluated in the paper. Image from the Visual6502 project.

In a very revealing paper, “Could a neuroscientist understand a microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see whether they were useful in helping to understand the workings of a basic microprocessor. The paper has already stirred quite a response, including from Numenta, The Spike, Ars Technica, The Atlantic, and plenty of chatter on Twitter.

This is a fascinating paper. To a large degree, the answer to the title question as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’, but perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor.
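To make concrete what one of these methods looks like when aimed at a known system, here is a toy sketch (my own, not from the paper) of a “lesion study” on a 1-bit full adder: knock out each gate in turn and test whether the circuit still adds. All names and the circuit itself are hypothetical, chosen only for illustration.

```python
# Toy "lesion study" on a 1-bit full adder (hypothetical example, not from the paper).
# Each gate is knocked out in turn (forced to output 0) and we test whether the
# circuit still computes addition over all inputs.
from itertools import product

GATES = ["xor1", "xor2", "and1", "and2", "or1"]

def full_adder(a, b, cin, lesioned=None):
    """1-bit full adder; the gate named in `lesioned` always outputs 0."""
    def gate(name, value):
        return 0 if name == lesioned else value
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)      # sum bit
    a1 = gate("and1", a & b)
    a2 = gate("and2", x1 & cin)
    cout = gate("or1", a1 | a2)     # carry bit
    return s, cout

def behaves_correctly(lesioned=None):
    """Does the (possibly lesioned) circuit add correctly on all 8 inputs?"""
    return all(
        full_adder(a, b, c, lesioned) == ((a + b + c) & 1, (a + b + c) >> 1)
        for a, b, c in product((0, 1), repeat=3)
    )

# Every gate turns out to be "necessary for addition":
necessary = [g for g in GATES if not behaves_correctly(g)]
print(necessary)  # ['xor1', 'xor2', 'and1', 'and2', 'or1']
```

The punchline mirrors the paper's: the lesion map tells you that every gate is “necessary for addition,” but it says nothing about how the circuit adds. That is the gap between localizing function and understanding an algorithm.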

Indeed, the authors devoted more than a page to trying to define this question before launching into their results. Unfortunately, they do not propose any specific definition of understanding, but instead state that the data should “guide us towards” a known descriptive understanding of the workings of the microprocessor, such as the storage of information in registers, the decoding and execution of instructions, etc. It is useful to realize that even in this case, where we already know the answer, it is not easy to articulate a clear definition of understanding.

Some of my initial thoughts about defining ‘understanding’ in the context of brain science are outlined in my first post for this blog: “What does it mean to understand the brain?”. Here is a little more structure that might be useful for the discussion.

Two possible approaches to defining and articulating goals related to understanding the brain.

From the perspective of the end users of the understanding, one way to categorize our goals is to declare whether they are primarily aimed at satisfying our curiosity about how the brain works, at reverse engineering the brain to inform computational science, or at the practical goals of curing disease or augmenting the brain. The balance between these types of goals should be driven by society at large. Where do we put our resources? How much do we value basic knowledge? Of course the hope is that on our way towards basic knowledge about how the brain works, practically useful information and technologies will fall out, as with the Human Genome Project and the mission to the moon. However, unlike the genome and moon projects, the complexity of the brain is entirely unprecedented, and the future utility of obtaining a detailed understanding of the function of every neuron in the brain is much less certain. So, it is probably useful for now to think about basic curiosity-driven exploration, reverse engineering, and the healthcare-driven search for biomarkers as separate goals, and to frame our overarching questions accordingly.

From an analytical perspective, a clear distinction should be made between studying the substrate for computation and studying the algorithms that run on that substrate. Understanding the function of transistors and gates, or of neurons and synapses, is very different from understanding the algorithms that are implemented as computer programs or neural connections. Studying the substrate is primarily a bottom-up endeavor, where the biology and physiology are likely not much different between lower animals and humans. It is much less clear how to chip away at uncovering algorithms.

From the bottom up, I believe we are certainly on our way to understanding real computational algorithms in very simple organisms, but scaling up is daunting, to put it very mildly. Understanding the human brain in particular in an algorithmic way requires figuring out what a brain can do with 20 billion cortical neurons that it can’t do with 6 billion (chimps). Imagine the complexity of algorithms that run on 6 billion neurons with several trillion synapses. I, for one, can’t. Now imagine that that level of complexity just doesn’t cut it, and that to understand the human brain we need to build an understanding of algorithms that apparently can’t be implemented without more neurons.

From the top down (as with bottom-up approaches), the initial steps are well underway and clearly informative. The functional organization of the whole human brain is being mapped down to (few-)millimeter-scale resolution, and the richness of data at this level of many hundreds of parcels will already give us a good handle on how information is handled (in an org-chart kind of way), and on what is normal. From there, drilling down to something one could label as the implementation of a computational algorithm is much more dicey, and I think that what the field can really use (as for the bottom-up approaches) is a clear statement of specific technical goals, and a clear description of exactly what kinds of ‘understanding’ are likely to be revealed by the attainment of those goals. Such a statement would be a great way to rally the field towards a finite set of goals. Any takers?
