The Computer as a Model Organism

Recent research in neuroscience has generated enormous datasets. Advances in neuroimaging, electrophysiology, and storage technology mean that vast quantities of data pour out of labs around the world every day. This has seemingly reduced neuroscience to a big-data problem: all one needs to do now is make sense of all these bits. Or has it?

A recent paper by a group at UC Berkeley has challenged the notion that neuroscience is all about data crunching. By devising a clever experiment, the group demonstrated that the tools currently used by data scientists to understand the brain fall short even when applied to an already well-understood model system – the computer.

Comparing the workings of a brain to a computer has become something of a cliché in neuroscience circles. Eric Jonas from the University of California, Berkeley and Konrad Kording from Northwestern University went a step further, effectively asking the neuroscience community to put its money where its mouth is. The central premise of the paper, titled “Could a neuroscientist understand a microprocessor?”, is that if the current methodologies employed in neuroscience research are any good, then they should succeed in explaining the workings of an ordinary digital computer in action.

Before we go into the paper, let's have a look at some of the standard experimental techniques that have become a staple of neuroscience research.
Lesion studies: Selectively disrupting the activity of specific areas of the brain and studying the behavioral deficits that occur as a result.
Studying the tuning properties of individual neurons or neuronal populations: What stimuli do individual neurons code for in the brain?
Analyzing LFP oscillations: Drawing inferences from global dynamics as captured by the summated low-frequency activity of brain regions.
Analyzing functional and effective connectivity across brain areas: Do statistical correlations between areas of the brain hold any information about its functioning?

So the processor takes on the role of the brain, and the individual transistors take on the role of the neurons. Just as the electrophysiologist records from individual neurons, the authors strip the circuitry bare and record from individual transistors while the computer performs a task: playing three 8-bit games – Donkey Kong, Space Invaders, and Pitfall.

Fig 1: The “model system” is studied while it performs “behaviors”: a) Donkey Kong, b) Pitfall, and c) Space Invaders

The paper also raises a much deeper philosophical question. What does it really mean to understand a system? To quote the authors:

“Understanding of a particular region or part of a system would occur when one could describe so accurately the inputs, the transformation, and the outputs that one brain region could be replaced with an entirely synthetic component.”

In line with their conception of what constitutes understanding, the authors propose that any technique worth its salt must be able to unearth certain features of the processor. To quote: “For a processor we know pretty well what we mean with understand”. For starters, we know the basic primitives (called logic gates) that go into making any kind of circuit. These gates are implemented by arranging transistors in a specific order. Figure 2 shows a physical implementation of an AND gate using transistors.


Fig 2: Transistor implementation of AND gate

Source: HyperPhysics, Georgia State University
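The logic of the circuit in Fig 2 can be sketched in code. This is a toy abstraction, not a circuit simulator: each "transistor" is modeled as a simple switch that conducts only when its gate is driven high, and two such switches in series yield AND behavior.

```python
# Toy model: a transistor as a switch that passes its source signal
# only when the gate input is high. (An abstraction for illustration,
# not an electrical simulation of the circuit in Fig 2.)
def transistor(gate_on, source):
    return source if gate_on else 0

def and_gate(a, b):
    # Two transistors in series: current flows (output high)
    # only if both gates are on.
    return transistor(b, transistor(a, 1))

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

The truth table printed at the end matches AND: the output is 1 only when both inputs are 1.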

Still higher up in the hierarchy is a 1-bit adder: a handful of logic gates put together in a manner that allows the addition of bits. Combining these building blocks gives us abstractions like registers and the ALU. What we demand of neuroscience techniques like lesion studies, Granger causality, etc., is an understanding of the microprocessor that encompasses all of these hierarchical levels.


Fig 3: Full adder built from logic gates

Source: Wikibooks
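The full adder in Fig 3 is easy to express with Boolean operators, which makes the hierarchy concrete: two XORs produce the sum, two ANDs and an OR produce the carry. A minimal sketch:

```python
# A 1-bit full adder built from logic gates, following the standard
# construction shown in Fig 3.
def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit; return (sum_bit, carry_out)."""
    partial = a ^ b                          # first XOR: partial sum
    sum_bit = partial ^ carry_in             # second XOR: final sum
    carry_out = (a & b) | (partial & carry_in)  # OR of the two carry paths
    return sum_bit, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = binary 10
```

Chaining such adders bit by bit yields a multi-bit adder, which in turn becomes one component of the ALU.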

“Note that this description in many ways ignores the functions of the individual transistors, focusing instead on circuit modules like “registers” which are composed of many transistors, much as a systems neuroscientist might focus on a cytoarchitecturally-distinct area like hippocampus as opposed to individual neurons.”

So how do we do a lesion study on a processor? We manually destroy various transistors in the processor and assess the loss in “behavior”. For this we don’t actually require physical transistors. Instead, the entire processor and its workings can be simulated using the kind of specialized routines that hardware testers use to verify a circuit on a computer.
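The logic of such a lesion experiment can be sketched on a toy circuit. This is not the paper's simulator; it just knocks out each gate of a 1-bit full adder (hypothetical gate names) and checks whether the circuit's "behavior" – correct addition – survives.

```python
# Toy lesion study: disable one gate of a full adder at a time and
# test whether addition still works. Gate names are hypothetical.
def adder_with_lesion(a, b, cin, lesioned=None):
    partial = 0 if lesioned == "xor1" else a ^ b
    s       = 0 if lesioned == "xor2" else partial ^ cin
    c1      = 0 if lesioned == "and1" else a & b
    c2      = 0 if lesioned == "and2" else partial & cin
    cout    = 0 if lesioned == "or1"  else c1 | c2
    return s, cout

def behaves_correctly(lesioned=None):
    # Compare against ground-truth binary addition over all inputs.
    return all(
        adder_with_lesion(a, b, c, lesioned) == ((a + b + c) & 1, (a + b + c) >> 1)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    )

deficits = {g: not behaves_correctly(g)
            for g in ["xor1", "xor2", "and1", "and2", "or1"]}
print(deficits)  # every lesion produces a deficit
```

Every lesion breaks addition, yet no single gate "encodes" addition – which is precisely the interpretive trap the paper warns about.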

We find that certain transistors are crucial to a certain game while others are not. This observation can mislead us into thinking that specific transistors “encode” for this game, when in fact this subset of transistors is just part of an adder involved in the game. Similar analyses – tuning curves, LFPs, connectivity, and so on – fail to yield any insight into the modular nature of the processor. Despite possessing all possible electrophysiological data (from all 3510 transistors), one still remains oblivious to adders, registers, and RAM, effectively missing the forest for the trees.
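To see how a tuning-curve analysis can surface seemingly meaningful units while revealing nothing about modules, here is a sketch on entirely synthetic data (transistor "traces" and a game variable invented for illustration): rank transistors by how strongly their activity correlates with an on-screen variable, just as one would rank neurons by tuning.

```python
# Synthetic tuning-curve analysis. All data here is fabricated for
# illustration; nothing is taken from the paper's recordings.
import random
random.seed(0)

n_transistors, n_frames = 20, 200
luminance = [random.random() for _ in range(n_frames)]  # stand-in game variable

def activity(t, frame):
    # A few transistors happen to track luminance; the rest are noise.
    signal = luminance[frame] if t < 3 else 0.0
    return signal + random.gauss(0, 0.3)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

tuning = {t: correlation([activity(t, f) for f in range(n_frames)], luminance)
          for t in range(n_transistors)}
top = sorted(tuning, key=lambda t: abs(tuning[t]), reverse=True)[:3]
print(top)
```

The analysis dutifully finds "luminance-tuned" transistors, but nothing in the ranking hints at adders, registers, or any other module – the same blindness the paper demonstrates.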

But if these tools that we have come to rely on over the years fail so spectacularly at elucidating modules, what hope do we have of ever uncovering such design principles? The authors point to research on artificial neural networks as a way to circumvent this modular blindness.

“There are other computing systems that scientists are trying to reverse engineer. One particularly relevant one are artificial neural networks. A plethora of methods are being developed to ask how they work. This includes ways of letting the networks paint images and ways of plotting the optimal stimuli for various areas.”

By pointing out possible flaws in existing methodology, the paper raises the possibility that our current tools are inadequate for understanding the hierarchical features of brains. It may be time for a radical rethink of these methods to see whether they can help us unearth the subtle design principles behind neural computation.

References:

Jonas, Eric, and Konrad Paul Kording. “Could a neuroscientist understand a microprocessor?.” PLOS Computational Biology 13, no. 1 (2017): e1005268.