Monday, October 24, 2016

Commentary on "Can a biologist fix a radio?"

I was recently assigned to write a couple of pages of commentary on the famous 2002 paper "Can a biologist fix a radio?", which argues for the importance of systems biology. I think it's highly relevant to modern neuroscience, where we seem to be encountering what Lazebnik (the author of the paper - http://www.cell.com/cancer-cell/abstract/S1535-6108(02)00133-2) calls David's Paradox: every year brings more and better findings, but we don't seem to be getting much closer to a holistic understanding of how the brain works.

This is what I had to say: 


Lazebnik’s 2002 paper leads to some very important conclusions that I believe are ultimately right, but I fundamentally disagree with his arguments from the radio analogy. He seems to believe that the task facing a biologist trying to understand a cell is similar to that facing an engineer trying to understand a radio. I would claim that this is simply false.
The first obvious reason is that cells are indeed much more complex than radios. Lazebnik seems to believe this is an overstatement, claiming that a biologist would also find a radio to be vastly more complex than any engineer would judge it to be. I would argue that, in general, biological phenomena pose significantly more complex scientific problems than their physical counterparts. Maybe this only seems to be the case because we don’t yet have many universal principles or laws to explain them. But I think it is mainly because biology really is that complicated – even the most universal principles we find tend to have dramatic exceptions (e.g. ploidy, epigenetics, extremophiles). Physics, on the other hand, tends to have an easier time postulating principles that hold across all instances of a given phenomenon (e.g. electromagnetism). There’s a reason we’ve been able to tackle physics questions with mathematical formulae for about 2000 years, while mathematical modeling has only arisen as a tool in biology in the past few decades. Because life is such an active process, the sheer number of interacting components leads to systems that are much harder to explain than transistors and capacitors carefully wired together to transform electromagnetic waves into sound waves.
Cell and radio science differ at a much more fundamental level, however. Even if you set aside the complexity argument above, there is a sense in which the problem of understanding a cell fundamentally differs from the problem of understanding a radio: since we built them, we know exactly what radios do. But we don’t actually know what cells are for! We know that they are useful for survival – otherwise they wouldn’t have evolved. But we don’t even know whether they are the optimal solution to the problem of survival (since natural selection is a satisficing, rather than optimizing, process), and we can only guess at how they provide an evolutionary advantage. Certain signal transduction pathways may seem to provide immunological benefits, and others may look like they serve purely metabolic purposes. But fundamentally we can’t actually know what a biological process is for; we can only hypothesize. An engineer approaching a radio, on the other hand, knows exactly what a radio is for. More than that, she also knows what capacitors and transistors are for.
Note how crucial this information is to repairing or understanding an object of study. The engineer can approach the open radio and think about what processes are necessary to transform electromagnetic waves into sound waves. She can make some hypotheses about what components are necessary to implement these processes and then look for them in the machine. She can also interpret the consequences of removing certain components, thus aiding her understanding of what each component does. But a biologist can do none of this. When a biologist removes a component from the system, they may have no idea what has gone wrong. This simple fact enables Lazebnik’s cartoon: it is hard for a biologist to infer anything deeper than whether the machine still works, leading to such simple classifications as most important/really important/undoubtedly most important components. A more nuanced understanding of the deficiencies of the system is impossible without knowing what the system is actually supposed to do.
Before going on to describe how I would approach the radio problem, I pause to note how this is particularly true for brain science. We don’t really have any idea what the brain’s components are actually for. Sensory neuroscientists seem to pretend to know what sensory systems are for, but we don’t actually know even that. Is the visual system designed to minimize the error between percepts and the real world (Hoffman, 2009)? Is it designed to learn a generative model of the statistics observed in the natural world (Schwartz et al., 2007)? Is it designed to constantly predict what will appear in the visual scene (Friston, 2009)? This uncertainty in fact stands at odds with one of the central tenets of cognitive science: Marr’s three levels of analysis. David Marr contended that to understand the brain we must start by specifying what it is meant to be doing (the “computational level”). But, unlike a radio or a cash register (Marr’s own analogy; Marr, 1982), we can’t know what the brain does. All we really know for sure is that it evolved through natural selection, so it must be useful for survival. But we can’t interrogate how or why natural selection did what it did; we can only speculate.
It is for this reason that repairing or understanding a cell is inherently a different problem from repairing a radio. If I were to repair a radio, I would take its purpose as a starting point. I would then approach the problem of transforming electromagnetic waves into sound waves from first principles, trying to postulate properties that the radio must have to be able to do this. Next, I would open it up and see whether I could find anything that endows the radio with these properties. If, through my exploratory analyses, I were to find some other principles at play inside, I would try to see how these fit into a solution to the overarching problem of transforming radio waves into sound waves. Note that this approach relies on the fact that I know what this overarching problem is. It allows me to interpret what I find in the radio and to execute my dissection in a guided and principled manner.
Biologists don’t have this luxury. We need to uncover the overarching principles from the ground up. However, this does not mean that dissection should comprise the core of our investigation. On the contrary, it is crucial that we maintain an active theoretical examination so that experimental findings can build on each other, allowing us eventually to reach the universal principles and answer the questions about functional significance (which lead to practical repairs). This is where I coincide with Lazebnik. There is no way experimental findings can build on each other without precisely formulated theories. As he aptly articulates in the article, such theories are only possible with a universal formal language, such as explicit wiring diagrams or mathematics.

REFERENCES

Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293-301.

Hoffman, D. (2009). The interface theory of perception: Natural selection drives true perception to swift extinction. In S. Dickinson, M. Tarr, A. Leonardis, & B. Schiele (Eds.), Object categorization: Computer and human vision perspectives (pp. 148-165). Cambridge, UK: Cambridge University Press.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco, CA: W. H. Freeman.

Schwartz, O., Hsu, A., & Dayan, P. (2007). Space and time in visual context. Nature Reviews Neuroscience, 8(7), 522-535.


Saturday, October 15, 2016

First post!

To my many many readers:
I guess I should start by introducing myself and motivating what this blog is supposed to be. I'm a grad student in London, just starting my PhD in computational neuroscience. A computational neuroscientist hopeful, I spend a lot of time thinking about the brain and even more about how to think about it. It is not hard to see that we are currently sitting right in the golden age of neuroscience: the past decade has seen a non-stop stream of mind-blowing innovations in tools for looking at and measuring the brain, and multi-billion dollar brain science funding efforts are popping up at the national level. But, along with this insane acceleration in methodology, we seem to have seen, if anything, a deceleration in ideas for how to understand how the brain works.
I stress now, with all honesty, that this is what I care about: how the brain works. My curiosity couldn't care less about brain disease and treatment - these things are (the most) important, of course - but the questions and ideas bubbling in my brain don't tend to touch on any of them at all. I wonder how this piece of meat could be so powerful. I wonder how its microscopic building blocks interact to produce its macroscopic behaviors. I wonder how computers are so much better than it at putting two and two together, yet absolutely helpless at seeing and learning about the world. This piece of meat can learn an entire language perfectly by just crying, eating, and sleeping for two years. It doesn't even have to try.
I bring up the comparison with computers particularly because computation is at the heart of modern neuroscience. One of the few fundamental ideas that everyone seems to agree on is that, just as a kidney is an organ built for filtering toxic waste out of the blood, the brain is an organ built for computing. One of my most memorable lectures as an undergrad, pivotal in putting me on my current path, was about thinking of the brain as a Turing machine: just like this laptop works on the basis of binary switches, the brain might work on the basis of binary all-or-nothing action potentials. But when you really start unpacking the way computers work and the way the brain works, you quickly begin to see that the problem is far bigger. We have no idea how brains compute and hence no idea how far, if at all, the analogy with digital computers goes.
So, since neuroscience seems to be so clueless about how the brain works (except for the fact that it computes - wait, what does that mean?), I have decided to do a great service to society and lend a helping hand. I study computational neuroscience at the Gatsby Computational Neuroscience Unit, working on neural computation with Peter Latham and Adam Kampff. For now, I am working from a theoretical perspective, meaning that I think a lot about how neurons might compute without actually watching them compute. This approach has its benefits and its pitfalls. So what's really important to me these days is trying to figure out exactly what role I want to play in neuroscience, if any. The theory side has really caught my fancy, as they say here, but it is really hard to see how far it can take us.
The idea behind starting this blog is to give me a way to reflect on these things. I always find that writing things down takes the ideas in my head orders of magnitude further, so hopefully this will do the same. I think one of the best ways to figure out exactly what kind of science you want to do is to try to imagine your ideal scientific result: picture yourself in that legendary eureka breakthrough moment that leads to a Nobel Prize and ten Nature papers. What does that result look like? What kind of question is it answering? If this blog leads me to a concrete answer to this question then it will have done its job.
One thought I had today: I mentioned above that what my curiosity cares about is how the brain works. But maybe that's not the heart of the matter. The reason understanding how the brain works is so interesting is its awesome computational power. The inferences it makes on a millisecond-by-millisecond basis, so effortlessly and with such messy biological machinery, are otherworldly. We see this especially through mathematics and computer science. Try solving the problem of vision and you will quickly see how impossibly hard it is. Yet we humans do it literally without trying. So maybe the more important question is not how, but why the brain is so good at what it does. The Turing machine has provided us exactly the language to think formally about the problems the brain solves so easily, and we have been able to exploit it to build incredible machines that solve a lot of these problems. But it is striking how much better digital computers are at some things and how much better the brain is at others. Why? Answering this question would answer some of the biggest questions about the brain, and provide the breakthrough AI has been waiting for for so long. Can we answer it by investigating complex interactions between integrate-and-fire neurons? Or are there some more intricate biological principles at play here? Maybe the answer is something akin to deep learning?
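For the curious reader, when I say "integrate-and-fire neurons" I mean something like the toy model sketched below in Python. It's just a minimal sketch of a leaky integrate-and-fire unit with made-up parameter values, not a model of any real cell: the membrane potential integrates its input, leaks back toward rest, and fires an all-or-nothing spike whenever it crosses a threshold.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    # Toy leaky integrate-and-fire neuron (illustrative parameter values).
    # input_current: injected current (nA) at each time step of size dt (ms).
    # Integrates dV/dt = (-(V - V_rest) + R*I) / tau with the Euler method;
    # emits an all-or-nothing spike and resets V whenever V crosses threshold.
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 2 nA input for 100 ms drives this unit to spike regularly.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(len(spikes), "spikes")

The point of spelling this out is just how absurdly simple the model is compared to a real neuron, which is exactly why it's an open question whether networks of such units can tell us where the brain's computational power comes from.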
As you (I?) can see, mathematics and computer science are at the heart of how I think about the brain. But by putting biology somewhat aside, am I missing out on all the best clues? Maybe the core principles behind the brain's computational power are deep biological properties (molecular? cellular? dendritic? genetic?).
To my now much fewer readers: hopefully not all my posts will be this long.