Saturday, October 15, 2016

First post!

To my many many readers:
I guess I should start by introducing myself and motivating what this blog is supposed to be. I'm a grad student in London, just starting my PhD in computational neuroscience. A computational neuroscience hopeful, I spend a lot of time thinking about the brain and even more about how to do so. It is not hard to see that we are currently sitting right in the golden age of neuroscience: the past decade has seen a non-stop stream of mind-blowing innovations in tools for looking at and measuring the brain, and multi-billion dollar brain science funding efforts are popping up at the national level. But, along with this insane acceleration in methodology, we seem to have seen, if anything, a deceleration in ideas for how to understand how the brain works.
I stress now, with all honesty, that this is what I care about: how the brain works. My curiosity couldn't care less about brain disease and treatment - these things are (the most) important, of course - but the questions and ideas bubbling in my brain don't tend to touch on any of these things at all. I wonder how this piece of meat could be so powerful. I wonder how its microscopic building blocks interact to produce its macroscopic behaviors. I wonder how computers are so much better than it at putting two and two together, yet absolutely helpless at seeing and learning about the world. This piece of meat can learn an entire language perfectly by just crying, eating, and sleeping for two years. It doesn't even have to try.
I bring up the comparison with computers particularly because computation is at the heart of modern neuroscience. One of the few fundamental ideas that everyone seems to agree on is that, just as a kidney is an organ built for filtering toxic waste out of the blood, the brain is an organ built for computing. As an undergrad, one of my most memorable lectures - one that was pivotal in putting me on my current path - was about thinking of the brain as a Turing machine: just like this laptop works on the basis of binary switches, the brain might work on the basis of binary all-or-nothing action potentials. But when you really start unpacking the way computers work and the way the brain works, you quickly begin to see that the problem is far bigger. We have no idea how brains compute, and hence no idea how far, if at all, the analogy with digital computers goes.
So, since neuroscience seems to be so clueless about how the brain works (except for the fact that it computes - wait, what does that mean?), I have decided to do a great service to society and lend a helping hand. I study computational neuroscience at the Gatsby Computational Neuroscience Unit, working on neural computation with Peter Latham and Adam Kampff. For now, I am working from a theoretical perspective, meaning that I think a lot about how neurons might compute without actually watching them compute. This approach has its benefits and its pitfalls. So what's really important to me these days is trying to figure out exactly what role I want to play in neuroscience, if any. The theory side has really caught my fancy, as they say here, but it is really hard to see how far it can take us.
The idea behind starting this blog is to give myself a way to reflect on these things. I always find that writing things down takes the ideas in my head orders of magnitude further, so hopefully this will do the same. I think one of the best ways to figure out exactly what kind of science you want to do is to imagine your ideal scientific result: picture yourself in that legendary eureka breakthrough moment that leads to a Nobel prize and ten Nature papers. What does that result look like? What kind of question is it answering? If this blog leads me to a concrete answer to this question, it will have done its job.
One thought I had today: I mentioned above that what my curiosity cares about is how the brain works. But maybe that's not the heart of the matter. The reason understanding how the brain works is so interesting is because of its awesome computational power. The inferences it makes on a millisecond by millisecond basis, so effortlessly and with such messy biological machinery, are otherworldly. We see this especially through mathematics and computer science. Try solving the problem of vision and you will quickly see how impossibly hard it is. Yet we humans do it literally without trying. So maybe the more important question is not how, but why the brain is so good at what it does. The Turing machine has provided us exactly the language to formally think about the problems the brain solves so easily, and we have been able to exploit it to build incredible machines that solve a lot of these problems. But it is striking how much better digital computers are at some things and how much better the brain is at others. Why? Answering this question would answer some of the biggest questions about the brain, and provide the breakthrough AI has been waiting for for so long. Can we answer it by investigating complex interactions between integrate-and-fire neurons? Or are there some more intricate biological principles at play here? Maybe the answer is something akin to deep learning?
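For the non-neuroscientists among my many readers: an integrate-and-fire neuron is about the simplest model there is of the all-or-nothing action potential I mentioned above. Here is a minimal sketch of the leaky variant, simulated with simple Euler steps - all parameter values are illustrative, not fitted to any real neuron:

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r=10.0):
    """Return spike times (ms) given an input current trace (nA per time step).

    Illustrative parameters: dt in ms, tau (membrane time constant) in ms,
    voltages in mV, r (membrane resistance) in megaohms.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Membrane potential leaks back toward rest and is driven by input.
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:          # all-or-nothing action potential
            spikes.append(step * dt)
            v = v_reset            # reset after the spike
    return spikes

# 500 ms of constant 2 nA input drives the neuron to spike regularly.
spikes = simulate_lif([2.0] * 5000)
```

The model integrates its input until a threshold, fires, and resets - that's it. The "complex interactions" question above is whether networks of units this simple, wired together, are enough to account for what brains do.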
As you (I?) can see, mathematics and computer science are at the heart of how I think about the brain. But by putting biology somewhat aside, am I missing out on all the best clues? Maybe the core principles behind the brain's computational power are deep biological properties (molecular? cellular? dendritic? genetic?).
To my now much fewer readers: hopefully not all my posts will be this long.
