Monday, July 10, 2017

Taking yesterday's post further, the question quickly becomes: what the hell is motor preparatory activity for? Note that when I say preparatory activity I'm talking about e.g. delayed reaching tasks and other experiments recording from the dorsal premotor cortex (PMd) in primates. What about rodents?

Maybe this question is even deeper than it sounds. Rodents are a real mystery because it turns out you can completely wipe out a rat's motor cortex and it continues moving and living normally, with one caveat: if you introduce an unexpected perturbation into the environment they are used to (i.e. some behavioral assay they were trained on prior to the motor cortex removal), they don't seem to know what to do about it. That might be a bit of an overinterpretation of behavior on my part, but the point is that the only situations where de-motor-corticated rats behave differently from rats with a motor cortex are when such unexpected perturbations are introduced. And, funnily enough, after the first time they encounter the perturbation they quickly adapt and deal with it just like normal rats. This result is at the crux of Adam Kampff's work trying to figure out what the hell cortex is really for. His lab's working hypothesis is that it is a brain structure evolved to produce robust behaviors - behaviors resistant to all kinds of unexpected situations, absolutely vital for survival (e.g. https://www.youtube.com/watch?v=u73hRPH4RQs).

Zooming back in to rodents vs primates, the picture looks somewhat like this: whereas primate motor cortex directly controls muscles, rodent motor cortex seems to provide highly specialized input to subcortical structures that directly control muscles. I probably need to read up more on what exact anatomical connections exist, but clearly muscle control can be performed by rodents without their motor cortex (through spinal reflex loops + subcortical input). What is the analog of this in primates? Could it be PMd? Maybe it usually lives in the nullspace, then jumps into the potent space when absolutely needed. The delayed reaching task doesn't seem like the kind of unexpected situation where this sort of processing would be needed, but then again rat motor cortex presumably is not silent during run-of-the-mill motor activity either.

Here's an idea for nullspace computation: the motor cortex is constantly processing and updating its model of the environment (w.r.t. what motor movements are useful to make - e.g. there is a stable wall to my left and an unstable wall to my right so make sure to hold on to the left in case of an earthquake), ready to exploit this information when need be by jumping into the potent space. There's a rodent and primate experiment there. Could PMd be doing something like this? After all, cuing a reaching target is an update to the kind of information about the world needed to behave correctly in the task. As is applying a force field during reaching.
Back to computation in the nullspace. How is it useful? It's easy to see how it's useful for preparatory motor activity: the null-potent space distinction allows for gating the downstream effect of neural activity, so that it can act in a preparatory manner. In other words, by living in the nullspace, preparatory activity can safely plan the next movement without interfering with the current one.

But let's break this picture down a little further. What is a "movement"? Where is the distinction between the current movement and the next one currently being planned? Does the motor system really work this way? The simplest way to imagine what motor systems do is: take some vector representing what you want achieved, then calculate and spit out the set of muscle commands that will achieve it. Where does preparatory activity come into play there? The idea in the literature seems to be that motor cortex is a "dynamical system", with the property that it behaves quite differently when put in different initial conditions. The preparatory activity's job is thus to choose the right initial conditions. But again - what does "initial" mean? In papers, the initiation of the movement is given to you on a platter with the delayed reaching task "Go" cue. But in the real world there are no Go cues. And there is no delay period between target onset and go cue (except in really contrived situations, e.g. ready, set, go!). So what does preparatory activity do then? It's a bit of a mystery to me. I should probably read more, but it sounds like a classic case of abstracting principles of neural activity from a highly contrived and artificial experimental task, i.e. modern calcium-imaging-era neuroscience.

One possibility is the existence of primitive motor movements. In this case, there are very distinct units called "movements", and each one can be produced by simply initializing the motor system in the right way and then letting it run (presumably with feedback). I'm pretty sure the idea of motor primitives has been around for a while, but I should look more into it. The idea is quite appealing from a learning perspective too, whereby simple tasks can be easily accomplished by a sequence of motor primitives (i.e. a sequence of different initial conditions) but harder tasks require learning new motor primitives, which might require some rewiring of e.g. motor cortex.

I'm actually working on this problem right now - although I'm avoiding the issue of learning for the moment and just trying to see if we can hardwire this into a network. What I'm finding, at a very preliminary stage (just trying to do this with a linear dynamical system - which could be harder than in a nonlinear network, but linear systems are easy to work with analytically), is that it's really quite hard to design a dynamical system with the desired properties. You want a dynamical system that (1) produces highly (meaningfully) distinct trajectories when started at distinct initial conditions but (2) is robust to small perturbations of its initial condition. The brain is really noisy, so (2) is just as important as (1). It is worth noting that - at least in rat barrel cortex - cortical dynamics look to be pretty chaotic, i.e. condition (2) doesn't hold. But one could easily imagine motor cortex is wired up in a certain way for (2) to hold. What I'm finding is that making (1) hold at the same time is pretty hard.
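Here's a minimal numerical sketch of why (1) and (2) are in tension for a linear system. The dynamics matrix below is just an arbitrary stable spiral, nothing fit to data; the point is that for linear dynamics the gap between any two trajectories evolves under the same operator, so contracting noise (2) also contracts the distinction between conditions (1):

```python
import numpy as np

def simulate(A, x0, T=200, dt=0.01):
    """Euler-integrate dx/dt = A x from initial condition x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(T):
        xs.append(xs[-1] + dt * (A @ xs[-1]))
    return np.array(xs)

# Stable rotational dynamics: eigenvalues -0.5 +/- 3i (a decaying spiral)
A = np.array([[-0.5, -3.0],
              [ 3.0, -0.5]])

x_a = simulate(A, [1.0, 0.0])    # "condition A" initial state
x_b = simulate(A, [0.0, 1.0])    # "condition B" initial state
x_p = simulate(A, [1.05, 0.02])  # condition A, slightly perturbed

# For linear dynamics x(t) = expm(A t) x0, so the gap between any two
# trajectories is expm(A t)(x0 - x0'): the SAME operator governs both
# (1) separation of distinct conditions and (2) sensitivity to noise.
sep_ab = np.linalg.norm(x_a - x_b, axis=1)   # condition separation
sep_ap = np.linalg.norm(x_a - x_p, axis=1)   # perturbation size

# Both gaps decay at the same exponential rate: you can't contract the
# noise without also contracting the distinction between conditions.
print(sep_ab[0], sep_ab[-1])  # separation shrinks over time
print(sep_ap[0], sep_ap[-1])  # so does robustness-relevant noise
```

Nonlinearity (or feedback, as below) is presumably what lets a real network escape this: different basins of attraction can separate conditions while each basin locally contracts noise.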

One caveat that came to mind: when we speak about the initial conditions of the system, I think it's important to note that the system is a closed-loop feedback system. One could imagine the feedback makes it easier to ensure (2) holds, while the wiring makes (1) hold.

Friday, July 7, 2017

Not much on my mind this morning. I spent a long time this weekend helping my girlfriend with her masters dissertation research, delving into marketing and psychology journals. I was pretty appalled by the quality of papers in that field. They often seem to do all the right analyses and statistical tests - I think I even saw corrections for multiple comparisons at times. But then they don't include a single plot! Bizarre. Am I seriously supposed to go through and actually read the results section? Read the numbers and the p values and test statistics? Ridiculous. To each their own, I guess. The actually bad part was mostly the methods: often terribly written and badly executed. It brings back memories of reading really bad psychology papers during my undergrad - the marketing stuff seems to fall right in that corner of the field that gives psychology such a bad rep, i.e. I wonder how much of this stuff is replicable.

We're currently redesigning the systems and theoretical neuroscience course taught at the Gatsby. It's a course designed for both students from the Gatsby (from maths/physics/computer science backgrounds) and students from the SWC (from biology backgrounds) to take together. The structure says it all: two lectures per week, one in theory and one in biology. How do we do this well? To start thinking about this I lined up all the topics one would want to cover in a "foundations of theoretical neuroscience" class and then thought about what systems/biology topics fit alongside, e.g. coding - sensory systems, optimal control - motor systems, networks - ?, ... It's not so easy. But the hardest part is designing the biology lectures in a certain way so that they complement the theory (I am starting to sound quite theory-biased, not sure if that's a good thing :s). I absolutely hate classic "intro to the visual system" lectures where they go through the classical picture of the visual system that you get from a textbook without really delving into detail. But maybe that's necessary to be able to go any further? I'm not sure. I think the key thing is to take the Marrian approach and always start from "what is the problem this system is trying to solve?", then "what would you expect a system built to solve this to look like?", and finally "what does it actually look like?", but now with an emphasis on the computational problem. But this obviously gives a highly incomplete picture, since there are many, many important things observed experimentally for which we have no idea what they are for. We can't leave those out: this is the "bottom-up" side of theory, whereby we try to come up with a theory to explain an observation (as opposed to "top-down" theory, where you specify a computation and try to come up with a theory for how a brain-like thing could do it, e.g. supervised learning -> backprop -> Tim Lillicrap's research). You could just tack these on to the end.
I don't think this would be the worst idea in the world. Once you've set up your investigation of the visual system as looking for ways in which the brain solves some problem, that already gives you some perspective and a framework within which to think about what these new puzzling observations mean. Another task is convincing someone to build their lecture this way - it's a lot more work than your typical intro to ___ system. Also, this is a very top-down (maybe theory-centric) way of thinking about how to teach neuroscience. I think it's the right way, but does everyone else?

The reason this just popped into my head is because one area I realized was totally underrepresented was cognitive psychology. I think the classic macrostructure in systems courses is (sensory systems)-(motor systems)-(cognitive and learning systems). Indeed, this is mainly how we have divided it. Shouldn't cognitive psychology have a place in that last section? It's not entirely clear, which I think is very sad. Usually that last section consists of reinforcement learning, conditioning, memory, and decision making (in the sense of e.g. random dot motion discrimination), with their biological counterparts in reward systems, neuromodulators, hippocampus, and LIP stuff. What about language? What about reasoning? There is loads of research out there on these higher-level, truly "cognitive" phenomena. But unfortunately we have no way of mapping these to biology. In my opinion none of that research is anywhere close to being grounded in biology, mainly because these are phenomena unique to human beings (in some sense making them the most important to study), so we can't do calcium imaging or ephys - just fMRI or EEG from time to time. That said, there are a lot of classic results and interesting patterns in the data. Just because we can't relate it to brains doesn't necessarily mean we shouldn't include it in a systems course. Or does it? There is something to be said here about what systems neuroscience students should know. But I think there is also something deeper to be said about the direction of such research. How far can we take such investigations without grounding them in biology? There is a reason psychology is one of the fields suffering the most from the replication crisis...

Wednesday, July 5, 2017

Coming back to yesterday's post - does the muscle activation --> limb movement mapping really change in regular life? I mentioned it because it certainly does in these classic force field reaching experiments - but when do you ever encounter a force field in real life? By and large I think this mapping stays relatively constant. Except maybe when you are lifting weights. I haven't been able to think of another situation where it doesn't.

So my current outlook is that the motor system has the forward mapping from muscle activation to limb movement hardwired into it. And possibly the reverse mapping as well (likely, but maybe not necessary?). Its key job then is to figure out what limb movements to make: given some goal, infer what sequence of movements is needed to achieve it. As I mentioned in the previous post, this inference will depend on a lot of factors about your environment. A given limb movement will have very different consequences in different environments. More on this another time.

Recently I've been thinking about computation in the nullspace. There is a big idea that has been going around the motor cortex literature since around 2010ish, spurred by the excellent work of Krishna Shenoy and Mark Churchland. The idea is that we should think about motor cortex as a dynamical system, whereby different initial conditions lead to different trajectories in phase space that translate to different limb movements. The prediction then being that in a delayed reaching task, the preparatory activity (in dorsal premotor cortex) that occurs during the delay period (a fixed-length time interval between target onset and reaching movement onset at a go cue) serves to put the system in the right initial conditions to generate the appropriate reach. But this raises a suddenly obvious question: how does this dorsal premotor cortex preparatory activity not generate movements? Their answer: it lives in the nullspace of its mapping onto motor cortex, meaning that it doesn't affect motor cortex activity. E.g. if a motor cortical neuron has two presynaptic inputs with weights +1 and -1, all activity patterns in the nullspace are such that those two input neurons are equally active (so the postsynaptic neuron is silent). Or something along these lines - we don't really have a mechanistic circuit model to explain these phenomena yet...
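The linear version of this cancellation idea is easy to sketch. Here's a toy example, where "nullspace" just means the nullspace of an arbitrary random readout matrix W (the dimensions and W itself are made up for illustration, nothing fit to data):

```python
import numpy as np

# Hypothetical linear readout from N upstream neurons to M downstream ones.
rng = np.random.default_rng(0)
N, M = 6, 2
W = rng.standard_normal((M, N))

# Nullspace basis via SVD: the rows of Vt beyond the rank span null(W).
U, s, Vt = np.linalg.svd(W)
null_basis = Vt[M:]     # (N - M) directions with no downstream effect
potent_basis = Vt[:M]   # directions that do drive the downstream neurons

# Preparatory-like activity confined to the nullspace...
prep = null_basis.T @ rng.standard_normal(N - M)
# ...produces (numerically) zero downstream drive,
print(np.linalg.norm(W @ prep))

# while activity in the potent space does not.
move = potent_basis.T @ rng.standard_normal(M)
print(np.linalg.norm(W @ move))
```

The +1/-1 example above is the one-neuron case of this: W = [+1, -1], and the nullspace is exactly the patterns where both inputs are equally active.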

Could nullspace activity be useful for other computations? One recent study up on bioRxiv by Juan Gallego and colleagues shows that in a force field task exactly like the one I described in the previous post, the only difference in preparatory activity between early (not learned, so bad reaching performance) and late (learned, with stable good reaching) force field trials within a session was activity in the nullspace. This makes sense from the above point of view of using the nullspace to set up the right initial conditions - presumably you need different initial conditions when there is a force field. But nullspace activity is happening all the time, concurrent with potent space activity (the opposite of the nullspace - activity that does affect downstream areas). How can we use this for efficient computation? What are other situations where it could be useful?

Monday, July 3, 2017

This is my first stream of consciousness writing post. I've decided to try to do this daily, or at least weekdaily, in the morning to just have a place to put ideas down on paper. I've been thinking a lot about motor control and motor learning. What are the kinds of mechanisms the brain needs to do this? It seems obvious that control theory is relevant here - I should probably delve into some of that literature. There is so much neuroscience literature to look at first though, it's a bit overwhelming. The big question that seems to me to remain totally unanswered is where the learning takes place. Not where in the brain, but where in the computational graph, so to speak. To be able to generate the right motor commands to achieve a given goal requires you to 1. know the mapping from goal to [limb] movement, and 2. know the mapping from neural activity to [limb] movement. And in fact the second mapping contains two mappings in it: 2a. from neural activity to muscle contraction, and 2b. from muscle contraction to limb movement. The hard part I guess is that mapping 1 is not unique - many different movements can achieve the same goal, i.e. the movement-to-goal direction is many-to-one, not injective. Mapping 2b is definitely unique, but it turns out that under certain conditions 2a sometimes is not (cf. some Science paper from early 2000's).

Mapping 1 is not unique, but it is obviously constrained, particularly for easy problems. You wouldn't swing your arm around your head and throw it out in front of you just to reach for a coffee mug right in front of you - that's a massive waste of energy (and a possible risk of injury). So the space of muscle movements that map onto a given motor goal is certainly a subspace already highly constrained by a reasonably obvious set of cost considerations (this solves Bernstein's famous problem if you consider that your costs are only specific to the task-relevant errors - Todorov & Jordan 2002). I think mapping 1 is also highly variable, depending on the environment. If you want to reach for an object on an elevated surface, you will probably use totally different movements depending on whether the surface you are standing on is perfectly stable or off balance.
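The redundancy-plus-cost picture can be made concrete in a toy linear setting. Everything here is made up for illustration (a random wide "muscle-to-endpoint" matrix J standing in for mapping 2b, and minimum squared activation standing in for an energy cost): any nullspace component achieves the same goal but costs more, so the cost picks out a unique solution from the non-unique mapping 1:

```python
import numpy as np

# Toy redundant "muscles -> 2D endpoint" map: 5 muscles, 2 task dimensions.
rng = np.random.default_rng(1)
J = rng.standard_normal((2, 5))
goal = np.array([1.0, -0.5])

# Infinitely many activations u satisfy J u = goal; a simple cost
# (minimum squared activation, a crude stand-in for energy) picks one:
u_star = np.linalg.pinv(J) @ goal   # least-norm solution

# Adding any nullspace component to u_star still achieves the goal
# (this is exactly the Bernstein-style redundancy)...
null_dir = np.linalg.svd(J)[2][2:].T @ rng.standard_normal(3)
u_alt = u_star + null_dir
print(np.allclose(J @ u_alt, goal))          # same endpoint

# ...but costs more energy than the optimum, since the pseudoinverse
# solution is orthogonal to the nullspace of J.
print(np.sum(u_alt**2) > np.sum(u_star**2))  # strictly more costly
```

Task-relevant costs (Todorov & Jordan's minimal intervention idea) would go further and leave nullspace variability uncorrected rather than penalizing it, but the min-norm version is enough to show how a cost collapses the redundancy.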

Let me talk about one other situation: the highly contrived experimental paradigm of making reaches within a force field. This paradigm has been studied over and over again by people like Emilio Bizzi and others as a way of investigating motor learning, or motor "adaptation" (inevitably there will be more on this distinction in another post). Now let's put it in our picture of mappings 1, 2a, and 2b. The force field changes the movement produced by a given muscle activation pattern. Well shit, that's a change in mapping 2b - the only one we said was unique. So now we have a twofold problem: we need (to learn) a context-specific (non-unique) mapping 1 and a context-specific (but presumably unique per context) mapping 2b.

Do we lump them together? Which of these (or both) correspond to the famous forward models allegedly found in the cerebellum etc.? I'll have to get back to you on this...

My mind keeps going to model selection when I think about this context-specific mapping idea. Are we capable of learning these models on the fly? Or do we store a learned set of them that we can switch between? If the latter, we need a method for selection. This is a famously hard problem in statistics, although maybe for reasons that are not applicable here (i.e. higher complexity = higher likelihood). More on this another time!
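The "stored set + switching" option can be sketched very simply: keep a small library of forward models and pick whichever one best predicts what actually happened. Everything below is a hypothetical toy (the dynamics, the two contexts, the selection-by-prediction-error rule), in the spirit of modular motor control schemes, not any specific published model:

```python
import numpy as np

def step(x, u, field):
    """Hypothetical one-step limb dynamics; `field` is a
    velocity-dependent force-field strength (0 = baseline)."""
    A = np.array([[1.0, 0.1],    # state = (position, velocity)
                  [0.0, 0.9]])
    B = np.array([0.0, 0.1])
    x_next = A @ x + B * u
    x_next[1] += field * x[1]    # the force field perturbs velocity
    return x_next

# Stored library of forward models, one per learned context:
models = {"baseline": 0.0, "force_field": 0.3}

def select_model(x, u, x_observed):
    """Pick the stored model whose one-step prediction best
    matches the observed outcome (lowest prediction error)."""
    errors = {name: np.linalg.norm(step(x, u, f) - x_observed)
              for name, f in models.items()}
    return min(errors, key=errors.get)

# If the true environment has the force field switched on, the
# prediction-error rule recovers the right context:
x, u = np.array([0.0, 1.0]), 0.5
x_obs = step(x, u, field=0.3)
print(select_model(x, u, x_obs))   # "force_field"
```

Note that the statistics worry (higher complexity = higher likelihood) doesn't bite here, because all the candidate models have the same complexity; the selection is purely by predictive fit to incoming sensory data.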