
Computing using space in dendrites

After two years of work my latest paper is finally out (17_05NECO). What I keep realizing is that science can take an awfully long time. New ideas need time to mature and even more time to be accepted. This paper was an uphill battle through editors and peer reviewers, but it is finally out, and that is a good excuse to write a new post.

This paper demonstrates something important: dendrites can play a crucial role even in the simplest computations. The computation studied here is stimulus selectivity, in other words the simple passing of information that signals the presence or absence of a particular input. Even in this case, dendrites make the computation more resilient to synaptic and dendritic failure. The implementation proposed here may seem too complex for such a simple computation: the preferred inputs are the most dispersed rather than the strongest. But it shows that with dendrites you can do more with less, and be more resilient with fewer synapses.
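To give an intuition for why a dispersed pattern becomes the preferred one, here is a minimal sketch of a neuron with saturating (sublinear) branches. The saturation cap, the number of synapses and the branch layout are illustrative choices of mine, not the parameters of the paper:

```python
def response(branch_inputs, cap=2.0):
    """Somatic drive of a toy neuron whose branches sum their synaptic
    inputs and then saturate at `cap` (sublinear summation)."""
    return sum(min(s, cap) for s in branch_inputs)

# Six equal synapses, placed either one per branch or all on one branch.
dispersed = [1, 1, 1, 1, 1, 1]
clustered = [6, 0, 0, 0, 0, 0]

print(response(dispersed))           # 6.0: the dispersed pattern drives the soma most
print(response(clustered))           # 2.0: the same total input, saturated away

# Resilience: silencing a whole branch of the dispersed pattern
# only shaves off one unit of somatic drive.
print(response([0, 1, 1, 1, 1, 1]))  # 5.0
```

Because no single branch carries much of the response, losing any one synapse or branch degrades the output gracefully instead of abolishing it.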

Another forthcoming work will show that, for the same computation, an implementation employing dendrites uses fewer synapses than a classic one. This becomes particularly interesting when one sees that a saturating dendrite can be implemented with a resistor (which saturates in all cases).

Publication list

I finally updated my publication list PubList. This is a satisfying thing to do, and it also makes me think about the path I have walked and the path that remains. That path is well summarized in the over-ambitious title of this blog. Looking at my publication list, I guess I am trying to come at the problem from both ends of the spectrum, with a preference for the single-neuron side.

EITN, Cosyne, and a submitted article

I presented this work for the first time at a workshop on dendritic computation at the new EITN in Paris. I undertook it in collaboration with Dr Claudia Clopath and Dr Simon Schultz.

In this work we present a neuron model that can display both synaptic clustering and scattering. Simultaneously active synapses cluster when they are co-localized on the same dendritic branch (within a ~10-20 micron radius); they scatter when they are located on different dendrites. Both observations have been made in the same type of neuron, e.g. pyramidal neurons from the upper layers (II/III) of cortex, and in this work we reconcile the two sets of observations. In a sentence: we demonstrate, with a model, that clustered synapses are useful during learning while scattered synapses are useful during sensing. This fits well with the fact that scattered synapses are observed more often during sensory-evoked episodes.

This work is important to me because I showed during my PhD that dendrites enable neurons to compute amazing things (linearly inseparable computations). In other words, I showed that a plane (the neuron) can fly high. Here I show how this plane gains altitude (plasticity). With this work I show how a neuron can learn to do these amazing things, and I also explain some strange pieces of experimental data, killing two birds with one stone.

I recently submitted this work to the Cosyne conference, where it was accepted (the extended abstract is here: 14_11CosyneA). I am looking forward to presenting it in Salt Lake City!

I also submitted this work to a scientific journal! Fingers crossed; I hope it will at least make it through to the review process.

My first productions as a postdoc: SFN2013 abstract and IEEE2013 article

This work is a smooth transition between being a doctoral candidate and being a postdoctoral fellow. This abstract (SFN2013) and this article (IEEE2013) both deal with how well a threshold linear unit (TLU) can approximate the feature binding problem (FBP) for two objects made of two independent features. One might wonder why we wanted to answer this question; I chose it because the FBP is trivial to compute when the neuron has a non-linear dendritic subunit. We already knew from previous work that a TLU cannot perfectly solve this problem, but not whether it can approximate it. This is why, I think, it is interesting to ask whether a neuron without a non-linear dendritic subunit, e.g. a TLU, is capable of approximating the solution.

The answer to this question seems binary: either yes, a TLU can approximate this function well, or no, it cannot. In fact, the answer depends on the input space, the sensorium of the neuron. When the input space of the neuron consists of all possible input vectors, the answer is yes. When the input space consists of input vectors half of whose entries are ones, the answer is no. The reason for this is not breaking news, but the point can be useful experimentally: to test the role of non-linearities in dendrites, it is important to choose the particular subset of stimulations that corresponds to these vectors.
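The input-space dependence can be illustrated with a small sketch. The 4-line encoding of the FBP and the integer-weight search range below are my own illustrative choices, not the formulation used in the article, but they reproduce the dichotomy: on the "half ones" vectors no TLU succeeds, while on the full input space a TLU comes close.

```python
from itertools import product

# Toy feature binding problem on 4 input lines:
# (x1, x2) = the two features of object 1, (x3, x4) = those of object 2.
# Target: respond iff one object carries both of its features.
def fbp(x):
    return int((x[0] and x[1]) or (x[2] and x[3]))

def tlu(w, theta, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

# Restricted input space: vectors with exactly half the lines active.
restricted = [x for x in product((0, 1), repeat=4) if sum(x) == 2]

# Exhaustive search over small integer weights (the impossibility in fact
# holds for any real weights): no TLU classifies the restricted space.
solved = any(
    all(tlu(w, theta, x) == fbp(x) for x in restricted)
    for w in product(range(-3, 4), repeat=4)
    for theta in range(-6, 7)
)
print(solved)  # False

# On the full input space, a single TLU approximates the FBP quite well.
full = list(product((0, 1), repeat=4))
acc = sum(tlu((1, 1, 1, 1), 3, x) == fbp(x) for x in full) / len(full)
print(acc)  # 0.875
```

The impossibility on the restricted space follows from an averaging argument: the two positive patterns force w1+w2+w3+w4 >= 2*theta, while the four cross-pairings force the same sum below 2*theta.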

Despite the small attendance at the poster (SFN2013poster), which I like, it at least triggered interesting exchanges of ideas with Simon (my new boss).

And now for something completely different: Cosyne2010B and Biocybernetics2013

This work started at a summer school in Okinawa, where I had the chance to work with Matthijs van der Meer. Matt was the tutor of my project; originally the project was about Bayes and Pavlov, but during our first gathering things changed a little. We were a group of students, all with different projects, and we began by introducing ourselves. Tali Sharot was one of the students; she talked about her work on optimism and her studies on the subject. She had already published a Nature paper suggesting that optimism, despite being suboptimal, exists to cope with the difficulty of our lives (she later gave a TED talk on the “optimism bias”). But what if optimism was optimal? This is how my project changed. She did not want to collaborate with us, but Matt and I worked successfully on the project.

Our work shows that in certain situations optimism and even pessimism can be optimal features (the Cosyne abstract: Cosyne2010B, the Cosyne poster: Cosyne2010Bposter, and the Biological Cybernetics article: BioCyb2013). This theoretical study demonstrates that different learning rates for reward and punishment can generate optimism or pessimism, which is a rather classic and expected result. The originality of the study is to show that optimism/pessimism creates a distortion in how we evaluate our choices, and that this distortion is inhomogeneous, turning tough choices into easy ones.
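The classic ingredient, value learning with separate rates for positive and negative prediction errors, can be sketched in a few lines. The learning rates, reward probability and trial count below are arbitrary illustrations of mine, not the parameters of the study:

```python
import random

def learn_value(rewards, alpha_pos, alpha_neg, q0=0.5):
    """Rescorla-Wagner-style value learning with different learning rates
    for positive and negative prediction errors. alpha_pos > alpha_neg
    gives an 'optimistic' learner, the reverse a 'pessimistic' one."""
    q = q0
    for r in rewards:
        delta = r - q                                  # prediction error
        q += (alpha_pos if delta > 0 else alpha_neg) * delta
    return q

random.seed(0)
rewards = [random.random() < 0.5 for _ in range(20000)]  # Bernoulli(0.5) reward

optimist  = learn_value(rewards, alpha_pos=0.3, alpha_neg=0.1)
pessimist = learn_value(rewards, alpha_pos=0.1, alpha_neg=0.3)

# The fixed point is alpha_pos*p / (alpha_pos*p + alpha_neg*(1-p)),
# i.e. ~0.75 vs ~0.25 here: the same experience yields distorted values.
print(round(optimist, 2), round(pessimist, 2))
```

The same reward stream thus produces systematically inflated or deflated value estimates, which is the distortion the study then analyzes across choices.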

I enjoyed working on this topic, as it shows that human “defects” can in fact be optimal. The aspect I like most is the demonstration that pessimism can be optimal: in a situation where you are rewarded all the time, it yields more reward to be a pessimist, because the subject is better able to dissociate highly rewarded choices. This is a notable difference from other studies, where pessimism clearly has a negative connotation. It is also easier to explain to my family than my work on dendrites; e.g. I wrote, for fun, my first outreach text, published in a newsletter of our bioengineering department (September 2013).

A useful complement to my Plos CB 2013 article: NIPS 2012

Writing this article was pleasant (NIPS2012). It is the outcome of a long thought process, and it helped me a lot in answering some comments from the reviewers of my Plos article.

It presents the convergences and divergences between sublinear and supralinear summation in dendrites. The two are equivalent in the sense that both types of summation enable neurons to compute linearly inseparable problems, assuming a sufficient (possibly infinite) number of dendritic subunits. But they do it by different means, detailed in the article; consequently, for certain problems sublinear summation is better, and for others supralinear summation is better.
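A toy illustration of the two regimes, with layout and thresholds of my own choosing rather than the article's models: two saturating branches naturally compute one linearly inseparable function, while two thresholded "spiking" branches compute its mirror image.

```python
from itertools import product

def sublinear(x, cap=1):
    """Two saturating branches, {x1,x2} and {x3,x4}; soma fires if drive >= 2."""
    d1 = min(x[0] + x[1], cap)
    d2 = min(x[2] + x[3], cap)
    return int(d1 + d2 >= 2)

def supralinear(x, thr=2):
    """Two thresholded branches; soma fires if at least one branch spikes."""
    d1 = int(x[0] + x[1] >= thr)
    d2 = int(x[2] + x[3] >= thr)
    return int(d1 + d2 >= 1)

for x in product((0, 1), repeat=4):
    # saturating branches reward dispersed input: (x1 OR x2) AND (x3 OR x4)
    assert sublinear(x) == int((x[0] or x[1]) and (x[2] or x[3]))
    # spiking branches reward clustered input: (x1 AND x2) OR (x3 AND x4)
    assert supralinear(x) == int((x[0] and x[1]) or (x[2] and x[3]))
print("both branch types compute a linearly inseparable function")
```

Neither target function is computable by a single linear threshold unit, yet the sublinear neuron prefers inputs spread across branches while the supralinear one prefers inputs concentrated on a branch, which is the divergence in a nutshell.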

This article opens a tough question for the case where the number of dendritic subunits is finite (as in real life): is it possible to prove that a problem, e.g. a feature binding problem, is solvable with supralinear summation and unsolvable with sublinear summation?

A long term work: Plos Computational Biology 2013

This article came out in 2013 (Ploscb2013), but the first draft dates back to the beginning of 2011. It took me a bit more than a year to write it, with the invaluable help of Mark Humphries and of Boris Gutkin. The review and publication process took another year. Initially there was no biophysical model and no general proposition about sublinear summation in dendrites; in other words, the intuition was there but no real results. The initial reviews were bad, but constructive in one sense or another.

The main take-home message of this work is: a neuron (model) made of a soma and a single passive dendrite, without voltage-gated channels, can compute linearly inseparable problems, e.g. the feature binding problem. In other words, neurons are smarter than we originally thought; they are capable of linearly inseparable computations. The dogma that neurons are only capable of linearly separable computations is spurious.

This article also hides an “easter egg”. In this work I focused on positive Boolean functions and the so-called Dedekind problem. After two years of effort I managed to generate all the possible classes of these functions for n=7. There are 490,013,148 different classes of positive functions for n=7, a cumbersome number; 7.8 GB are necessary to store them. I described this generation in my thesis and uploaded the code I used. Note that in 2012 Timothy Yusun independently found this number and added it to the OEIS (an international database of integer sequences).
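For small n the same count can be reproduced by brute force; this is a sketch of mine (not the code from the thesis, which uses a far more efficient generation: n=7 is hopeless by brute force). A function is positive (monotone) iff setting any input bit never turns its output off, and a "class" groups functions that differ only by a permutation of the inputs:

```python
from itertools import permutations

def monotone_tables(n):
    """Yield every monotone (positive) Boolean function of n variables,
    encoded as a truth-table bitmask over the 2**n inputs."""
    for f in range(1 << (1 << n)):
        if all((f >> (x | (1 << i))) & 1
               for x in range(1 << n) if (f >> x) & 1
               for i in range(n) if not (x >> i) & 1):
            yield f

def permute_table(f, perm, n):
    """Truth table of f after renaming input variable i to perm[i]."""
    g = 0
    for x in range(1 << n):
        y = sum(((x >> i) & 1) << perm[i] for i in range(n))
        g |= ((f >> y) & 1) << x
    return g

def count_classes(n):
    """Number of monotone functions up to permutation of the inputs."""
    perms = list(permutations(range(n)))
    return len({min(permute_table(f, p, n) for p in perms)
                for f in monotone_tables(n)})

print([count_classes(n) for n in range(5)])  # [2, 3, 5, 10, 30]
```

The sequence 2, 3, 5, 10, 30, ... continues to 490,013,148 at n=7, the number mentioned above; the canonicalization by taking the minimum over all input permutations is what makes two functions in the same class count once.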
