Publication list

I finally updated my publication list PubList. This is a satisfying thing to do, and it also makes me reflect on the path I have walked and the path that remains. That path is well summarized in the over-ambitious title of this blog. Looking at my publication list, I guess I am trying to come at the problem from both ends of the spectrum, with a preference for the single-neuron side.

EITN, Cosyne, and a submitted article

I presented this work for the first time in Paris, at a workshop on dendritic computation at the new EITN. I undertook it in collaboration with Dr Claudia Clopath and Dr Simon Schultz.

In this work we present a neuron model that can display both synaptic clustering and scattering. Simultaneously active synapses cluster when they are co-localized on the same dendritic branch (within a ~10-20 micron radius on the dendrite). We say that active synapses scatter when they are located on different dendrites. Both observations have been made in the same type of neuron, e.g. a pyramidal neuron from the upper layers (II/III) of the cortex. In this work we reconcile the two sets of observations. In a sentence, we demonstrate with a model that clustered synapses could be useful during learning, while scattered synapses are useful during sensing. This fits well with the fact that scattered synapses are observed more often during sensory-evoked episodes.

This work is important to me because I showed during my PhD that dendrites enable neurons to compute amazing things (linearly non-separable computations). In other words, I showed that a plane (a neuron) can fly high. Here I am showing how this plane can gain altitude (plasticity). With this work I show how a neuron can learn to do these amazing things, and I also explain some strange pieces of experimental data, killing two birds with one stone.

I recently submitted this work to the Cosyne conference, where it was accepted (the extended abstract is here 14_11CosyneA). I am looking forward to presenting this work in Salt Lake City!

I also submitted this work to a scientific journal! Fingers crossed; I hope it at least makes it through to the review process.

Threshold logic at MIT

I was invited to speak in early November 2014 at a symposium in Boston at MIT, named by its organizers Biologically Inspired Cognitive Architectures (BICA). They also invited me to be part of the organizing committee for one of the symposia, on threshold logic. It was a good opportunity to present, for the first time, my very own work on threshold logic. I upload here the abstract I submitted 14_09BICA. First, I would like to thank the Marie Curie Program, which helped me to go there, as well as Simon Schultz, who oversees my work at Imperial; they allowed me to present this work. Second, I want to say a bit more about this abstract. It deals with how much storage capacity you gain by adding one, two, or three dendritic subunits to a “classic” neuron model. I demonstrate with this work that if you have two non-linearities instead of one, yes, you double the number of parameters, but you more than double the storage capacity of the neuron. In other words, you get more for less. I see this work as a plea for working on networks made of this type of “dendrited” unit (a bit like the book chapter I wrote at the end of my PhD).
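The intuition behind “more for less” can be checked on a toy case. The sketch below is my own illustration, not the capacity calculation of the abstract: it brute-forces, over a small integer weight grid, which Boolean functions of two inputs a single threshold-linear unit can compute, versus a unit whose soma thresholds the sum of two threshold subunits.

```python
from itertools import product

def tlu(w, t, x):
    # Threshold-linear unit: fires iff the weighted sum reaches threshold t.
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= t)

inputs = list(product([0, 1], repeat=2))
grid = range(-2, 3)  # small integer weights and thresholds suffice here

# Boolean functions of 2 inputs computable by a single TLU
single = {tuple(tlu((w1, w2), t, x) for x in inputs)
          for w1, w2, t in product(grid, repeat=3)}

# Functions computable when the soma thresholds the sum of two TLU subunits
double = {tuple(int(tlu((a1, a2), ta, x) + tlu((b1, b2), tb, x) >= T)
                for x in inputs)
          for a1, a2, ta, b1, b2, tb in product(grid, repeat=6)
          for T in (1, 2)}

print(len(single), len(double))
```

On two inputs the single unit reaches 14 of the 16 Boolean functions (everything but XOR and XNOR), while the two-subunit model reaches all 16; the real capacity argument in the abstract is of course about much larger input dimensions.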

Prix Le Monde (that I did not win)

I am uploading here the text I submitted to Le Monde for its Prix de Thèse (in French: PrixLeMondeRC). I did not receive any feedback from them, so it is hard for me to know why they did not like the project. Rereading it, I like this text (quite rare), in particular the comparison between the brain and the Internet. I wrote that connections between computers are a necessary condition for the Internet to exist, but these dense connections are insufficient to explain its power. The phone network is also densely connected. The Internet is not only a communication network; it requires powerful computers to be useful. Likewise, the brain needs “smart” neurons to be smart itself. In other words, the brain’s capacities might not only come from the dense connections between neurons, but also reside in the neurons themselves. What is wrong with this comparison? I might know one day.

The tuning of the neural response

My very few followers might have wondered where I was (or not). Anyway, I have no answer to this question that would make a sensible story. I did, however, present a story that I very much like. I was in Crete for the Dendrites 2014 meeting, where I presented my work on … dendrites and neural tuning. I changed my presentation at the last minute to remove the overlap with Dr Michael Häusser’s presentation (mostly background). As I write, I am working on a manuscript on this topic, to submit before the end of the year (hopefully).

It is important to look at neural tuning because it is a well-known feature of the nervous system. Saying that “dendrites tune neurons” is much more factual than “dendrites enable neurons to compute linearly non-separable functions”. It is a step forward from my PhD work and makes it a bit more concrete. Here is the abstract (2014Crete); I am certain you are impatient to read it!

Uploading My CV

As I will soon apply for grants to extend my post-doc in Simon Schultz’s lab, I have updated my CV (cv_rc) to include my new publications. If you want to know more about my professional experience, if you spot typos, or if you have suggestions, please comment.

My postdoc’s first productions: SFN2013 abstract and IEEE2013 article

This work is a smooth transition between being a doctoral candidate and being a post-doctoral fellow. This abstract (SFN2013) and this article (IEEE2013) both deal with how well a threshold linear unit (TLU) can approximate the feature binding problem (FBP) of two objects made of two independent features. One might wonder why we wanted to answer this question; I chose it because it is trivial to compute the FBP when the neuron has a non-linear dendritic subunit. We already knew from previous work that it is impossible for a TLU to solve this problem perfectly, but not whether it can approximate the FBP. This is why, I think, it is interesting to ask whether a neuron without a non-linear dendritic subunit, e.g. a TLU, is capable of approximating the solution.

The answer to this problem seems binary: either yes, a TLU can approximate this function well, or no, it cannot. In fact, the answer depends on the input space, the sensorium of the neuron. When the input space of the neuron is made of all possible input vectors, the answer is yes. When the input space is made of input vectors half made of ones, the answer is no. The reason for this is not breaking news, but our point can be useful experimentally: it implies that to test the role of non-linearities in dendrites, it is important to choose the particular subset of stimulations that corresponds to these vectors.
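To make this concrete, here is a toy reconstruction (my own stand-in for the FBP, not the exact formulation of the article): the target fires iff one of two “objects” has both of its features present, and we brute-force the best accuracy a TLU can reach on the full input space versus on the half-ones vectors only.

```python
from itertools import product

def fbp(x):
    # Toy feature-binding target (my stand-in): fire iff object A's two
    # features (x[0], x[1]) are both present, or object B's (x[2], x[3]) are.
    return int((x[0] and x[1]) or (x[2] and x[3]))

def best_tlu_accuracy(space):
    # Brute-force the best fraction of `space` a TLU can classify correctly.
    grid = range(-2, 3)
    thresholds = [s + 0.5 for s in range(-8, 9)]  # half-integers avoid ties
    best = 0.0
    for w in product(grid, repeat=4):
        for t in thresholds:
            acc = sum(int(sum(wi * xi for wi, xi in zip(w, x)) >= t) == fbp(x)
                      for x in space) / len(space)
            best = max(best, acc)
    return best

full = list(product([0, 1], repeat=4))        # all 16 input vectors
half_ones = [x for x in full if sum(x) == 2]  # vectors half made of ones
print(best_tlu_accuracy(full), best_tlu_accuracy(half_ones))
```

On the full space the best TLU gets 15/16 of the vectors right; restricted to the half-ones vectors it drops to 5/6, barely above the 4/6 you get by never firing. The quality of the approximation depends entirely on the sensorium.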

Despite the small attendance at the poster (SFN2013poster) - I like it - it at least triggered interesting exchanges of ideas with Simon (my new boss).

And now for something completely different: Cosyne2010B and Biocybernetics2013

This work started at a summer school in Okinawa, where I had the chance to work with Matthijs van der Meer. Matt was the tutor of my project; originally the project was about Bayes and Pavlov, but during our first gathering things changed a little bit. We were a group of students, all with different projects, and we began by introducing ourselves. Tali Sharot was one of the students; she talked about her work on optimism and her studies on the subject. She had already published a Nature paper in which she suggested that optimism, despite being suboptimal, exists to cope with the difficulty of our lives (she also later gave a TED talk on the “optimism bias”). But “what if optimism were optimal?” This is how my project changed. She did not want to collaborate with us, but Matt and I worked on this project successfully.

Our work shows that in certain situations optimism and even pessimism can be optimal features (the Cosyne abstract: Cosyne2010B, the Cosyne poster: Cosyne2010Bposter, and the Biological Cybernetics article: BioCyb2013). This theoretical study demonstrates that different learning rates for reward and punishment can generate optimism/pessimism, which is a rather classic and expected result. The originality of this study is to show that optimism/pessimism creates a distortion in how we evaluate our choices, and that this distortion is inhomogeneous, turning tough choices into easy ones.
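The asymmetric-learning-rate mechanism can be sketched in a few lines. This is a minimal delta-rule illustration of the idea, not the actual model of the article; the learning rates and reward probability are made up.

```python
import random

def learned_value(alpha_pos, alpha_neg, p_reward=0.5, n_trials=20000, seed=0):
    # Delta-rule value learning with separate rates for positive and
    # negative prediction errors: the asymmetry that biases the estimate.
    rng = random.Random(seed)
    v = 0.0
    for _ in range(n_trials):
        reward = 1.0 if rng.random() < p_reward else 0.0
        delta = reward - v
        v += (alpha_pos if delta > 0 else alpha_neg) * delta
    return v

true_value = 0.5  # an unbiased learner would converge here
optimist = learned_value(alpha_pos=0.1, alpha_neg=0.02)   # learns more from good news
pessimist = learned_value(alpha_pos=0.02, alpha_neg=0.1)  # learns more from bad news
print(optimist, pessimist)  # optimist settles above 0.5, pessimist below
```

At equilibrium the estimate sits near alpha_pos / (alpha_pos + alpha_neg) instead of the true 0.5, so the same stream of experience yields a systematically rosy or gloomy valuation of the choice.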

I enjoyed working on this topic, as it shows that human “defects” can actually be optimal. The aspect I like the most is the demonstration that pessimism can be optimal: in a situation where you are rewarded all the time, it yields more reward to be a pessimist, because the subject is better able to dissociate the most rewarding choices. This is a notable difference from other studies, where pessimism is clearly negatively connoted. This is also easier to explain to my family than my work on dendrites; e.g., for fun I wrote my first outreach text, published in a newsletter of our bioengineering department (September 2013).

SFN 2012: Saturations and the global strategy

This abstract, written for SFN, introduces the main result of my PLoS Computational Biology paper and makes a remark about neural tuning (SFN2012). I wrote this abstract while the NIPS and PLoS papers were under review. The poster (SFN2012poster) shows that a single saturating dendritic subunit enables a neuron to compute linearly inseparable functions, but I also presented another result: a neuron computing such a function is broadly tuned. This is a prequel to what I am currently working on, i.e. neural tuning and coding. It demonstrates that broad neural tuning is compatible with a positive impact of non-linear summation in dendrites.

A useful complement to my PLoS CB 2013 article: NIPS 2012

Writing this article was pleasant (NIPS2012). It is the outcome of a long thought process, and it helped me a lot in answering some comments from the reviewers of my PLoS article.

It presents the convergences and divergences between sublinear and supralinear summation in dendrites. The two are equivalent in the sense that both types of summation enable neurons to compute linearly inseparable problems, supposing a sufficient (infinite) number of dendritic subunits. But they do so by different means, detailed in the article: for certain problems sublinear summation is better, and for others supralinear summation is better.
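A toy two-subunit neuron illustrates the divergence (my own minimal sketch, with made-up nonlinearities and thresholds, not the models of the article). Both versions below compute a linearly inseparable function of four inputs, but the saturating (sublinear) branch nonlinearity makes the neuron prefer input scattered across branches, while the expansive (supralinear) one makes it prefer input clustered on one branch:

```python
def two_subunit_neuron(x, g, theta):
    # The soma thresholds the sum of two dendritic subunits; g is the
    # subunit nonlinearity applied to each branch's local input sum.
    branch_a, branch_b = x[0] + x[1], x[2] + x[3]
    return int(g(branch_a) + g(branch_b) >= theta)

def sublinear(u):
    return min(u, 1)  # saturating branch

def supralinear(u):
    return u ** 2     # expansive branch

clustered = [(1, 1, 0, 0), (0, 0, 1, 1)]
scattered = [(1, 0, 1, 0), (0, 1, 0, 1)]
for x in clustered + scattered:
    print(x,
          two_subunit_neuron(x, sublinear, 2),    # fires only for scattered input
          two_subunit_neuron(x, supralinear, 3))  # fires only for clustered input
```

Neither preference is achievable with a single linear summation, since the scattered and clustered patterns have the same total input; which nonlinearity is better depends on whether the problem rewards clustering or scattering.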

This article opens up a tough question for the case where the number of dendritic subunits is finite (as in real life): is it possible to prove that a problem, e.g. a feature binding problem, is solvable with supralinear summation and unsolvable with sublinear summation?
