
Category Archives: Uncategorized

Neurons Perform Better Logical Calculations Than Expected

Do artificial neurons underestimate the power of their biological counterparts? Ten years after the first theoretical predictions, researchers have shown that a single isolated neuron can carry out a calculation once thought impossible. Published in Scientific Reports, these findings suggest that employing more complex artificial neurons could improve the efficiency of neural networks.

If neural networks solve all kinds of computations, what can a solitary neuron do? Models that have been in place for more than half a century estimate that its capabilities remain limited, and current artificial intelligences are built on that assumption. Yet those systems are especially energy‑hungry and comparatively inefficient relative to what nature achieves, leading some scientists to suspect that a neuron actually has more tricks up its sleeve. Researchers from the Institute of Electronics, Microelectronics and Nanotechnology (IEMN, CNRS/University of Lille/Polytechnic University of Hauts‑de‑France), the Laboratory of Cognitive and Computational Neuroscience (LNC2, INSERM/ENS–PSL), and the Genes, Synapses and Cognition laboratory (GSC, CNRS/Institut Pasteur) have demonstrated that an isolated neuron is capable of far more logical calculations than commonly believed.

These calculations belong to the class of linearly non-separable operations, which classic models of isolated artificial neurons cannot solve. This limitation may be one reason for the poor energy efficiency of today's networks. In the study, biological neurons solved on their own a problem that requires responding only when excitatory stimuli encoding shapes are paired with others encoding colors. To make the neurons compute, the researchers stimulated them with an excitatory neurotransmitter, glutamate. The glutamate was trapped in "cages" and released by laser beams, allowing precise control over neuronal activation. The distinct locations where glutamate acted enabled the neuron to perform the desired operation. An electrode attached to the neuron confirmed that it fired precisely when the correct conditions were met, which it did reliably.
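The pairing task described above is, like XOR, linearly non-separable: no single weighted sum of the inputs followed by a threshold can solve it. A minimal sketch makes this easy to check by brute force (the binary encoding below is a hypothetical illustration of mine, not the study's actual stimulation protocol):

```python
from itertools import product

# Hypothetical binary encoding of the task: inputs are
# (shape1, shape2, color1, color2); the unit should fire only
# for the two "correctly bound" shape/color pairings.
patterns = {
    (1, 0, 1, 0): 1,  # shape1 with color1 -> respond
    (0, 1, 0, 1): 1,  # shape2 with color2 -> respond
    (1, 0, 0, 1): 0,  # mismatched pairing -> stay silent
    (0, 1, 1, 0): 0,  # mismatched pairing -> stay silent
}

def separable(patterns, weights=range(-3, 4)):
    """Brute-force search for a linear threshold unit solving the task."""
    for w in product(weights, repeat=4):
        for theta in weights:
            if all((sum(wi * xi for wi, xi in zip(w, x)) >= theta) == bool(y)
                   for x, y in patterns.items()):
                return True
    return False

print(separable(patterns))  # prints False: no weights/threshold work
```

The impossibility also follows on paper: adding the two "respond" constraints gives w1+w2+w3+w4 ≥ 2θ, while adding the two "stay silent" constraints gives w1+w2+w3+w4 < 2θ, a contradiction for any real-valued weights.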

The IEMN team now aims to determine whether artificial neurons endowed with this capability would give rise to higher-performing artificial neural networks.

Microscopy image (two‑photon excitation) of a neuron used in the study.
The coloured dots on branches 1 and 2 indicate the sites that were excited by glutamate; the dots on branch 3 were activated by electrical stimulation. The scale bar represents 20 µm. The large white element corresponds to the pipette that records the neuron’s activity.

References
Demonstration that sublinear dendrites enable linearly non‑separable computations.
Romain D. Cazé, Alexandra Tran‑Van‑Minh, Boris S. Gutkin & David A. DiGregorio.
Scientific Reports volume 14, Article number: 18226 (2024).
https://doi.org/10.1038/s41598-024-65866-9

In French: https://www.insis.cnrs.fr/fr/cnrsinfo/les-neurones-effectuent-de-meilleurs-calculs-logiques-que-prevu

Book chapter

A post to point toward the book chapter I wrote at the end of my PhD: Caze2013Chap

Even a passive dendrite extends a neuron's computational capacity

A neuron possesses receptive structures called dendrites that are either active or passive. Active dendrites can sum excitatory inputs both below and above their arithmetic sum. This feature turns a neuron into a two-layer neural network capable of universal computation. Linearly separable computations previously defined what a single neuron could do. Some neurons, however, lack active dendrites. We therefore ask here what happens when dendrites are passive and can only sum two excitatory inputs below their arithmetic sum. We enumerate parameter sets and focus on excitatory inputs to determine how many computations can be implemented by a neuron model with either an active or a passive dendrite. We then analytically generalize these numerical results to an arbitrary number of dendrites. First, we show that a single dendrite, either passive or active, suffices to compute linearly non-separable computations. Second, we analytically prove that a sufficient number of passive dendrites enables a neuron to be universal for positive computations. Third, we show how a neuron can implement these computations using two distinct strategies: (1) a single dendrite suffices to trigger a somatic spike; (2) somatic spiking requires the cooperation of multiple dendrites. Only a neuron with active dendrites can use strategy (1), while a neuron with either passive or active dendrites can use strategy (2). Finally, we employ strategy (2) to implement a linearly non-separable function in a biophysical model with passive dendrites inspired by cerebellar stellate interneurons. We show here that even passive dendrites enable a neuron to extend its computational capacity well beyond what we previously thought.
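The cooperative strategy (2) is easy to illustrate with a toy two-layer model: branches that saturate (sum below the arithmetic sum) and a soma that thresholds their total. The synapse placement, saturation level, and threshold below are illustrative choices of mine, not the chapter's fitted parameters:

```python
# Toy two-layer point neuron: each dendrite saturates (sublinear
# summation), and the soma thresholds the sum of dendritic outputs.
SAT = 1.0    # dendritic saturation ceiling (illustrative)
THETA = 2.0  # somatic spike threshold (illustrative)

def dendrite(inputs):
    """Passive/sublinear branch: sums its inputs but saturates."""
    return min(sum(inputs), SAT)

def neuron(s1, s2, c1, c2):
    """Feature binding via strategy (2): mismatched shape/color pairs
    land on the same branch and saturate; matched pairs spread across
    both branches, so only they reach the somatic threshold."""
    branch_a = dendrite([s1, c2])
    branch_b = dendrite([s2, c1])
    return int(branch_a + branch_b >= THETA)

for x, want in [((1, 0, 1, 0), 1), ((0, 1, 0, 1), 1),
                ((1, 0, 0, 1), 0), ((0, 1, 1, 0), 0)]:
    print(x, '->', neuron(*x), 'expected', want)
```

Placing the mismatched pairs on the same saturating branch is what makes the sublinear nonlinearity do the work: a mismatch collapses onto one branch's ceiling, while a match recruits both branches and crosses threshold.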

A mountain becoming flat underwater.

Some neurons respond preferentially to certain stimuli, like a sound or a picture of Jennifer Aniston. Hubel and Wiesel obtained the Nobel Prize, some 50 years ago, for the discovery of stimulus-selective neurons in the cat visual cortex and for the model associated with this discovery. This model shines through its simplicity. Imagine a mountain: each coordinate corresponds to a (visual) stimulus, and the altitude at that coordinate corresponds to how strongly a neuron responds to that stimulus. In more technical terms, the height of a given point equals the depolarization created by that stimulus. Now imagine that this mountain sits in the middle of a sea. The tip of the mountain outside the water is the supra-threshold response, i.e. an activity level sufficient to trigger neural activity noticeable to the rest of the brain. This metaphor seems a good way to understand neuronal stimulus selectivity, but like all models it cannot explain everything.

Electrophysiology has recently made astounding progress, and it is now possible to hyperpolarize a neuron in vivo. In our metaphor, it becomes feasible to raise the sea level. Scientists used this technique and made an unexpected observation. When they hyperpolarize a neuron (raise the sea level), the mountain, as it goes underwater, becomes flat: the neuron loses its selectivity and responds equally to all stimuli. Why? This is the topic of one of my current projects. But I will talk more about it in another post, where I will try to explain why a neuron might start to respond as strongly to Jennifer Aniston as to any other Hollywood resident.

Indifference (in Science) can sometimes be frustrating.

Motivated by my last post, I decided to update my blog more regularly.

In this post I am not asking a question but writing about my life in Science and
my experience as a young researcher.

I greatly enjoy my life in Science, so I will try not to complain too much
(pledge), but life in science can sometimes be frustrating. I often
hear that the life of a young researcher can be difficult because of the
constant struggle. That he or she always has to fight against the existing
dogmas or ideas held by researchers higher up in the hierarchy. I tend to
disagree. Fighting against someone is motivating. You are never as
courageous as when you have a mighty opponent, even a fierce and overwhelming
opponent with far more means than you. A worthwhile struggle is much
less frustrating than passive indifference. Science is just one area touched by
indifference, and in a society overflowing with sounds, images, and information,
indifference is sometimes the only defense. So, I understand why people could
be indifferent. Still, I realize more and more that my frustration most often
comes from this indifference. Is there a solution? Well, it might be a clunky
solution but it is the only one I found.  Indifference. I just say to myself
that if my work is worth something then someday, somewhere, somebody will use
it. Today I have a decent place to live, someone I love to live with, food on
my plate, and people let me do what I love to do: Science. So no complaints, really.

A NEURON + Python tutorial

During the first week of OCNC (Okinawa Computational Neuroscience Course), I had the chance to give a tutorial on NEURON: neuron_tuto. This tutorial contains some self-advertisement ;): it demonstrates that a neuron with two passive dendrites can compute a linearly non-separable function, namely the feature binding problem. I hope it will be useful for your work.

Neuron, human, brain and society.

Yesterday I had dinner with some friends, and to explain my work I used an analogy that they liked. I have used this analogy many times to justify my point of view. I am convinced that neurons are smarter than we thought, way smarter, and that this should change our views on the brain. I spent my PhD trying to define "smart" and to demonstrate this intelligence (see the review below). I believe that our brain is capable of amazing things, e.g. being goal-directed, because neurons are capable of these amazing things. In other words, a neuron is as complex as a brain. It may seem a weird opinion, but I found an analogy/question that makes this proposition less strange. Do you think that a society is more complex than a human? I do not think so. I think that a human/neuron is as complex as a society/brain (even if they are surely complex in different ways).

Our discovery, presented in this review written with some experimental collaborators (Tran-van-Minh et al._2015), demonstrates that sublinear summation in dendrites makes a neuron more intelligent than we thought. And I am convinced that it is one step toward my proposition about neurons and humans.

EITN, Cosyne, and a submitted article

I presented this work, undertaken in collaboration with Dr Claudia Clopath and Dr Simon Schultz, for the first time at a workshop on dendritic computation at the new EITN in Paris.

In this work we present a neuron model that can display both synaptic clustering and scattering. Simultaneously active synapses cluster when they are co-localized on the same dendritic branch (within a ~10–20 micron radius on the dendrite). We say that active synapses scatter when they are located on different dendrites. Both observations are possible in the same type of neuron, e.g. a pyramidal neuron from the upper layers (II/III) of the cortex. In this work we reconcile the two sets of observations. In a sentence, we demonstrate with a model that clustered synapses could be useful during learning, while scattered synapses are useful during sensing. This fits well with the fact that scattered synapses are observed more during sensory-evoked episodes.
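With sublinear dendritic summation, the clustered and scattered arrangements drive the soma very differently, which is the lever such a model can exploit. A toy illustration (the saturation value and unit synaptic weights are made up for the example, not taken from the submitted model):

```python
# Toy contrast between clustered and scattered synapses on
# saturating (sublinear) branches.
SAT = 1.5  # per-branch saturation ceiling (illustrative)

def branch(drive):
    """Sublinear branch: passes its drive but saturates at SAT."""
    return min(drive, SAT)

def soma(per_branch_drive):
    """Somatic depolarization = sum of saturating branch responses."""
    return sum(branch(d) for d in per_branch_drive)

# Four active synapses of unit weight:
clustered = soma([4, 0])        # all on one branch -> hits the ceiling
scattered = soma([1, 1, 1, 1])  # one per branch -> sums linearly
print(clustered, scattered)     # 1.5 4.0
```

The asymmetry is the point: under sublinear summation, scattering the same set of synapses depolarizes the soma far more than clustering them, so where synapses sit can matter as much as how many are active.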

This work is important to me because I showed during my PhD that dendrites enable a neuron to compute amazing things (linearly non-separable computations). In other words, I showed that a plane (neuron) can fly high. Here I am showing how this plane can gain altitude (plasticity). With this work I am showing how a neuron can learn to do these amazing things, and I am also explaining some strange pieces of experimental data, killing two birds with one stone.

I recently submitted this work to the Cosyne conference, where it was accepted (the extended abstract is here: 14_11CosyneA). I am looking forward to presenting this work in Salt Lake City!

I also submitted this work to a scientific journal! Fingers crossed; I hope it will at least go through to the review process.

Prix Le Monde (that I did not obtain)

I am uploading here the text I submitted to Le Monde (in French: PrixLeMondeRC) for its Prix de Thèse. I did not receive any feedback from them, so it is hard for me to know why they did not like the project. Rereading it, I like this text (quite rare), in particular the comparison between the brain and the Internet. I wrote that connections between computers are a necessary condition for the Internet to exist. But these dense connections are insufficient to explain its power. The phone network is also densely connected. The Internet is not only a communication network; it requires powerful computers to be useful. Likewise, the brain needs "smart" neurons to be smart itself. In other words, the brain's capacities might not come only from the dense connections between neurons, but might also reside in the neurons themselves. What is wrong with this comparison? I might know one day.

The tuning of the neural response

My very few followers might have wondered where I was (or not). Anyway, I have no answer to this question that would make a sensible story. I did, however, present a story that I like very much. I was in Crete for the Dendrites 2014 meeting, where I presented my work on … dendrites and neural tuning. I changed my presentation at the last minute to remove the overlap with Dr Michael Häusser's presentation (mostly background). As you read this, I am working on a manuscript on this topic to submit before the end of the year (eventually).

It is important to look at neural tuning because it is a known feature of the nervous system. Saying that "dendrites tune neurons" is much more factual than "dendrites enable the computation of linearly non-separable functions". It is a step forward from my PhD work and makes it a bit more concrete. Here is the abstract (2014Crete); I am certain you are impatient to read it!
