Latest news

[Dec 25] I have been promoted to the rank of Full Professor

[Nov 25] Article accepted for publication at Neurocomputing: TEP-ones: A Simple yet Effective Approach for Transferability Estimation of Pruned Backbones

[Nov 25] Article accepted for publication at IEEE Transactions on Multimedia: Security and Real-time FPGA integration for Learned Image Compression

[Nov 25] Article accepted for publication at WACV 2026: How I Met Your Bias: Investigating Bias Amplification in Diffusion Models (https://arxiv.org/pdf/2512.20233)

[Nov 25] I have become an Associate Editor for Transactions on Machine Learning Research

[Nov 25] Article accepted for publication at AAAI 2026: I-INR: Iterative Implicit Neural Representations

Applications for professorships, PhDs, and internships are now open!

Two assistant professor positions (on traditional compression and animation with AI) will open soon!


PhD positions spanning multimodality and efficiency are now available! Reach out for more info.


Looking for interns (stagiaires)! Email for more info!


Mission

In a world where deep learning increasingly defines the state of the art, and where the race for computational capability shapes new technologies, it is crucial to open the black box that deep learning is. Many well-intentioned researchers are already taking important steps in this direction, despite a wide multitude and heterogeneity of scientific backgrounds. This is good, this is progress!
Our long-term goal is to develop techniques that simplify these models. Some models are easier to prune than others: why? How is information processed inside a deep model, from a macroscopic perspective? These are a few of the questions we need to answer to move in the right direction!

Green AI

Removal of unnecessary neurons and/or synapses to reduce power consumption.

Model debiasing

Understanding biases in data and correcting the trained model.

Privacy in AI

Guaranteeing privacy in AI will be an important theme in the coming years.

Understand the information flow

Modeling how information is processed in deep models is our final goal.

Currently working with

Ekaterina Iakovleva

PostDoc
Deep Learning Unlearning

Frédéric Lauron

Research Engineer
Energy Consumption of Deep Learning

Imad Eddine Marouf

PhD student
Efficient transformers for computer vision
Previously: Stage M2 Feb-Sep 2022

Victor Quétu

PhD student
Regularization for deep learning


Yinghao Wang

PhD student
Foundation models for EEG processing


Zhu Liao

PhD student
Deep Neural Network pruning



Dorian Gailhard

PhD student
Generative models and Graph Neural Networks for SoCs

Lê Trung Nguyen

PhD student
Methods for on-device training

Ivan Luiz De Moura Matos

PhD student
Debiasing through NAS
Previously: Stage M2 Oct 23-Feb 24

Leonardo Magliolo

PhD student
Information flow in Deep Neural Networks

Rayyan Ahmed

PhD student
Explaining and Removing Social Biases in Text-to-Image Generative AI

Cem Eteke

Invited PhD student
Compression of 3D Gaussian Splatting from a Frame Restoration Perspective

Charles Herr

Research path student
Learning alternatives to backpropagation

Lorenza Martins

PRIM project student
Learning how to sample debiasing masks for foundation models

Formerly advising

  • Marta Milovanovic

    (PostDoc 2022/23)
    Main achievement: wrote a paper, published at ICMEW 2023
  • Giommaria Pilo

    (Research Engineer 2023/24)
    Main achievements: contributed to the development of the layer fold library and contributed to one paper published at VCIP 2024
  • Gabriele Spadaro

    (PhD Student 2022/25)
    "Compression with Graph Neural Networks"
    Cotutelle with University of Turin, Ph.D. awarded with honors
    Main achievements: one A* publication, one Q1 journal as first author, three rank A conferences (two as first author, one of which an oral, i.e. top 3%), and seven patents filed.
  • Aël Quélennec

    (PhD Student 2022/25)
    "Energy and Memory-Efficient AI for On-Device Learning"
    Main achievements: two A* publications as second author, one workshop paper.
  • Rémi Nahon

    (PhD Student 2022/25)
    "Debiasing in Deep Neural Networks"
    Main achievements: two A* publications as first author, two workshop papers.
  • Carl De Sousa Trias

    (PhD Student 2021/25)
    "Watermarking deep models"
    Main achievements: one patent, one survey, one paper at AAAI 2024 (rank A*) and one paper at ICPR 2024 (rank A)
  • Melan Vijayaratnam

    (PhD Student 2020/24)
    "Zero-latency video prediction"
    Main achievements: one patent, two papers written
  • Muhammad Ali Salman

    (Invited PhD student 2023/24)
    Main achievements: wrote three papers, one published at BMVC 2024 (rank A) and another at WACV 2025 (rank A)
  • Petro Shulzhenko

    (PRIM project, Oct 2024 - Jul 2025)
    "Distilling depth-compression"
  • Nathan Roos

    (Research path student 2024/25)
    "Unsupervised debiasing modelization"
    Main achievement: wrote a paper, accepted at WACV 2026 (rank A)
  • Haicheng Wang and Zhemeng Yu

    (PRIM project, Oct 2024 - Feb 2025)
    "Deep neural network pruning"
    Main achievement: wrote a paper, accepted at ICCV 2025 (rank A*)
  • Wenqing Zhang

    (Stage M1, Apr-Aug 2025)
    "Dataset pruning"
  • Huanshan Huang

    (Stage M1, Apr-Aug 2025)
    "Efficient methods for backproagation in resource-constrained environments"
  • Sana Hagaza

    (Stage M1, Jui-Sep 2025)
    "Model soups for language models"
  • Nour Hezbri

    (Stage M1, Mar-Aug 2024)
    "Compression with artificial neural networks"
    Main achievements: contributed to three papers: one journal paper, one published at AAAI 2025 (rank A*), and another published at ICCV 2025 (rank A*)
  • Maxime Girard

    (Research path student 2023/24)
    Main achievement: wrote a paper, published at an ECCV 2024 workshop
  • Ivan Khodakov

    (Free stage, Sep 2024-Feb 2025)
    "Methods for debiasing and beyond"
  • Roan Rubiales

    (Research path student 2023/24)
    "Efficient on-device learning"
  • André Pereira e Ferreira

    (PRIM project Nov 2023-Feb 2024)
    Main achievement: contributed to the layer fold library development.
  • Nasim Bagheri Shouraki

    (Stage M2 Jun-Aug 2023)
    "EEG mental state classification information leakage"
    Main achievement: wrote a paper, published in the ECML-PKDD 2023 workshop track

  • Ziyu Li

    (Stage M1 Mar-Aug 2023)
    Main achievement: wrote a paper, published at an ICCV 2023 workshop

  • Chenxi "Lola" Deng

    (Stage M1 May-Aug 2022)
    Main achievement: wrote a paper, published at WACV 2023 (rank A)

  • Paul Mortamet

    (PRIM project Nov 2022-Feb 2023)
    "IBRNet with reduced pixel density"

  • Thierry Xu

    (Stage M1 Jul-Sep 2023)
    "Image compression with neural networks- sota analysis and preliminary results"