ML-PEA: Machine Learning-Based Perceptual Algorithms for Display Power Optimization

Kenneth Chen1, Nathan Matsuda2, Thomas Wan2, Ajit Ninan2, Alexandre Chapiro2, Qi Sun1

1 NYU   2 Intel

project page

Our pipeline generates images that consume less power than the original when shown on a display, while minimizing perceptual impact. Here, we show an example image generated with our technique alongside the reference and a uniformly dimmed version. The corresponding dimming maps are shown in the insets, with the multiplicative scaling factor indicated by the color bar on the right. Note that the uniformly dimmed image and the image generated with our technique consume the same amount of display power: 52.1% of the reference.

Abstract

Image processing techniques can be used to modulate the pixel intensities of an image to reduce the power consumption of the display device. A simple example of this consists of uniformly dimming the entire image. Such algorithms should strive to minimize the impact on image quality while maximizing power savings. Techniques based on heuristics or human perception have been proposed, both for traditional flat panel displays and modern display modalities such as virtual and augmented reality (VR/AR). In this paper, we focus on developing and evaluating display power-saving techniques that use machine learning (ML). We validate our pipeline quantitatively, using image quality metrics, and through a subjective study. Our results show that participants prefer our technique over a uniform dimming baseline at high target power savings. We hope this work serves as a template and baseline for future applications of deep learning to display power optimization.

Quick Start

To train a model, run the following command:

python train.py --w_vgg 0.5 --w_ssim 5 --w_power 50 --method MULT --dataset div2k

where --w_vgg, --w_ssim, and --w_power are weights on the VGG, SSIM, and power losses, respectively. --method MULT sets the dimming-map modulation to multiplicative, I_new = I * dimming_map, and --dataset selects the training dataset (e.g. div2k). Place dataset images in div2k/train/*.png and div2k/test/*.png.
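To make the `MULT` modulation concrete, here is a minimal NumPy sketch of the multiplicative dimming step and a simple power proxy. The `power_fraction` helper (mean pixel intensity relative to the reference) is our own illustrative assumption, not the paper's actual display power model; the function names are hypothetical and do not appear in the codebase.

```python
import numpy as np

def apply_dimming(image, dimming_map):
    """Multiplicative modulation (--method MULT): I_new = I * dimming_map.

    Both arrays are float images in [0, 1]; the result is clipped back
    into the valid display range.
    """
    return np.clip(image * dimming_map, 0.0, 1.0)

def power_fraction(image, reference):
    """Hypothetical power proxy: mean intensity relative to the reference.

    Real display power depends on the panel technology; this stand-in
    only illustrates how a power target can be measured and compared.
    """
    return image.mean() / reference.mean()

# A uniform 70% dimming baseline: the power fraction equals the scale factor.
ref = np.random.rand(64, 64, 3)
dimmed = apply_dimming(ref, np.full_like(ref, 0.7))
print(round(power_fraction(dimmed, ref), 2))  # → 0.7
```

A learned dimming map would replace the uniform `0.7` array with a spatially varying one, allowing the network to spend the same power budget where it matters perceptually.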

Checkpoints

Saved checkpoints are found at our Google Drive link.

Acknowledgements

Thank you to Tina Su for help running experiments, and to the user study participants for their time. This research is partially supported by National Science Foundation award #2225861.

Contact

Contact Kenneth Chen ([email protected]) with any questions about the codebase.

Related Projects

PEA-PODs: Perceptual Evaluation of Algorithms for Power Optimization in XR Displays, SIGGRAPH 2024. Kenneth Chen, Thomas Wan, Nathan Matsuda, Ajit Ninan, Alexandre Chapiro†, Qi Sun†.

Color-Perception-Guided Display Power Reduction for Virtual Reality, SIGGRAPH Asia 2022. Budmonde Duinkharjav*, Kenneth Chen*, Abhishek Tyagi, Jiayi He, Yuhao Zhu†, Qi Sun†.

About

Official code release for "ML-PEA: Machine Learning-Based Perceptual Algorithms for Display Power Optimization", published at Eurographics 2026
