ReChar: Revitalising Characters with Structure-Preserved and User-Specified Aesthetic Enhancements

SIGGRAPH Asia 2025
BCML, Heriot-Watt University
Imperial College London
Xi'an YingCuiSiTing Electronic Information Technology Co., Ltd.
Nanyang Technological University
Southern University of Science and Technology
Xidian University
University of Exeter


Results demonstrate the practicality and generalizability of ReChar across various reference style images, characters, and prompts.

Abstract

Despite recent advances in generative models, artistic character generation remains an open problem. The key challenge is to preserve character structure, keeping each character legible and intact, while incorporating aesthetic enhancements, which broadly comprise visual styles and user-specified decorative elements. To address this, we propose ReChar, a plug-and-play framework composed of three complementary modules that preserve structure, extract style, and generate decorative elements. These modules are integrated via a fusion model to enable precise and coherent artistic character generation. To evaluate this task systematically, we introduce ImageNet-ReChar, the first large-scale benchmark for artistic character generation, covering multiple writing systems, diverse visual styles, and over 1,000 semantically grounded decorative prompts. Extensive experiments show that ReChar outperforms state-of-the-art baselines in structural integrity, stylistic fidelity, and prompt adherence, achieving an SSIM of 0.8690 and over 93% human preference across all criteria.
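
For reference, the structural-integrity score reported above is SSIM between the generated character and its source glyph. The following is a minimal sketch of such a comparison using scikit-image; the file names, image resolution, and grayscale preprocessing here are illustrative assumptions, not the paper's exact evaluation protocol.

# Sketch: SSIM between a generated character and its reference glyph.
# Resolution, grayscale conversion, and file names are assumptions; the
# paper's evaluation pipeline may differ.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def structural_score(generated_path: str, reference_path: str,
                     size: tuple = (256, 256)) -> float:
    """Return SSIM in [0, 1] between two character images."""
    gen = np.asarray(Image.open(generated_path).convert("L").resize(size))
    ref = np.asarray(Image.open(reference_path).convert("L").resize(size))
    return ssim(gen, ref, data_range=255)

print(structural_score("generated.png", "reference_glyph.png"))

Under this kind of protocol, scores approaching the reported 0.8690 would indicate strong preservation of the character's form.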

Method


Our ReChar framework. ReChar integrates three distinct yet interrelated modules: (1) a character structure extraction module that preserves the integrity of the character's form; (2) an element generation module that produces user-specified decorative elements from textual input; and (3) a style extraction module that captures the visual style of a user-provided reference image. These components are then fused in a controllable synthesis step, enabling flexible, user-customized image generation. To make the approach concrete, we illustrate the generation process with a detailed case study, and a data-flow sketch below.
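
To make the data flow concrete, the following is a minimal sketch of how the three modules might compose. All function names and interfaces are hypothetical illustrations of the pipeline described above, not the authors' released implementation.

# Hypothetical sketch of the three-module ReChar pipeline described above.
# Names and interfaces are illustrative assumptions, not the authors' code.
from typing import Any, Callable

def rechar_generate(
    character_image: Any,           # character whose structure must be preserved
    style_reference: Any,           # user-provided style reference image
    element_prompt: str,            # text describing decorative elements
    structure_extractor: Callable,  # module (1): character structure extraction
    element_generator: Callable,    # module (2): element generation
    style_extractor: Callable,      # module (3): style extraction
    fusion_model: Callable,         # controllable synthesis step
) -> Any:
    # (1) Extract a structure representation that pins down the character's form.
    structure = structure_extractor(character_image)
    # (2) Generate decorative elements from the user's textual input.
    elements = element_generator(element_prompt)
    # (3) Capture the visual style of the reference image.
    style = style_extractor(style_reference)
    # Fuse the three conditions into a single controllable generation call.
    return fusion_model(structure=structure, elements=elements, style=style)

Keeping the three conditions separate until the fusion step is what makes the framework plug-and-play: any one of the structure, element, or style inputs can be swapped without retraining the others.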

BibTeX

BibTeX code here