Conversation

@monstruosoft
Contributor

I finally "upgraded" my PC. It's still over a decade old and can only run SD in CPU mode, but at least now I have enough RAM to run some tests with SDXL.

This PR adds SDXL safetensors model support in LCM-LoRA mode. Much like what #316 did for LCM models, this commit looks for 'XL' in the model name. Is this really the best way to find out whether a model is SDXL-based? A lot of Illustrious- and Pony-based models won't fit that naming convention, so for the time being you'll have to rename your models to include 'XL' in the name.
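The name check could look something like the sketch below. This is a minimal illustration, not the PR's actual code; the function name `is_sdxl_model` and the exact matching rule are assumptions:

```python
def is_sdxl_model(model_name: str) -> bool:
    """Guess whether a checkpoint is SDXL-based from its file name alone.

    This is only a naming heuristic: Illustrious- and Pony-based SDXL
    models often lack 'XL' in their names and would be misclassified
    unless the user renames the file.
    """
    return "xl" in model_name.lower()

print(is_sdxl_model("juggernautXL_v9.safetensors"))     # True
print(is_sdxl_model("waiNSFWIllustrious.safetensors"))  # False, despite being SDXL-based
```

The second example shows exactly the limitation discussed above: an SDXL-derived model without 'XL' in its name falls through the heuristic.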
This commit also adds the option to use a safetensors file for the LCM-LoRA model. This makes it possible, for example, to use the DMD2 LoRA. You could use the HF repo name instead, but then the model name would have to be hardcoded, so the safetensors option seemed like the best solution.
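One hypothetical way to dispatch that setting (the helper name and the (kind, value) convention are my own, not the PR's code): treat a value ending in `.safetensors` as a local LoRA file, and anything else as a Hugging Face repo id.

```python
import os

def resolve_lcm_lora(setting: str) -> tuple[str, str]:
    """Decide how a user-supplied LCM-LoRA setting should be loaded.

    Returns ('file', abs_path) for a local safetensors file (e.g. a
    DMD2 LoRA), or ('repo', repo_id) for a Hugging Face repo such as
    'latent-consistency/lcm-lora-sdxl'.
    """
    if setting.endswith(".safetensors"):
        return ("file", os.path.abspath(setting))
    return ("repo", setting)

kind, value = resolve_lcm_lora("latent-consistency/lcm-lora-sdxl")
print(kind)  # repo
```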
With this commit, SDXL LoRA support should work out of the box. SDXL ControlNet support might require some work, but it should be relatively easy to add.

Add SDXL safetensors model support in LCM-LoRA mode
@monstruosoft
Contributor Author

monstruosoft commented Aug 19, 2025

Please note, I've found an issue with the Qt GUI that causes FastSD to load a new pipeline without releasing the memory from the previously created pipeline. This issue is not caused by this commit and was apparently present prior to this PR, but with SDXL it can cause FastSD to consume huge amounts of RAM.
I've already identified what's causing this issue but it's giving me some headaches to fix it.
[EDIT:] I've created a new PR that fixes this issue.
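For context, the usual pattern for releasing a previous pipeline before building a new one looks roughly like this. This is a generic sketch, not the actual fix in that PR; the `holder` dict and function name are illustrative:

```python
import gc

def release_pipeline(holder: dict) -> None:
    """Drop the reference to the old pipeline and force a collection pass
    so its weights can actually be freed before a new pipeline is built.

    If anything else (e.g. a GUI widget or a cached callback) still holds
    a reference to the pipeline, the memory stays allocated regardless.
    """
    holder.pop("pipeline", None)
    gc.collect()

state = {"pipeline": object()}  # stand-in for a loaded diffusion pipeline
release_pipeline(state)
print("pipeline" in state)  # False
```

The caveat in the docstring is the likely shape of the GUI bug described above: a lingering reference keeps the old pipeline alive even after a new one is created.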

Basic SDXL ControlNet support in LCM-LoRA mode
@monstruosoft
Contributor Author

monstruosoft commented Aug 22, 2025

The latest commit fixes ControlNet support for SDXL in LCM-LoRA mode. ControlNet support in FastSD is basic, even more so for SDXL. I ran some tests with this ControlNet model. Also note that when using ControlNet with SDXL, you have to disable the Tiny AutoEncoder to prevent the following error:

Error in generating images: Tiny autoencoder not available for the pipeline class
StableDiffusionXLControlNetPipeline!

This is not a problem with FastSD; it seems to be an ongoing, unresolved issue in diffusers' StableDiffusionXLControlNetPipeline.
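Until that is resolved upstream, application code could guard the Tiny AutoEncoder by pipeline class. A hypothetical sketch, with the unsupported set derived only from the error message above (not an exhaustive list):

```python
# Pipeline classes known (from the error above) to reject the Tiny
# AutoEncoder (TAESD/TAESDXL) substitution. Assumed for illustration.
TINY_VAE_UNSUPPORTED = {"StableDiffusionXLControlNetPipeline"}

def tiny_vae_allowed(pipeline_class_name: str) -> bool:
    """Return False for pipelines where swapping in the tiny autoencoder
    is known to fail, so the UI can silently fall back to the full VAE."""
    return pipeline_class_name not in TINY_VAE_UNSUPPORTED

print(tiny_vae_allowed("StableDiffusionXLControlNetPipeline"))  # False
print(tiny_vae_allowed("StableDiffusionXLPipeline"))            # True
```

A guard like this would spare users from having to remember to untick the Tiny AutoEncoder option manually.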

@Real-Mo7a

Hi, I replaced the two files lcm.py and lcm_lora.py with the updated ones, but when I try to run the project I'm still getting the same error: `TypeError: argument of type 'NoneType' is not iterable`.

The model I’m currently trying to run is waiNSFWIllustrious, and I want to use it with the LCM-LoRA mode.

Is there something I might have missed, or did I do something wrong?

@monstruosoft
Contributor Author

For LCM-LoRA mode you have to set the LCM-LoRA model to latent-consistency/lcm-lora-sdxl when using SDXL-based models. Other than that, it's hard to tell what's going on from the error in your post alone. I've seen that error when generation fails and the GUI attempts to iterate over an empty list of images. Make sure to disable ControlNet to begin with.
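For what it's worth, that exact message is what CPython raises for a membership test against `None`, e.g. when a setting or a result was never populated. A minimal reproduction (illustrative only, not FastSD's code):

```python
def shows_the_error() -> str:
    """Reproduce the TypeError text from a membership test on None."""
    unset_value = None  # stand-in for an unset setting or a failed result
    try:
        "xl" in unset_value  # membership test on None raises TypeError
        return "no error"
    except TypeError as exc:
        return str(exc)

print(shows_the_error())  # argument of type 'NoneType' is not iterable
```

So the traceback alone points at a `... in <something that is None>` expression somewhere, which is consistent with either an unset model field or a failed generation.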

@rupeshs
Owner

rupeshs commented Aug 30, 2025

> Hi, I replaced the two files lcm.py and lcm_lora.py with the updated ones, but when I try to run the project I'm still getting the same error: `TypeError: argument of type 'NoneType' is not iterable`.
>
> The model I'm currently trying to run is waiNSFWIllustrious, and I want to use it with the LCM-LoRA mode.
>
> Is there something I might have missed, or did I do something wrong?

@Real-Mo7a It is working without any issue here; I hope you are using this model: https://civitai.com/models/827184?modelVersionId=1761560
Also ensure that you are using an SDXL LCM-LoRA model.
[screenshot: FastSD settings showing the SDXL LCM-LoRA configuration]

@rupeshs
Owner

rupeshs commented Aug 30, 2025

@monstruosoft Thanks for the PR

@rupeshs rupeshs merged commit 94680df into rupeshs:main Aug 30, 2025