Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
src/diffusers/models/autoencoders/autoencoder_kl_hunyuanimage.py (outdated; review comment resolved)
sayakpaul left a comment:

Looks quite ready! My comments are mostly minor, apart from some suggestions on potentially reducing some code (definitely not merge-blocking).
Let's also add tests and a doc page entry 👀
src/diffusers/models/autoencoders/autoencoder_kl_hunyuanimage_refiner.py (outdated; review comment resolved)
src/diffusers/models/autoencoders/autoencoder_kl_hunyuanimage.py (outdated; review comment resolved)
    return h


class AutoencoderKLHunyuanImage(ModelMixin, ConfigMixin, FromOriginalModelMixin):
In order for FromOriginalModelMixin to work properly, don't we have to add a mapping function in single_utils.py? Cc: @DN6

Reply: Yeah, we can remove it if single-file support for this isn't needed. Or we add it in a follow-up if it is.

Reply: Considering how big the model is, I would imagine GGUF support would be a reason to support single file.

Reply: You can do your own GGUFs out of diffusers checkpoints:
https://huggingface.co/docs/diffusers/main/en/quantization/gguf#convert-to-gguf
src/diffusers/models/autoencoders/autoencoder_kl_hunyuanimage_refiner.py (outdated; review comment resolved)
src/diffusers/models/autoencoders/autoencoder_kl_hunyuanimage_refiner.py (review comment resolved)
    return hidden_states, encoder_hidden_states


class HunyuanImageTransformer2DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin):
If we subclass from AttentionMixin, I think utilities like attn_processors will become available automatically and we won't have to implement them here. Cc: @DN6
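For intuition, an AttentionMixin-style `attn_processors` property typically just walks the module tree and collects each attention module's processor, which is why subclassing removes the need for a hand-written copy. A stripped-down, torch-free sketch of that pattern (the class and attribute names below are illustrative, not diffusers' actual implementation):

```python
# Illustrative sketch of the "mixin collects processors" pattern.
# This is NOT the real diffusers AttentionMixin; names are invented.
class FakeAttention:
    def __init__(self, name):
        self.processor = f"processor_for_{name}"

class AttentionMixinSketch:
    @property
    def attn_processors(self):
        """Map each attention submodule's name to its processor."""
        return {
            name: module.processor
            for name, module in self.named_modules()
            if isinstance(module, FakeAttention)
        }

class TinyTransformer(AttentionMixinSketch):
    def __init__(self):
        self.blocks = {
            "block_0.attn": FakeAttention("block_0"),
            "block_1.attn": FakeAttention("block_1"),
        }

    def named_modules(self):
        return self.blocks.items()

model = TinyTransformer()
print(model.attn_processors)
```

Because the lookup is generic over the module tree, any model that inherits the mixin gets the property for free instead of reimplementing it per class.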
    hidden_size = num_attention_heads * attention_head_dim
    mlp_dim = int(hidden_size * mlp_ratio)

    self.attn = Attention(
Not a merge blocker, but we could consider a dedicated HunyuanImageAttention implementation here. Happy to open a PR myself as a follow-up.
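To ground the dimensions in the snippet above: hidden_size is the product of the head count and the per-head dimension, and mlp_dim scales it by mlp_ratio. A quick, hypothetical sketch in plain Python (no torch; nothing here is actual diffusers code, and the example values are arbitrary):

```python
# Hypothetical shape bookkeeping for an attention block; illustrative only.
class AttentionDims:
    def __init__(self, num_attention_heads: int, attention_head_dim: int, mlp_ratio: float):
        # Total width of the attention projection.
        self.hidden_size = num_attention_heads * attention_head_dim
        # MLP inner width, truncated to an int as in the snippet above.
        self.mlp_dim = int(self.hidden_size * mlp_ratio)

dims = AttentionDims(num_attention_heads=24, attention_head_dim=128, mlp_ratio=4.0)
print(dims.hidden_size, dims.mlp_dim)  # 3072 12288
```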
hunyuan21, this branch does not seem to have been merged successfully; is any action planned for the next step?
fix #12321
fast test
guider support
This PR adds guider support to the HunyuanImage pipeline (requested by the Hunyuan team). This is the first pipeline to use guiders and sets the pattern for future pipelines. I've attached a test script that covers the main usage patterns.
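For readers unfamiliar with guiders: they generalize classifier-free guidance, whose core step combines the conditional and unconditional model predictions. A minimal sketch of that combination step (pure Python on lists; this is the textbook CFG formula, not the diffusers guider API):

```python
# Classifier-free guidance combination, element-wise:
# guided = uncond + scale * (cond - uncond); scale > 1 strengthens conditioning.
def cfg_combine(cond, uncond, guidance_scale):
    return [u + guidance_scale * (c - u) for c, u in zip(cond, uncond)]

cond = [1.0, 2.0, 3.0]    # prediction with the text prompt
uncond = [0.5, 1.0, 1.5]  # prediction with the empty prompt
print(cfg_combine(cond, uncond, guidance_scale=3.5))
```

A guider abstraction lets the pipeline swap this step for other guidance strategies without changing the denoising loop itself, which is why establishing the pattern in one pipeline matters for the ones that follow.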