Conversation

@EmileAydar
Contributor
What does this PR do?

As suggested in this issue - #36979 (comment) - this PR rewrites the altCLIP model card so it matches the standardized template that @stevhliu introduced in #36979.

Key points:

  • Friend-style overview + sidebar badges.
  • Usage tabs: the Pipeline tab explains why it's missing, the AutoModel tab contains a runnable similarity example, and the transformers-cli tab shows one-liner quantisation.
  • Quantisation section with a dynamic INT-8 snippet (Linear layers only, embeddings left in FP32) and a link to the Quantization guide.
  • Attention visualisation section: explains the current AttentionMaskVisualizer limitation and shows a manual ViT-attention workaround; embeds a PNG heat-map (CLS attention, last layer, head …).
  • All text follows Steve's template verbatim where required.
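The runnable similarity example mentioned above follows the usual CLIP-style recipe: normalise the image and text embeddings, scale their dot products by a learned temperature, then softmax over the candidate texts. As a rough, dependency-free sketch of that recipe (all embedding values and the logit scale of 100.0 below are made-up stand-ins, not actual AltCLIP outputs):

```python
import math

def normalize(v):
    # Scale a vector to unit length, as CLIP-style models do before comparison.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy embeddings standing in for model outputs (hypothetical values).
image_embed = normalize([0.9, 0.1, 0.2])
text_embeds = [normalize([0.8, 0.2, 0.1]), normalize([0.1, 0.9, 0.3])]

logit_scale = 100.0  # assumed stand-in for the model's learned temperature
logits = [logit_scale * sum(a * b for a, b in zip(image_embed, t))
          for t in text_embeds]
probs = softmax(logits)  # probability of each caption matching the image
```

The first toy text embedding points in roughly the same direction as the image embedding, so it receives nearly all of the probability mass.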

Checklist

  • [DONE] Brief model description
  • [DONE] Ready-to-use code examples (Pipeline ✗, AutoModel ✓, transformers-cli ✓)
  • [DONE] Quantisation example for a large model (dynamic INT-8)
  • [DONE] Attention mask visualiser (note + workaround). I did not provide an image because it is not produced by the HF visualizer, but I can add one if required.
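For context on the dynamic INT-8 checklist item: symmetric per-tensor weight quantisation is roughly the scheme that dynamic quantisation applies to each Linear layer's weights, while embeddings stay in FP32. A minimal, dependency-free sketch with made-up weight values (not AltCLIP parameters):

```python
def quantize_int8(weights):
    # Symmetric quantization: map the largest magnitude to 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate FP32 values from the INT8 codes.
    return [x * scale for x in q]

# Toy FP32 weights standing in for a Linear layer's parameters.
weights = [0.5, -1.27, 0.03, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Storing 8-bit codes plus one FP32 scale per tensor is what shrinks the model roughly 4x for the Linear layers, at the cost of the small rounding error visible when comparing `weights` with `restored`.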

Who can review?

Hi @stevhliu !

I updated the altCLIP model card following the standard format.

Let me know what you think of it.

Happy to revise. Thanks!

Member

@stevhliu left a comment


Thanks, added some suggestions to make it simpler!

Comment on lines 58 to 68
<hfoption id="transformers-cli">

altCLIP does **not** require `transformers-cli` at inference time, but the tool is handy for quantisation (see next section).

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AltCLIPModel, AltCLIPProcessor
```

</hfoption>
Member


Remove this as well since it isn't supported

---

[[autodoc]] AltCLIPProcessor
## Attention visualisation
Member


This can be removed since it isn't supported

Member

@stevhliu left a comment


Thanks, there are still some unresolved comments. You also don't need to modify the other files, testing_utils.py and modeling_deit.py.

@EmileAydar reopened this Jun 11, 2025
@EmileAydar
Contributor Author

Hi @stevhliu,
All checks seem to be passing now.
Please let me know if you'd like any further tweaks, or if you think this is good to go.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@stevhliu left a comment


Thanks!

@stevhliu merged commit 32dbf4b into huggingface:main on Jun 11, 2025
10 checks passed
lmarshall12 pushed a commit to lmarshall12/transformers that referenced this pull request Jun 12, 2025
* Update altclip.md

* Update altclip.md

* Update altclip.md

* Update altclip.md

* Update altclip.md

* Update altclip.md

* Rename altclip.md to altclip.mdx

* Rename altclip.mdx to altclip.md

* Update altclip.md

* Update altclip.md

* Update altclip.md

---------

Co-authored-by: Steven Liu <[email protected]>