
Throttle Zaptec Cloud requests to avoid hitting rate limits (fixes #165)#190

Merged
sveinse merged 2 commits into custom-components:master from feliciaan:master
Jul 14, 2025
Conversation

@feliciaan
Contributor

Hello

I was running into the same issue described in #165 when using multiple Zaptec chargers — getting frequent HTTP 429 errors (“Too Many Requests”).

To fix this, I added rate limiting using the aiolimiter package, which helps queue and throttle the number of requests sent. This keeps things within Zaptec’s allowed request limits and avoids spamming retries.

This fixes it on my system with 15 Zaptec chargers connected.
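For illustration, here is a minimal stdlib sketch of the throttling idea (all names are hypothetical; the PR itself uses aiolimiter's AsyncLimiter, which is likewise entered as an async context manager): concurrent requests for many chargers are queued and spaced out instead of being fired all at once.

```python
import asyncio

class LeakyBucketLimiter:
    """Minimal stand-in for aiolimiter.AsyncLimiter: allows at most
    max_rate acquisitions per time_period seconds."""

    def __init__(self, max_rate: int, time_period: float = 60.0):
        self._interval = time_period / max_rate  # minimum spacing between requests
        self._next_slot = 0.0
        self._lock = asyncio.Lock()

    async def __aenter__(self):
        async with self._lock:
            now = asyncio.get_running_loop().time()
            if self._next_slot > now:
                # Too soon: wait until the next free slot opens up.
                await asyncio.sleep(self._next_slot - now)
            self._next_slot = max(now, self._next_slot) + self._interval

    async def __aexit__(self, *exc):
        return False

async def fetch_state(charger_id: str, limiter: LeakyBucketLimiter) -> str:
    # Hypothetical request wrapper: every cloud call goes through the
    # shared limiter, so a burst of chargers is queued, not spammed.
    async with limiter:
        return f"state-of-{charger_id}"

async def poll_all() -> list[str]:
    # e.g. at most 10 requests per 0.1 s, shared by all 15 chargers
    limiter = LeakyBucketLimiter(max_rate=10, time_period=0.1)
    return await asyncio.gather(
        *(fetch_state(f"charger-{i}", limiter) for i in range(15))
    )

results = asyncio.run(poll_all())
print(len(results))  # 15
```

Because the limiter is shared, the 15 concurrent fetches complete in request order but paced to the configured rate, which is the queue-and-throttle behaviour described above.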

@sveinse
Collaborator

sveinse commented Jul 11, 2025

Thank you for the contribution. This fits into the efforts to adapt to Zaptec's API fair use policy, as tracked in #188.

The plan is to significantly reduce the polling frequency. The policy hints that 60 minutes is reasonable, while we are currently polling every 60 seconds.

May I ask how many chargers you have? Do you have the service bus available for all of them? The polling interval shouldn't be too long if there is no service bus available.

@sveinse sveinse added this to the v0.8 milestone Jul 11, 2025
@feliciaan
Contributor Author

I believe the service bus is working, but is there a recommended way to verify that for sure?

The main benefit of the rate limiter is that, currently, whenever an update is triggered, requests for all chargers are sent at the same time. This creates a burst of requests on every poll, regardless of the polling interval. It's usually not an issue with a small number of chargers, but once you go beyond 5 or 6 (in my case, around 30 chargers), it quickly leads to hitting the rate limits.
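The burst behaviour described above is easy to reproduce with a small simulation (names hypothetical): without a limiter, gathering requests for all chargers puts every request in flight simultaneously, no matter how long the polling interval is.

```python
import asyncio

active = 0   # requests currently in flight
peak = 0     # highest concurrency observed

async def fake_request(charger_id: int) -> None:
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)  # simulated network latency
    active -= 1

async def poll_all(n: int) -> None:
    # Mirrors the described behaviour: one update triggers
    # requests for every charger at the same time.
    await asyncio.gather(*(fake_request(i) for i in range(n)))

asyncio.run(poll_all(30))
print(peak)  # 30: every request was in flight at once
```

Wrapping the body of `fake_request` in a shared rate limiter would cap that peak, which is exactly what the aiolimiter change does for the real cloud calls.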

@sveinse
Collaborator

sveinse commented Jul 11, 2025

There are three sources of events/changes:

  • Poll (by interval) - this is global across all chargers
  • Poll due to updated HA entities. This is where we can be smarter about it and only poll the affected charger and not all, as you point out
  • Push events from Zaptec via the service bus

We definitely need a burst limiter like the one proposed, because at the moment I cannot see how we can do without polling.

We also need to refactor a few things to be compliant with the API policy. Are you willing and able to test alpha/beta versions when they are available? I don't have access to more than one charger, so it's hard to verify how it works with many.

Collaborator

@sveinse sveinse left a comment


When I think of it, it would have been nice if the rate limiter and the exponential back-off were part of the same system. I wonder, is there something that provides both, or are we fine having them separate?

@sveinse
Collaborator

sveinse commented Jul 14, 2025

The current design in _retry_request() has a mechanism for exponential back-off. I'm not overly fond of having many different rate-limiting systems working all at once. I like the idea of AsyncLimiter(), which can be used as a context manager. Is there any mechanism that offers both burst limiting and exponential back-off on retries? If not, should this be factored out into a separate function that includes both?
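One possible shape for such a combined helper, sketched with the stdlib only (all names are hypothetical, not the actual _retry_request() code; an aiolimiter.AsyncLimiter could be passed as the limiter, since it is also an async context manager):

```python
import asyncio
import random

class TooManyRequests(Exception):
    """Stand-in for an HTTP 429 response from the cloud API."""

async def limited_retry_request(send, limiter, retries: int = 4,
                                base_delay: float = 0.05):
    """Every attempt, including retries, passes through the shared rate
    limiter; failures back off exponentially before re-entering the queue."""
    for attempt in range(retries):
        async with limiter:
            try:
                return await send()
            except TooManyRequests:
                if attempt == retries - 1:
                    raise
        # Exponential back-off with jitter, slept outside the limiter
        # so a waiting retry does not hold a rate-limit slot.
        await asyncio.sleep(base_delay * 2 ** attempt * random.uniform(0.5, 1.0))

# Usage: a request that returns 429 twice before succeeding.
calls = 0

async def flaky():
    global calls
    calls += 1
    if calls < 3:
        raise TooManyRequests
    return "ok"

result = asyncio.run(limited_retry_request(flaky, asyncio.Semaphore(2)))
print(result)  # "ok", after two backed-off retries
```

Keeping both mechanisms in one function means there is a single place that decides when a request may go out, whether it is a first attempt or a retry.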

Collaborator

@sveinse sveinse left a comment


I think this looks good. I've tested it and the throttling seems to work fine. I'd like to change a little bit on how the exponential back-off is implemented, but we'll do that in a follow-up PR. Approved.

@sveinse sveinse merged commit 37fc5c8 into custom-components:master Jul 14, 2025
1 check passed
@sveinse
Collaborator

sveinse commented Jul 14, 2025

Thank you for the contributions, @feliciaan. Much appreciated.
