CLOUDNS: pause when API fails due to rate limit #3962

Merged
tlimoncelli merged 2 commits into StackExchange:main from RobinDaugherty:feat/cloudns-rate-limit
Jan 6, 2026

Conversation

@RobinDaugherty
Contributor

There was already a Limiter in use here to keep the rate of requests below the apparent limit.

The ClouDNS API doesn't give a proper error response when the rate limit is reached: it returns a 200 status code with an error message in the JSON body, and no headers that would help track the limit or back off for the right amount of time.

A comment in the implementation mentions an undocumented 10-per-second limit, while the error message ClouDNS gives today says the limit is 20 per second. I kept the Limiter settings the same, since 10 per second should be plenty fast.

The provider will now retry the request when the rate limit is reached. At the same time, it "steals" some reservations on the rate.Limiter to quiet other concurrent ClouDNS API calls for about half a second, which seems to be plenty to fix my rate-limit issues. (I tested with 20 domains that use ClouDNS as both registrar and DNS provider, using the functionality in #3961.)

When the rate limit is reached, the provider emits a warn-level message. This follows a pattern I see in the adguardhome and desec providers, but I don't love it: the message is less important and less actionable than other warn-level messages in the project.

@tlimoncelli
Contributor

CC @pragmaton

@tlimoncelli
Contributor

Looks good! Thanks for fixing this!

@tlimoncelli tlimoncelli merged commit 590774f into StackExchange:main Jan 6, 2026
@RobinDaugherty RobinDaugherty deleted the feat/cloudns-rate-limit branch January 7, 2026 09:13