
rgw: control full sync on bucket replication enable #61003

Closed
clwluvw wants to merge 2 commits into ceph:main from clwluvw:bucket-full-sync-start

Conversation

@clwluvw
Member

@clwluvw clwluvw commented Dec 9, 2024

Introduced a "skip-existing-object-replication-policy" feature, allowing bucket replication to start incrementally for new changes and objects, bypassing a full sync of existing objects. The feature applies to both intra- and cross-zonegroup replication policies, aligning with AWS behavior after the deprecation of their ExistingObjectReplication option.

Fixes: https://tracker.ceph.com/issues/69159

@cbodley
Contributor

cbodley commented Dec 10, 2024

@clwluvw i feel like this is a very significant change to the semantics of multisite replication overall, which currently tries very hard to guarantee that replicated buckets end up with the same contents on each zone. the only exception i'm aware of is that changes to 'sync policy' won't trigger a full sync (discussed in https://tracker.ceph.com/issues/57489 but never finished)

what would this mean for disaster recovery? if you add a new zone to the zonegroup, don't you want it to full sync everything so it becomes a real replica?

This allows optimizing performance by skipping the full sync for buckets with a large number of objects, focusing only on changes going forward.

if there's a large bucket but only the new objects are interesting, maybe those should go in a new replicated bucket instead? by separating the replicated bucket from the unreplicated one, we can still guarantee that all zones converge to the same result for the new bucket

cc @smanjara who's been working on bucket deletion in #53799. bucket deletion is very hard to reason about in a model where buckets can have different contents on each zone. this is why we're proposing changes to sync policy to avoid those cases and force replication to be 'symmetric'. i have a strong intuition that rgw multisite should treat "bucket" as a consistent dataset that, when replicated, eventually looks the same to any zone in its zonegroup

@github-actions

This pull request can no longer be automatically merged: a rebase is needed and changes have to be manually resolved

@clwluvw
Member Author

clwluvw commented Dec 10, 2024

@clwluvw i feel like this is a very significant change to the semantics of multisite replication overall, which currently tries very hard to guarantee that replicated buckets end up with the same contents on each zone.

That's true. I'd love to dive deeper into how we want to approach multi-zone and multi-zonegroup objectives. So far, these behaviors have been controlled through configuration, so I was hoping it wouldn't cause any breakages, allowing the admin to decide on the replication strategy. Perhaps this could be handled within the zonegroup feature, rather than being a daemon-level configuration.

the only exception i'm aware of is that changes to 'sync policy' won't trigger a full sync (discussed in https://tracker.ceph.com/issues/57489 but never finished)

Right, that doesn't trigger a CR to process the entry. However, you'd need to run bucket sync run afterward or write an object to start the full sync process. This could be improved by making bucket sync run operate asynchronously.

what would this mean for disaster recovery? if you add a new zone to the zonegroup, don't you want it to full sync everything so it becomes a real replica?

That brings us back to the earlier question: Do we view bucket replication as a disaster recovery feature or just a one-time replication tool? In AWS, it's designed such that replication works one-way. If you delete a replicated object, it doesn't sync the deletion back to the source unless you have a sync policy pointing back to the source in the destination bucket. I think both use cases exist—some want symmetric replication between buckets across regions for active-active use, while others simply want to use this as a data flow to replicate data to another bucket and then modify or delete it without impacting the source bucket. This is especially true when multiple rules with filters are in place.

This allows optimizing performance by skipping the full sync for buckets with a large number of objects, focusing only on changes going forward.

if there's a large bucket but only the new objects are interesting, maybe those should go in a new replicated bucket instead? by separating the replicated bucket from the unreplicated one, we can still guarantee that all zones converge to the same result for the new bucket

From a UX perspective, users might not like the idea of managing multiple buckets. They typically prefer using policies to control which objects get replicated and want the ability to enable or disable replication on demand.

cc @smanjara who's been working on bucket deletion in #53799. bucket deletion is very hard to reason about in a model where buckets can have different contents on each zone. this is why we're proposing changes to sync policy to avoid those cases and force replication to be 'symmetric'. i have a strong intuition that rgw multisite should treat "bucket" as a consistent dataset that, when replicated, eventually looks the same to any zone in its zonegroup

right, that is something I tried to cover heavily in my PR (#59911), especially in the trimming process, to cover both use cases by introducing new terminology (be9ad44 / ac25a6b / 1c7366c / aa5e119). I'm looking forward to feedback there so we can be more aligned on these side PRs as well.
@cbodley Perhaps we can have another round of discussion in the upcoming rgw-weekly if you may.

@cbodley
Contributor

cbodley commented Dec 10, 2024

yeah, happy to discuss further

Do we view bucket replication as a disaster recovery feature or just a one-time replication tool? In AWS, it's designed such that replication works one-way.

for cross-bucket replication, i could see it skipping objects written before the replication policy was set. i see that replication policy had a flag for ExistingObjectReplication that aws no longer supports, so we should probably match that behavior. but afaik, rgw_data_sync.cc treats both cases (normal data sync and bucket replication policy) the same way

@clwluvw
Member Author

clwluvw commented Dec 10, 2024

Do we view bucket replication as a disaster recovery feature or just a one-time replication tool? In AWS, it's designed such that replication works one-way.

for cross-bucket replication, i could see it skipping objects written before the replication policy was set. i see that replication policy had a flag for ExistingObjectReplication that aws no longer supports, so we should probably match that behavior.

Right, for replicating existing objects you need to run a batch replication job (https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-batch-replication-batch.html), which this PR currently tries to simulate by setting the status to full and then either letting the admin run bucket sync run, or starting the full sync naturally when a new object is written. (But admittedly this is not a nice solution, as it would block the replication of new objects until the full sync completes - it doesn't run in parallel.)

but afaik, rgw_data_sync.cc treats both cases (normal data sync and bucket replication policy) the same way

This is something I tried to cover in #59911 (mostly in the logging phase), and I was looking at this PR as a complement to achieve the AWS model.

flags:
- startup
with_legacy: true
- name: rgw_data_sync_start_full_sync
Contributor

for cross-bucket sync driven by a bucket replication policy, i agree that we shouldn't require full sync. skipping it by default would be compatible with aws, but that would change rgw's default behavior so we probably want a config option or zonegroup feature to allow admins to opt out of full sync. as a zonegroup feature, we could enable that aws behavior by default for new zones/zonegroups but existing deployments would have to enable manually

as discussed previously, i don't think we should allow skipping full syncs for normal bucket sync because that subverts disaster recovery and breaks symmetry within the zonegroup

Member Author

i don't think we should allow skipping full syncs for normal bucket sync because that subverts disaster recovery and breaks symmetry within the zonegroup

That is the part I still struggle to distinguish a little bit :)
Can you please elaborate a little more? do you mean if the source was within the same zonegroup we should disregard the feature and go for full sync?

Contributor

i'm referring to 'normal bucket sync' where the destination bucket is the same as the source bucket, just on another zone in the zonegroup. the kind you get by default in multisite without any bucket replication policy

i've been arguing that this replication should always be symmetric within the zonegroup, and for deprecation of features like sync policy and 'bucket sync disable' that break this symmetry. but this pr adds a new way to break the symmetry by skipping full sync

Member Author

I moved this to the zonegroup feature. I've also made normal bucket sync disregard the feature and force a full sync.

cout << " bucket sync checkpoint poll a bucket's sync status until it catches up to its remote\n";
cout << " bucket sync disable disable bucket sync\n";
cout << " bucket sync enable enable bucket sync\n";
cout << " bucket sync init initialize bucket sync indicated by --state flag\n";
Contributor

can you say more about the use case for bucket sync init --state incremental?

the purpose of bucket sync init is to restore consistency of a bucket that's either stuck behind or missing objects due to a bug or misconfiguration

if we skip the full sync, bucket sync init is just going to advance the incremental sync status markers to the end of the remote's bilogs. this means that it'll only replicate new changes that happen after the command runs. is that really the intent? that's very different from the existing command

if we really want to support this, i'd suggest naming it something completely different like bucket sync skip-existing. but again, this feature would subvert disaster recovery and break symmetry

Member Author

right, I just wanted to reuse the existing code, so I added the state as an input here. I guess we can name it something more general like bucket sync reset-state --state {state} so we let the admin switch between states as desired for now. what do you think?

but again, this feature would subvert disaster recovery and break symmetry

Is this something that again you think we should control by checking the source within the same zonegroup or are you referring to some other scenario? as we discussed the purpose of this PR is to subvert from DR and break symmetry but perhaps only when the source zone is in a different zonegroup?

Contributor

right, I just wanted to re-use the existing code so I just added the state as an input here to extend. I guess we can name it something general like bucket sync reset-state --state {state} so we let admin switch from states as wished for now. what do you think?

i don't think --state stopped is a sensible thing to support. bucket sync disable is the correct way to disable logging and force all zones to transition to the stopped state. if we just set the sync status to stopped, i'm guessing that data sync will just transition back to full sync:

    if (syncstopped) {
      // transition to StateStopped in RGWSyncBucketShardCR. if sync is
      // still disabled, we'll delete the sync status object. otherwise we'll
      // restart full sync to catch any changes that happened while sync was
      // disabled
      sync_info.state = rgw_bucket_shard_sync_info::StateStopped;

Member Author

that's right, thanks. I changed bucket sync init to bucket sync reset-state with only the limited states init, full-sync, and incremental-sync, so stopped can't pass through that path. I also made sure incremental-sync is not applied to the normal bucket sync scenario (within the zonegroup), so it doesn't break the symmetry.

if we skip the full sync, bucket sync init is just going to advance the incremental sync status markers to the end of the remote's bilogs. this means that it'll only replicate new changes that happen after the command runs. is that really the intent?

I believe we need to. In the current implementation, full sync doesn't run in parallel with incremental. In case the state was set to full and it's taking too long or was a mistake, and the admin wants to switch back to incremental, there should be a way to do so; otherwise it might get stuck forever.

Contributor

otherwise it might get stuck forever

as far as i know, the only way it would get stuck forever is if data sync loses track of the datalog entries that would trigger that bucket sync. a bucket sync reset-state wouldn't necessarily help with that either, because it doesn't write the corresponding datalog entries

instead, we have the bucket sync run command for this case. it manually processes the rest of bucket sync, including the transition from full to incremental. unlike the proposed bucket sync reset-state, this command preserves multisite's guarantee of eventual consistency of the bucket across zones

Member Author

unlike the proposed bucket sync reset-state

right, but I didn't mean to replace bucket sync run with bucket sync reset-state (maybe the naming is misleading). my intent here is to open a door into the sync state machine for the admin to change the state (to init, full-sync, or incremental) based on whatever factor they think is required, and then eventually run the sync manually with bucket sync run or let the normal sync handle it.
So the situations are:

  • init: this is where we want to clear the state to restore consistency of a bucket.
  • full-sync: this is where the feature is enabled but we want to simulate Batch Replication. For this, maybe we should require that the current state is incremental before the transition.
  • incremental: this is where the bucket is in full sync, but it's taking too long or was a mistake (stuck was bad wording, I agree :) ), and the admin wants to revert it back to incremental (with respect to "normal bucket replication within the zonegroup").

Can you share your thoughts on them? should we introduce each as a different command, or do you think we should not support some of them?

@clwluvw clwluvw force-pushed the bucket-full-sync-start branch from 221ba37 to e29aa67 Compare December 13, 2024 21:15
@clwluvw clwluvw requested a review from a team as a code owner December 13, 2024 21:15
Contributor

@anthonyeleven anthonyeleven left a comment

Docs lgtm

Comment on lines 72 to 80
cross-zonegroup-sync-inc
~~~~~~~~~~~~~~~~~~~~~~~~

This feature enables bucket replication between zonegroups to start incrementally,
rather than performing a full sync of existing objects.
When enabled, replication will only apply to changes and new objects created after
replication is enabled. This is especially useful for buckets with a large number
of existing objects, where you may not want to wait for a full sync to complete.
It reduces the initial load and accelerates the start of the replication process.
Contributor

this behavior should apply to bucket replication policy in general, whether it's same- or cross-zonegroup

worth mentioning that this is the aws default for bucket replication policy after their deprecation of ExistingObjectReplication. that's why it's enabled by default on new deployments, but requires existing deployments to opt in

because aws uses the term ExistingObjectReplication, maybe we should call this feature something like skip-existing-object-replication-policy?

Member Author

@clwluvw clwluvw Dec 17, 2024

i'm referring to 'normal bucket sync' where the destination bucket is the same as the source bucket, just on another zone in the zonegroup. the kind you get by default in multisite without any bucket replication policy
i've been arguing that this replication should always be symmetric within the zonegroup, and for deprecation of features like sync policy and 'bucket sync disable' that break this symmetry.

this behavior should apply to bucket replication policy in general, whether it's same- or cross-zonegroup

ok so I guess that was my misunderstanding of the latter quote. you meant we should deprecate/block PutBucketReplication API calls or radosgw-admin bucket sync group commands with --bucket when they target the same zonegroup (whether the bucket is the same or just the zone differs), so we can always guarantee the symmetry between zones in a zonegroup. or did you mean some other form of deprecation?

Member Author

can we find a way to make this specific to bucket replication policy instead of cross-zonegroup?

In that case, how are we going to handle the existing policies pointing to the same zonegroup but with different zones or filters? are we going to release a breaking change in favor of the new cross-zonegroup feature, or am I confused?

Contributor

when i talk about symmetry between zones, i'm referring to the contents of a specific bucket. in the absence of sync policy or bucket replication policy, a given bucket should (eventually) have the same contents on each zone

bucket replication policy doesn't break this symmetry unless the source and destination bucket are the same. so i support your commit 888ddcb to reject such policies 👍

Contributor

i've been slow to draft a design, but my general idea for cross-zonegroup replication is to add new primitives to enable/allow/forbid the sync relationships between zonegroups, similar to how existing sync policy treats zones

then we could deprecate the existing zone-level sync policy and other features that break per-bucket symmetry (like 'bucket sync disable' and our Zone extension to PutBucketReplication)

Comment on lines 2606 to 2608
bool should_full_sync(const RGWBucketInfo& source_info) const override {
// always init with full sync when the bucket is in my zonegroup (force symmetry within the zonegroup)
return source_info.zonegroup == my_zonegroup || !start_inc;
Contributor

can we find a way to make this specific to bucket replication policy instead of cross-zonegroup?

RGWRunBucketSourcesSyncCR may spawn several RGWSyncBucketCRs. some of those may correspond to cross-bucket replication from policy, but one should be the same-bucket replication that we run by default for disaster recovery. i think we need a way to control full sync differently for those two cases

so i don't think RGWDefaultSyncModuleInstance::should_full_sync() is the right place for this check, unless it can take should_full_sync(source_info, dest_info) and compare the buckets for equality. however, i think rgw supports bucket replication policy that names the same source bucket as the destination - do you know if aws allows that?

Member Author

@clwluvw clwluvw Dec 17, 2024

do you know if aws allows that?

No it doesn't, I had that check proposed here 888ddcb.

Basically, the pipes considered by RGWGetBucketPeersCR currently come in two types:

  • a sync policy (either on the bucket or the zonegroup - bear in mind that even without any zonegroup-level sync policy, RGWBucketSyncPolicyHandler() will create a default one:

        if (sync_policy.empty()) {
          RGWSyncPolicyCompat::convert_old_sync_config(zone_svc, sync_modules_svc, &sync_policy);
          legacy_config = true;
        }

    ). a policy containing no dest bucket causes RGWBucketSyncFlowManager::reflect() to reflect a policy pointing to the same bucket in the current RGW's zone running sync.
  • a hint created by RGWSI_Bucket_Sync_SObj::handle_bi_update() when a sync policy is created for a bucket pointing to that bucket.

So in all cases we always deal with sync policies, and distinguishing between "bucket replication" and "within-zonegroup, between-zone replication" is kinda tricky now, but it depends on what approach we want to take in the discussion above.

Contributor

No it doesn't, I had that check proposed here 888ddcb.

okay, i think that helps. if we enforce that source != dest for new policy, then should_full_sync(source_info, dest_info) should be sufficient. we'll continue doing full sync for existing policy where they're equal, and the full sync of new policy will depend on the zonegroup feature. does that make sense?

Contributor

given that we deprecated the only sync module that overrode should_full_sync(), i'm skeptical that we want to leave this behavior up to the sync module interface. we might just hard-code that bucket comparison in/under RGWRunBucketSourcesSyncCR

Member Author

we'll continue doing full sync for existing policy where they're equal, and the full sync of new policy will depend on the zonegroup feature. does that make sense?

Do we see a sync policy with src_bucket == dest_bucket but carrying filters (prefixes and/or tags) as symmetric or not?

Contributor

Do we see a sync policy with src_bucket == dest_bucket but carrying filters (prefixes and/or tags) as symmetric or not?

if it results in different contents on each zone, then it's not symmetric. but the reason to continue doing full sync in this case is for backward compatibility, not symmetry

Member Author

OK, let me break it down here:

  1. We reject policy creation (through the API and radosgw-admin) for a bucket that would result in replicating to a bucket with the same name (here we don't care about the zonegroup - cross-zonegroup it's logically impossible anyway, so both cases should be rejected).
  2. we would change how the sync policy on the zonegroup level works. basically, it should only control cross-zonegroup replication, not zones within the zonegroup - otherwise, that could break symmetry within the zonegroup (like having policies "enabling" replication of any source to a single bucket with a different name, or the like).
  3. at this point we can say the sync policy that RGWGetBucketPeersCR() gives us either points to the same bucket name with different zones in its zonegroup, or to a different bucket name (here it doesn't matter if the zonegroup is the same - although we don't support SRR yet, so perhaps we need to return HTTP 501 on policy creation for that).
  4. So: if src_bucket == dest_bucket, then it's symmetric and we ignore the zonegroup feature and rely on should_full_sync(); otherwise, we consider the zonegroup feature together with should_full_sync().

Did I miss anything?

Contributor

nice summary, i think we're on the same page!

@clwluvw clwluvw force-pushed the bucket-full-sync-start branch 3 times, most recently from aa51124 to e7264e5 Compare December 18, 2024 21:56
clwluvw added a commit to clwluvw/ceph that referenced this pull request Dec 19, 2024
As part of the recent update in ceph#61003,
a new command `bucket sync batch-replicate` has been introduced to allow
a bucket to perform batch replication when it is initially set to start
with incremental replication.

Additionally, a `bucket sync skip-full-sync` command has been introduced,
enabling the switch from full sync to incremental sync. However, this
command applies only to buckets where the source and destination names
differ, as symmetry must be maintained when the names are the same.

Fixes: https://tracker.ceph.com/issues/69309
Signed-off-by: Seena Fallah <[email protected]>
@clwluvw
Member Author

clwluvw commented Dec 19, 2024

Admin controls have been taken into #61139


Introduced a "skip-existing-object-replication-policy" feature,
allowing bucket replication to start incrementally for new changes
and objects, bypassing a full sync of existing objects. The feature
applies to both intra- and cross-zonegroup replication policies,
aligning with AWS behavior after the deprecation of their
ExistingObjectReplication option.

Fixes: https://tracker.ceph.com/issues/69159
Signed-off-by: Seena Fallah <[email protected]>
@clwluvw clwluvw force-pushed the bucket-full-sync-start branch from e7264e5 to 35557bb Compare January 8, 2025 18:45
clwluvw added a commit to clwluvw/ceph that referenced this pull request Jan 8, 2025
@github-actions

github-actions bot commented Mar 9, 2025

This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days.
If you are a maintainer or core committer, please follow-up on this pull request to identify what steps should be taken by the author to move this proposed change forward.
If you are the author of this pull request, thank you for your proposed contribution. If you believe this change is still appropriate, please ensure that any feedback has been addressed and ask for a code review.

@github-actions github-actions bot added the stale label Mar 9, 2025
@clwluvw clwluvw removed the stale label Mar 13, 2025
When setting up shard status in incremental mode, avoid initializing
with the max marker. Doing so would cause the destination zone to
skip already logged entries.

Instead, initialize with an empty position to allow the destination
zone to query and process all available entries from the start of
the incremental phase.

Signed-off-by: Seena Fallah <[email protected]>
@clwluvw clwluvw force-pushed the bucket-full-sync-start branch from 536e330 to b77e115 Compare April 7, 2025 14:16


@github-actions github-actions bot added the stale label Jun 28, 2025
@github-actions

This pull request has been automatically closed because there has been no activity for 90 days. Please feel free to reopen this pull request (or open a new one) if the proposed change is still appropriate. Thank you for your contribution!

@github-actions github-actions bot closed this Jul 28, 2025
@clwluvw clwluvw reopened this Jul 28, 2025
@clwluvw clwluvw removed the stale label Jul 28, 2025

@github-actions github-actions bot added the stale label Sep 26, 2025
@clwluvw clwluvw removed the stale label Oct 6, 2025

@github-actions github-actions bot added the stale label Dec 5, 2025

@github-actions github-actions bot closed this Jan 4, 2026