
Make Bucket.delete() in storage more restricted / opinionated #564


Description

@dhermes

UPDATED 1/26/14 by @dhermes

After discussion with @jgeewax, the approach will be (see the sketch after this list):

  • Cap at 512 object deletes (or a similar limit) and cowardly refuse if more objects exist
  • Don't try to handle retries or 409 Conflict errors; just raise the error with enough information for the caller
  • Re: eventual consistency, just ignore (maybe log?) 404s for object deletes
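
For reference, here is a minimal sketch of that approach written against today's `google-cloud-storage` client; the `MAX_DELETE` constant and the `force_delete` helper are illustrative names, not the library's actual API.

```python
# Sketch only: illustrates the agreed approach, not the shipped implementation.
from google.cloud import storage
from google.cloud.exceptions import NotFound

MAX_DELETE = 512  # cowardly refuse above this many objects


def force_delete(bucket: storage.Bucket) -> None:
    """Delete every object in ``bucket``, then the bucket itself.

    Non-atomic: concurrent writers can cause this to fail partway through.
    """
    # Fetch one more than the cap so we can tell "at the cap" from "over it".
    blobs = list(bucket.list_blobs(max_results=MAX_DELETE + 1))
    if len(blobs) > MAX_DELETE:
        raise ValueError(
            "Refusing to delete bucket %r: it contains more than %d objects."
            % (bucket.name, MAX_DELETE)
        )

    for blob in blobs:
        try:
            blob.delete()
        except NotFound:
            # Eventual consistency: the listing may include objects that are
            # already gone; ignore (or log) the 404 and keep going.
            pass

    # A 409 Conflict here (e.g. an object was written concurrently, so the
    # bucket is not actually empty) is deliberately not retried; it
    # propagates to the caller with the server's error message.
    bucket.delete()
```

Note that the helper is not atomic; as @thobrla points out below, concurrent writers can make it fail partway through.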

See @thobrla's comment:

...If you provide this functionality, it must be made extremely clear that this is a
non-atomic operation that is not expected to succeed if there are concurrent writers to
the bucket. The danger here is providing the illusion of atomicity to the client, especially
because that illusion is likely to work at small scale and then fail cryptically at large scale.

Metadata

Labels

api: storage (Issues related to the Cloud Storage API)
