refactor(storage): Migrate conformance and system tests to StorageTransport with enhanced retry logic#7
thiyaguk09 wants to merge 29 commits into storage-node-18 from
Conversation
All 720 test cases have been fixed, and the code has been refactored.

- Replace high-level bucket and file calls with `storageTransport.makeRequest`.
- Fix Scenario 1 failures by implementing "create-or-get" logic for buckets.
- Resolve metageneration mismatch in `lock()` by dynamically fetching metadata.
- Normalize header keys to lowercase in transport response processing.
- Increase unit test coverage for `shouldRetry` logic and error handling.
- Fix authentication headers/token exchange in the transport layer.
- Revert to single-shot resumable upload to isolate Scenario 7 failures while debugging mid-stream offset recovery.
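The "create-or-get" bucket logic mentioned above could look roughly like the following sketch. `MakeRequest` and `createOrGetBucket` are hypothetical names, standing in for calls through `StorageTransport.makeRequest`; the actual URLs and error shape in the PR may differ.

```typescript
// Hypothetical stand-in for StorageTransport.makeRequest.
type MakeRequest = (opts: {method: string; url: string}) => Promise<{status: number}>;

// Try to create the bucket; if it already exists (409 Conflict),
// fall back to fetching it instead of failing the test setup.
async function createOrGetBucket(
  makeRequest: MakeRequest,
  bucketName: string
): Promise<'created' | 'existed'> {
  try {
    await makeRequest({method: 'POST', url: `/b?name=${bucketName}`});
    return 'created';
  } catch (err) {
    if ((err as {status?: number}).status === 409) {
      await makeRequest({method: 'GET', url: `/b/${bucketName}`});
      return 'existed';
    }
    throw err;
  }
}
```

This makes test setup idempotent: reruns against a testbench that still holds the bucket no longer fail Scenario 1.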
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly refactors the internal conformance tests by migrating them to use the StorageTransport layer directly.
Code Review
The pull request refactors Google Cloud Storage conformance tests to directly use StorageTransport.makeRequest for API calls, bypassing higher-level client methods, and updates the testbench Docker image. It also removes the docs CI job and related settings. Review comments highlight a bug in the create function's bucketExists check, where a 404 error was not handled, causing premature exit, and suggest an optimization to reuse the Gaxios instance in the authClient.request mock for efficiency.
```typescript
const gaxios = new Gaxios();
return gaxios.request(opts);
```
Creating a new Gaxios instance for every authClient.request call within this mock can be inefficient, especially if this mock is called frequently. While this is a test utility, reusing a single Gaxios instance might offer a slight performance improvement and better reflect how Gaxios is typically used in production code.
Consider initializing gaxios once outside the request function and reusing it.
```typescript
authClient.request = async opts => {
  const gaxios = new Gaxios();
  return gaxios.request(opts);
};
```

Force-pushed from `111ddca` to `5ff1473`.
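The reviewer's suggestion — initializing the client once and reusing it — could be sketched as below. A `FakeGaxios` stand-in class is used here instead of importing the real `gaxios` package, so the sketch is self-contained; in the actual test utility the real `Gaxios` class would be hoisted the same way.

```typescript
// Stand-in for the Gaxios client, so this sketch runs without the gaxios package.
class FakeGaxios {
  static instances = 0;
  constructor() {
    FakeGaxios.instances++;
  }
  async request<T>(opts: T): Promise<T> {
    return opts;
  }
}

// Hoisted once: every call to the mock reuses the same client instance,
// instead of constructing a new one per request.
const gaxios = new FakeGaxios();
const authClient = {
  request: async (opts: object) => gaxios.request(opts),
};
```

This also mirrors production usage, where a single transport client is typically shared across requests.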
/gemini review
Code Review
This pull request refactors the storage library to improve conformance testing, ensure Node 18 compatibility, and enhance retry logic with granular idempotency checks. Key updates include transitioning to full REST API paths and refactoring conformance tests to use raw API calls. Reviewers identified a bug in a setTimeout callback and a potential crash in File.isPublic due to unnormalized callbacks. Additionally, feedback suggests refining the decompress option handling in StorageTransport, avoiding response data pollution when attaching metadata, and optimizing performance by reducing unnecessary JSON parsing during retry evaluations.
```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any
...({decompress: false} as any),
```
Hardcoding decompress: false here overrides any decompress option passed in reqOpts. While this might be intended to avoid Gaxios's automatic decompression in favor of manual handling, it prevents users from explicitly requesting decompression for non-stream requests or when they want Gaxios to handle it. Consider only setting it to false if not explicitly provided in reqOpts.
```diff
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- ...({decompress: false} as any),
+ // eslint-disable-next-line @typescript-eslint/no-explicit-any
+ decompress: (reqOpts as any).decompress ?? false,
```
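The reason the suggestion uses `??` rather than `||` is worth spelling out: nullish coalescing only substitutes the default when the caller passed `null` or `undefined`, so an explicit `decompress: true` (or `false`) from `reqOpts` survives. A minimal sketch of that behavior:

```typescript
// `??` defaults only on null/undefined, so an explicit caller value wins.
function resolveDecompress(reqOpts: {decompress?: boolean}): boolean {
  return reqOpts.decompress ?? false;
}
```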
```typescript
if (data && typeof data === 'object') {
  data.headers = resp.headers;
  data.status = resp.status;
}
```
Attaching headers and status directly to the data object can have unexpected side effects, especially if data is an Array, a Buffer, or a frozen object. It also pollutes the response data with metadata that might conflict with actual data fields. A cleaner approach would be to return a structured object or ensure data is a plain object before modification.
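The "structured object" alternative the comment suggests could look like this sketch; `TransportResponse` and `wrapResponse` are hypothetical names, not part of the library's actual API.

```typescript
// Return metadata alongside the payload instead of mutating `data`,
// so Arrays, Buffers, and frozen objects stay untouched.
interface TransportResponse<T> {
  data: T;
  headers: Record<string, string>;
  status: number;
}

function wrapResponse<T>(
  data: T,
  headers: Record<string, string>,
  status: number
): TransportResponse<T> {
  return {data, headers, status};
}
```

Callers then read `resp.status` and `resp.headers` without any risk of those keys colliding with fields in the response body.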
```typescript
// Optimized Precondition Check
let bodyEtag = false;
try {
  const parsedBody = typeof data === 'string' ? JSON.parse(data) : data;
```
Attempting to JSON.parse(data) on every retryable error where data is a string can be a significant performance bottleneck if the request body is a large non-JSON string (e.g., a large file upload). Consider checking if the request was intended to be JSON or limiting the size of the string being parsed.
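A guarded parse along the lines the comment suggests might look like this sketch. The size threshold and the helper name are assumptions for illustration, not values from the PR.

```typescript
// Assumed threshold for illustration; the PR does not specify one.
const MAX_PARSE_BYTES = 64 * 1024;

// Only attempt JSON.parse when the body is small and plausibly JSON,
// avoiding a full parse of large non-JSON payloads (e.g. file uploads).
function tryParseJsonBody(data: unknown): object | undefined {
  if (typeof data === 'object' && data !== null) return data;
  if (typeof data !== 'string' || data.length > MAX_PARSE_BYTES) return undefined;
  const trimmed = data.trim();
  if (!trimmed.startsWith('{') && !trimmed.startsWith('[')) return undefined;
  try {
    return JSON.parse(trimmed);
  } catch {
    return undefined;
  }
}
```

The cheap prefix and length checks run in O(1)/O(n) over a small string, so the retry path no longer pays a parse cost proportional to the upload body.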
Force-pushed from `5ff1473` to `585bbb5`.
Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
Fixes #<issue_number_goes_here> 🦕