This repository was archived by the owner on Feb 7, 2026. It is now read-only.
`bigQuery.createQueryStream` seems to load the entire result set into memory before the stream starts actually piping data to the downstream streams.
Environment details

- OS: macOS 12.1
- Node.js version: 14.18.1
- npm version: 6.14.15
- @google-cloud/bigquery version: 5.10.0
Steps to reproduce
Using this test script, I can see that over 300 MB of data is loaded into memory before the stream starts piping to the downstream streams. And I am only selecting one column, so that is a lot of records.
If I log each entry in the transform stream, the data also seems to arrive in batches: it pauses for a while and then suddenly starts piping again. This makes me think that internally a whole page is loaded into memory and then piped to the readable stream, but that might not be the issue.