Closed
split executor from graphql-js
slightly more cross platform
will fix with deprecation of getOperationRootType
The codebase should refer to functions that execute the query, mutation, and/or subscription root fields as such, rather than as functions that execute the operations themselves. Executing a query or mutation returns a map of data and errors, while executing the root fields returns the data of the root fields.
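As a rough sketch of the distinction (names and shapes here are illustrative, not the actual graphql-executor signatures):

```typescript
// Illustrative sketch only; not the real graphql-executor implementation.
interface ExecutionResult {
  data: Record<string, unknown> | null;
  errors?: Array<{ message: string }>;
}

// "Executing the root fields" produces just the root-field data...
type ExecuteRootFields = () => Record<string, unknown>;

// ...while "executing the query or mutation" wraps that data in a
// { data, errors } map, collecting any errors raised along the way.
function executeQueryOrMutation(executeRootFields: ExecuteRootFields): ExecutionResult {
  try {
    return { data: executeRootFields() };
  } catch (e) {
    return { data: null, errors: [{ message: String(e) }] };
  }
}
```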
The `subscribe` method calls `createSourceEventStream` and `execute`, each of which normalizes the request properties separately. By normalizing requests within the `subscribe` function, we can pass the normalized arguments to the implementations for `createSourceEventStream` and `execute`, i.e. the new functions `createSourceEventStreamImpl` and `executeQueryOrMutation`.
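The normalize-once pattern can be sketched as follows (function names follow the PR description; argument shapes and bodies are stand-ins, not the real implementations):

```typescript
// Illustrative sketch of normalizing request arguments once in subscribe().
interface RawArgs {
  schema: unknown;
  document: unknown;
  variableValues?: Record<string, unknown> | null;
}
interface NormalizedArgs {
  schema: unknown;
  document: unknown;
  variableValues: Record<string, unknown>;
}

function normalizeArgs(args: RawArgs): NormalizedArgs {
  return {
    schema: args.schema,
    document: args.document,
    variableValues: args.variableValues ?? {},
  };
}

// Both implementations receive already-normalized arguments.
async function createSourceEventStreamImpl(
  args: NormalizedArgs,
): Promise<AsyncIterable<unknown>> {
  return (async function* () {
    yield { event: 1 }; // stand-in source event
  })();
}
async function executeQueryOrMutation(args: NormalizedArgs, rootValue: unknown) {
  return { data: rootValue }; // stand-in execution
}

async function subscribe(raw: RawArgs) {
  const args = normalizeArgs(raw); // normalized exactly once
  const stream = await createSourceEventStreamImpl(args);
  // Map each source event through the "execute" implementation.
  return (async function* () {
    for await (const event of stream) yield executeQueryOrMutation(args, event);
  })();
}
```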
The function is used only by `buildExecutionContext` and can be called there rather than exported.
...into the same file!
* Move non-class Executor functions to end of file
* Move memoized collectSubFields adjacent to class functions
* Add buildPerPayloadExecutionContext function
* Add main exports to executor ...in preparation for wrapping as class
* Refactor Executor as class
* Export internal Executor class for experimentation

  This class, similar to the Parser class, is exported only to assist people in implementing their own executors without duplicating too much code, and should be used only as a last resort for cases such as experimental syntax or features that cannot be contributed upstream. It is still part of the internal API (so far), so changes to it are never considered breaking changes. The `graphql-executor` package must therefore be pinned to a specific patch version, e.g. `0.0.4`.

* Fix README
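For example, a consumer relying on the internal Executor class would pin the dependency to an exact patch version in package.json (no `^` or `~` range):

```json
{
  "dependencies": {
    "graphql-executor": "0.0.4"
  }
}
```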
To speed up CI, benchmarks irrelevant to execution can be retired.
to reference graphql-executor instead of graphql-js
yaacovCR added a commit that referenced this pull request on Mar 18, 2022
Extracted from #154; relies on bundlers that can be used for batched streaming as well.
When using `@stream`, the `data` property will be an array only when batching. When/if graphql-js changes to always send `data` as an array for streamed results, graphql-executor will do so as well. Until then, `data` will be an array only when necessary, i.e. when batch streaming.
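The payload shapes below are illustrative only (the `atIndex` field follows the proposal discussed later in this thread). A non-batched payload keeps `data` as a single item, with its index in `path`:

```json
{ "data": "apple", "path": ["scalarList", 0], "hasNext": true }
```

whereas a batched payload sends `data` as an array, with `atIndex` marking the starting position:

```json
{ "data": ["banana", "coconut"], "path": ["scalarList"], "atIndex": 1, "hasNext": false }
```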
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Fix batched parallel stream error with defer where the initial responseNode was not added to the list. Using identical helpers for all bundlers makes this code better tested all around.
* Add changeset
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
The Publisher and Bundler abstractions are such core parts of the execution algorithm that they probably belong within the execution folder (following the pattern of the map/flatten asyncIterable helpers).
to distinguish a single null item versus a null item in a list of non-nullable items where the null bubbles up to the list itself
...rather than just AsyncIterables to facilitate chunking
…ayloads rather than just a single payload
5b32993 to 63873a9
Defer with fragments

Operation:

```graphql
query HeroNameQuery {
  hero {
    id
    ...TopFragment @defer(label: "DeferTop")
  }
}
fragment TopFragment on Hero {
  name
  ...NestedFragment @defer(label: "DeferNested")
}
fragment NestedFragment on Hero {
  friends {
    name
  }
}
```

Response:
Defer with slow field in initial payload

Operation:

```graphql
query HeroNameQuery {
  hero {
    id
    ...NameFragment @defer
  }
}
fragment NameFragment on Hero {
  slowField
  friends {
    ...NestedFragment @defer
  }
}
fragment NestedFragment on Friend {
  name
}
```

Response: Results in two sets of payloads.
Stream

Operation:

```graphql
{
  scalarList @stream
}
```

or

```graphql
{
  scalarList @stream(initialCount: 1)
}
```

Response:
Stream with chunks of greater than 1

Operation:

```graphql
{
  scalarList @stream(maxChunkSize: 2)
}
```

Response: The first stream chunk hits its max.
Stream in correct order even when the first item is slow

Operation:

```graphql
query {
  asyncSlowList @stream(initialCount: 0) {
    name
    id
  }
}
```

Response: Note that a slow item 1 will move all the items to the second set of payloads.
Stream in parallel

Operation:

```graphql
query {
  asyncSlowList @stream(initialCount: 0, maxChunkSize: 1, inParallel: true) {
    name
    id
  }
}
```

Response: When streaming in parallel, the first set of payloads includes everything except the slow item 1.
Stream with asyncIterableList

Operation:

```graphql
query {
  asyncIterableList @stream {
    name
    id
  }
}
```

or

```graphql
query {
  asyncIterableList @stream(maxChunkSize: 1) {
    name
    id
  }
}
```

or

```graphql
query {
  asyncIterableList @stream(maxChunkSize: 2, maxInterval: 1) {
    name
    id
  }
}
```

Response: With AsyncIterables, each item will have to be in its own set of payloads unless chunked (the default maxChunkSize is 1). Even when chunked, if the interval elapses, the buffered items are sent before the chunk maxes out. So a maxChunkSize greater than 1 can still end up sending smaller payloads if maxInterval is exceeded.
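The maxChunkSize/maxInterval interplay described above can be sketched roughly as follows (a simplified illustration, not graphql-executor's actual Bundler; note this sketch only checks the deadline as items arrive, rather than flushing on a timer):

```typescript
// Simplified sketch: group items from an AsyncIterable into chunks of at most
// maxChunkSize, flushing early if maxInterval ms elapse between flushes.
async function* chunk<T>(
  source: AsyncIterable<T>,
  maxChunkSize: number,
  maxInterval: number,
): AsyncGenerator<Array<T>> {
  let buffer: T[] = [];
  let deadline = Date.now() + maxInterval;
  for await (const item of source) {
    buffer.push(item);
    // Flush on a full chunk OR an elapsed interval; the latter is why a
    // maxChunkSize greater than 1 can still emit smaller payloads.
    if (buffer.length >= maxChunkSize || Date.now() >= deadline) {
      yield buffer;
      buffer = [];
      deadline = Date.now() + maxInterval;
    }
  }
  if (buffer.length > 0) yield buffer; // trailing partial chunk
}
```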
Stream with asyncIterableList with chunk size > 1

Operation:

```graphql
query {
  asyncIterableList @stream(maxChunkSize: 2) {
    name
    id
  }
}
```

Response:
Stream where a non-nullable item returns null

Operation:

```graphql
query {
  asyncIterableNonNullError @stream(initialCount: 1) {
    name
  }
}
```

Response:
Returning to simple fork.
Some of these were discussed during the Feb 2022 WG, some are new, possibly to be discussed in March :)
Execution:
= resolver functions for list fields should return AsyncIterables of Iterables, rather than just AsyncIterables to allow chunking
= change the AsyncGenerator returned by `execute` to return an iterable of all available payloads instead of just the next payload

@stream:
= change stream payloads to return an array for `data`, with a new `atIndex` property to specify the initial index for the array
= stream payloads currently act as error boundaries (as opposed to borking the entire stream/future payloads)
= add `maxChunkSize` Int argument for streaming in chunks
= add `maxInterval` Int argument to send chunks before `maxChunkSize` is reached if `maxInterval` ms have elapsed
= add `inParallel` Boolean argument to allow sending items whenever ready, using `atIndices` instead of `atIndex` to specify the indices of what is being returned
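The first proposal above might look like this from a resolver author's perspective (an illustrative sketch only; the field and type names are made up):

```typescript
// A list-field resolver returning an AsyncIterable of Iterables: each yielded
// Iterable is one ready batch of list items, so the executor can stream
// several items per payload instead of exactly one at a time.
async function* friendsResolver(): AsyncGenerator<Iterable<{ name: string }>> {
  yield [{ name: 'Han' }, { name: 'Leia' }]; // first chunk: already available
  yield [{ name: 'C-3PO' }];                 // later chunk: arrives afterwards
}
```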
Related:
graphql/defer-stream-wg#32
graphql/defer-stream-wg#23
graphql/defer-stream-wg#17