Improve handling of arrays in @defer and @stream payloads #12923

jerelmiller merged 29 commits into release-4.1-baseline

Conversation
🦋 Changeset detected. Latest commit: 059426e. The changes in this PR will be included in the next version bump.
✅ Docs preview has no changes. The preview was not built because there were no changes.
```
@@ -0,0 +1,5 @@
---
"@apollo/client": minor
```
I'm marking this as a minor change. While it does fix the array merging behavior, it's a big enough difference to warrant a minor.
```ts
expect(outgoingRequestSpy).toHaveBeenCalledTimes(2);
});

it.each([["cache-first"], ["no-cache"]] as const)(
```
These were added from #11374 (with some tweaks)
```ts
Query: {
  fields: {
    friendList: {
      merge: (existing = [], incoming, { field }) => {
```
Should we consider exposing a utility function or helper of some kind for this? At the very least, we probably want to add something to our docs alongside @stream that describes how to do this in your own code. This demonstrates that you can keep the original cached items when using @stream instead of the default behavior of overwriting the array.
Let's document it first and figure out the perfect helper later.
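For the docs, a minimal sketch of the kind of field policy being discussed might look like the following. The dedupe-by-id logic and the `Friend` type are illustrative assumptions, not the final recommended helper; in a real app this function would sit under `typePolicies.Query.fields.friendList.merge` in the `InMemoryCache` configuration.

```typescript
type Friend = { id: string; name: string };

// Keep previously cached items while a @stream response comes in, instead of
// letting the incoming (shorter) array overwrite them. Incoming items win;
// cached items the stream has not (yet) replaced are retained at the end.
const friendListMerge = (
  existing: Friend[] = [],
  incoming: Friend[]
): Friend[] => {
  const merged = incoming.slice();
  const seen = new Set(incoming.map((f) => f.id));
  for (const item of existing) {
    if (!seen.has(item.id)) merged.push(item);
  }
  return merged;
};
```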
```ts
friendList: [
  { name: "Luke", id: "1" },
  { name: "Han", id: "2" },
  { name: "Leia Cached", id: "3" },
```
This behavior now relies on a merge function to keep the cached item, if the user wishes to.
```ts
Query: {
  fields: {
    friendList: {
      merge: (_, incoming) => incoming,
```
I added this here to silence the warning about data loss, but I think this might happen a lot with @stream, since a fetch against a @stream field will naturally return a shorter list than the already-cached one while the list streams in over time.
Should we consider muting the warning for @stream fields like we do for @defer? Or should we recommend adding these type policies to explicitly opt into the @stream behavior?
I'm more confused as to why this would warrant a data-loss warning in the first place, at least with the output we have here. It's just overwriting a reference array with a shorter reference array, without any real data loss?
```
console.warn
Cache data may be lost when replacing the friendList field of a Query object.
This could cause additional (usually avoidable) network requests to fetch data that were otherwise cached.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.friendList field, so InMemoryCache can safely merge these objects:
  existing: [
    { __ref: 'Friend:1' },
    { __ref: 'Friend:2' },
    { __ref: 'Friend:3' },
    [length]: 3
  ]
  incoming: [ { __ref: 'Friend:1' }, [length]: 1 ]
For more information about these options, please refer to the documentation:
  * Ensuring entity objects have IDs: https://go.apollo.dev/c/generating-unique-identifiers
  * Defining custom merge functions: https://go.apollo.dev/c/merging-non-normalized-objects
```
Discussed in person. We want to silence the warning if the field has an @stream directive on it, otherwise every @stream field will require a field policy which feels broken.
Let's put this on the table to discuss when you're back. Discovered something in the comments that makes me think this might make sense. See:
apollo-client/src/cache/inmemory/writeToStore.ts
Lines 863 to 864 in 6914a97
Seems custom merge functions are always expected for array fields (something we should at least document). Again, let's discuss when you're back to get a final decision. For now I'm going to leave this as-is.
```diff
 export class DeepMerger<TContextArgs extends any[] = any[]> {
   constructor(
-    private reconciler: ReconcilerFunction<TContextArgs> = defaultReconciler as any as ReconcilerFunction<TContextArgs>
+    private reconciler: ReconcilerFunction<TContextArgs> = defaultReconciler as any as ReconcilerFunction<TContextArgs>,
```
Since this is an internal class, I know we are free to update the API, so with that in mind: should reconciler be moved to options instead? It's a bit awkward in places where we need arrayMerge but not reconciler, since you have to pass undefined as the first argument.
Thoughts on changing it so we can do:
`new DeepMerger({ arrayMerge: 'truncate' })`?
We use reconciler exactly once in the codebase, so it won't change things around a lot. What do you think about just making it an option?
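To illustrate the proposed call shape, here is a simplified stand-in for `DeepMerger` with an options-bag constructor. This is not Apollo's actual implementation (the real class also takes a reconciler and handles many more cases); it only sketches how an `arrayMerge: 'combine' | 'truncate'` option could behave.

```typescript
type ArrayMergeStrategy = "combine" | "truncate";

interface DeepMergerOptions {
  arrayMerge?: ArrayMergeStrategy;
}

// Minimal illustrative merger: options object instead of positional arguments.
class DeepMerger {
  private arrayMerge: ArrayMergeStrategy;

  constructor(options: DeepMergerOptions = {}) {
    this.arrayMerge = options.arrayMerge ?? "combine";
  }

  merge(target: any, source: any): any {
    if (Array.isArray(target) && Array.isArray(source)) {
      // Merge overlapping indices element-by-element.
      const result = source.map((item, i) =>
        i < target.length ? this.merge(target[i], item) : item
      );
      // "truncate": the incoming array determines the final length.
      // "combine": keep existing trailing items beyond the incoming length.
      return this.arrayMerge === "truncate" ?
          result
        : result.concat(target.slice(source.length));
    }
    if (
      target &&
      source &&
      typeof target === "object" &&
      typeof source === "object"
    ) {
      const merged: Record<string, any> = { ...target };
      for (const key of Object.keys(source)) {
        merged[key] =
          key in target ? this.merge(target[key], source[key]) : source[key];
      }
      return merged;
    }
    return source;
  }
}

// Proposed call shape from the comment above:
new DeepMerger({ arrayMerge: "truncate" }).merge([1, 2, 3], [9]); // → [9]
```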
```ts
if (isNumericKey) {
  arrayMerge = "combine";
}
```
This feels like a workaround that might blow up in our faces at some point, e.g. when we'd want to truncate a deeply nested array but a parent is also an array.
Alternative suggestion: WDYT about adding an atPath option to DeepMerger?
The arrayMerge strategy would then only apply at the exact level where the new source is being inserted, with everything else always treated as combine. We would also save creating all these deeply nested objects just for the sake of merging, which are thrown away after the merge anyway.
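A rough sketch of the atPath idea, under the assumptions above (the function name, signature, and path representation are all hypothetical, not Apollo API): walk down the given path combining ancestors, and let the incoming value win only at the exact target level, truncating any existing array there.

```typescript
type PathSegment = string | number;

// Merge `incoming` into `existing` at `path`. Every ancestor container is
// "combined" (copied with siblings intact); only the value at the exact
// target level is replaced, so an array there is truncated to the incoming
// length without affecting arrays higher up the tree.
function mergeAtPath(
  existing: any,
  path: PathSegment[],
  incoming: any
): any {
  if (path.length === 0) {
    // At the insertion level: incoming wins outright.
    return incoming;
  }
  const [head, ...rest] = path;
  if (Array.isArray(existing)) {
    const copy = existing.slice(); // combine: keep sibling elements
    copy[head as number] = mergeAtPath(existing[head as number], rest, incoming);
    return copy;
  }
  return {
    ...existing, // combine: keep sibling fields
    [head]: mergeAtPath(existing?.[head], rest, incoming),
  };
}

const cached = { friendList: [{ id: 1 }, { id: 2 }, { id: 3 }] };
mergeAtPath(cached, ["friendList"], [{ id: 1 }]);
// → { friendList: [{ id: 1 }] }, truncated only at the target level
```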
```ts
  { __typename: "Friend", id: "2", name: "Han" },
  { __typename: "Friend", id: "3", name: "Leia" },
],
friendList: [{ __typename: "Friend", id: "1", name: "Luke (refetch)" }],
```
Tbh, for a refetch it feels as if it should be merged, not truncated. Or even better, truncated on the last chunk, not the first one. But I don't have a good idea on how to make that distinction.
For now I'm going to leave this, but let's make a final decision when you get back.
Co-authored-by: Lenz Weber-Tronic <[email protected]>
Fixes #12660
Fixes #11704
Fixes issues where arrays returned in `@defer` payloads would maintain cached items when the deferred array was shorter than the cached array. This change also affects `@stream` arrays: when a `@stream` chunk is processed, the array is truncated to the length of the stream payload. This ensures the length of the final result equals the length of the server array, much like it would if a plain `refetch` were issued.