Add a small cache of AbstractSyntaxContext #336
Conversation
AbstractSymbolCompletionProvider computes a SyntaxContext and passes that around while computing CompletionItems. Many CompletionProviders derive AbstractSymbolCompletionProvider in order to share the logic for linked files completion. This means that we'll repeatedly compute the same SyntaxContext for a given Document and position. This shows up in the 2-core typing tests. To avoid this, we'll simply cache the most recent Document/position/SyntaxContext and use it if it matches the current request.
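The one-element cache described above could be sketched roughly as follows. This is only an illustration: `MostRecentContextCache` and its members are hypothetical names, and `string`/`object` stand in for the real `Document` and `SyntaxContext` types, which aren't shown in this PR.

```csharp
using System;

// Hypothetical sketch of a most-recent-entry cache: remember the last
// (document, position, context) triple and reuse it on an exact match.
public sealed class MostRecentContextCache
{
    private readonly object _gate = new object();
    private string _document;        // stand-in for Document
    private int _position = -1;
    private object _context;         // stand-in for SyntaxContext

    public object GetOrCompute(string document, int position, Func<string, int, object> compute)
    {
        lock (_gate)
        {
            if (_context != null && _document == document && _position == position)
                return _context; // same request as last time: reuse the context
        }

        // Miss: compute outside the lock, then publish as the most recent entry.
        var context = compute(document, position);
        lock (_gate)
        {
            _document = document;
            _position = position;
            _context = context;
        }
        return context;
    }
}
```

With many providers asking for the same document/position in a row, only the first request pays for the computation.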
Want to take a look @pharring @heejaechang?
I would recommend doing this:
Create a new private nested class with these three fields. You can then do atomic reads and updates using Interlocked instructions without requiring a gate. See, for example, what I did in the VB keyword recommender recently (it didn't make it into the final checked-in version, but I sent out a CodeFlow titled "[Closed] PERF: VB typing. Cache recommended keywords". Look at Iteration 1).
Once you've done that, it becomes (relatively) easy to extend the cache to more than one item.
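The gate-free variant suggested here might look like the sketch below. Again, the type and member names are illustrative (the referenced CodeFlow iteration isn't available here), and `string`/`object` stand in for `Document`/`SyntaxContext`. The key point is that the entry is an immutable nested class, so a single reference read or write publishes all three fields atomically.

```csharp
using System;
using System.Threading;

public sealed class ContextCache
{
    // Immutable snapshot: one volatile reference read/write covers all fields,
    // so no lock is needed.
    private sealed class CacheEntry
    {
        public readonly string Document;  // stand-in for Document
        public readonly int Position;
        public readonly object Context;   // stand-in for SyntaxContext

        public CacheEntry(string document, int position, object context)
        {
            Document = document;
            Position = position;
            Context = context;
        }
    }

    private CacheEntry _entry;

    public object GetOrCompute(string document, int position, Func<string, int, object> compute)
    {
        var entry = Volatile.Read(ref _entry);
        if (entry != null && entry.Document == document && entry.Position == position)
            return entry.Context;

        var context = compute(document, position);
        // Unconditionally publish the newest entry; losing a race just means
        // recomputing once, which is harmless for a cache.
        Interlocked.Exchange(ref _entry, new CacheEntry(document, position, context));
        return context;
    }
}
```

Extending this to more than one item then only means holding an immutable array of entries behind the same single reference.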
For linked files cases, you could grab the set of linked documents (this is a O(1) operation) and keep around caches for each of them. If documents are processed that are in the current set of caches, then just keep using them. If a document is processed that's not in the current set of caches, then clear the cache and re-populate with the new set of documents linked to the requested one (etc.).
This solves the "but how many?" problem posed in the description: the cache holds exactly as many entries as are required (and no more), rather than guessing at an appropriate constant size.
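A rough sketch of that sizing rule, with hypothetical names and `string`/`object` standing in for `Document`/`SyntaxContext`: compare the requested linked set against the cached keys, reuse on a match, and otherwise clear and repopulate so the cache is always exactly as large as the linked set.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed class LinkedDocumentContextCache
{
    private Dictionary<string, object> _contexts = new Dictionary<string, object>();

    // Returns one context per linked document, reusing cached entries when the
    // requested set matches the cached keys exactly.
    public IReadOnlyDictionary<string, object> GetContexts(
        IReadOnlyCollection<string> linkedDocuments,
        Func<string, object> computeContext)
    {
        // Miss if the sets differ in size or any requested document is absent.
        if (_contexts.Count != linkedDocuments.Count ||
            !linkedDocuments.All(_contexts.ContainsKey))
        {
            // Clear and repopulate: the cache always holds exactly one entry
            // per document in the current linked set, no more and no fewer.
            _contexts = linkedDocuments.ToDictionary(doc => doc, computeContext);
        }

        return _contexts;
    }
}
```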
@rchande Yes, but not as it exists in the current sources. Look at Iteration 1 of the CodeFlow and the _keywordListCache field.
Keep a Dictionary of Document -> Task<SyntaxContext> and a position. Every time we start computing items, check if the Documents to compute against are all keys in this dictionary. If they are, return cached tasks. Otherwise, clear the dictionary and immediately add the new set of Documents as keys. As we iterate over the set of Documents to compute completion for, we update the cache to point to the Task returned by CreateContext.
I like this approach because we don't have to choose a size for the cache--we'll always cache as many SyntaxContexts as there are linked documents.
👍
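The Document -> Task&lt;SyntaxContext&gt; scheme described above could be sketched like this. The class and method names are illustrative, `string`/`object` stand in for `Document`/`SyntaxContext`, and the real change would live inside the provider rather than a standalone class.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class ContextTaskCache
{
    private readonly object _gate = new object();
    private int _position = -1;
    private Dictionary<string, Task<object>> _tasks = new Dictionary<string, Task<object>>();

    public Task<object> GetOrCreateContextTask(
        string document,
        int position,
        IReadOnlyList<string> linkedDocuments,
        Func<string, int, Task<object>> createContext) // stands in for CreateContext
    {
        lock (_gate)
        {
            // New position, or the cache doesn't cover the current linked set:
            // clear it and start caching tasks for the new set of documents.
            if (position != _position || !CoversLinkedSet(linkedDocuments))
            {
                _position = position;
                _tasks = new Dictionary<string, Task<object>>();
            }

            // As each document is processed, record the Task so later providers
            // asking for the same document reuse it instead of recomputing.
            if (!_tasks.TryGetValue(document, out var task))
            {
                task = createContext(document, position);
                _tasks[document] = task;
            }

            return task;
        }
    }

    private bool CoversLinkedSet(IReadOnlyList<string> linkedDocuments)
    {
        foreach (var doc in linkedDocuments)
        {
            if (!_tasks.ContainsKey(doc))
                return false;
        }
        return true;
    }
}
```

Caching the `Task` rather than the computed context also means concurrent requests for the same document share a single in-flight computation.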
extract out this code?
How about the other completion providers? Do they calculate the syntax context themselves? I think at one point we went through the various completion providers and made them share the same syntax context.
Long ago, I tried making the providers share the same context, but it was really messy. I do see a couple of other consumers of SyntaxContext. I could push this cache down into AbstractSyntaxContext itself to make sure it's really shared with everything.
I'm not sure how best to address the linked files case--currently, each provider iterates over all of the files, which means the one-element cache will never contain the right Document for the next provider. I suppose the cache could contain more elements, but how many?