Add a small cache of AbstractSyntaxContext#336

Merged
rchande merged 3 commits into dotnet:master from rchande:ContextCaching
Feb 12, 2015

Conversation

@rchande
Contributor

@rchande rchande commented Feb 9, 2015

AbstractSymbolCompletionProvider computes a SyntaxContext and passes
it around while computing CompletionItems. Many CompletionProviders
derive AbstractSymbolCompletionProvider in order to share the logic for
linked files completion. This means that we'll repeatedly compute the
same SyntaxContext for a given Document and position. This shows up in
the 2-core typing tests. To avoid this, we'll simply cache the most
recent Document/position/SyntaxContext and use it if it matches the
current request.

I'm not sure how best to address the linked files case--currently, each provider iterates over all of the files, which means the one-element cache will never contain the right Document for the next provider. I suppose the cache could contain more elements, but how many?
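The one-element cache described above can be sketched roughly as follows. This is a simplified illustration, not the actual change: `Document`, `SyntaxContext`, and the factory delegate are hypothetical stand-ins for the real Roslyn types and the provider's `CreateContext`.

```csharp
using System;

// Stand-ins for the Roslyn types (illustrative only).
internal sealed class Document { }
internal sealed class SyntaxContext { }

internal class OneElementContextCache
{
    private readonly object _gate = new object();
    private readonly Func<Document, int, SyntaxContext> _createContext;

    // The most recently computed Document/position/SyntaxContext triple.
    private Document _cachedDocument;
    private int _cachedPosition;
    private SyntaxContext _cachedContext;

    public OneElementContextCache(Func<Document, int, SyntaxContext> createContext)
    {
        _createContext = createContext;
    }

    public SyntaxContext GetOrCreateContext(Document document, int position)
    {
        lock (_gate)
        {
            // Hit: the previous request was for the same document and position.
            if (_cachedDocument == document && _cachedPosition == position)
            {
                return _cachedContext;
            }
        }

        // Miss: recompute (the expensive step), then remember the result.
        var context = _createContext(document, position);

        lock (_gate)
        {
            _cachedDocument = document;
            _cachedPosition = position;
            _cachedContext = context;
        }

        return context;
    }
}
```

With several providers asking for the same document and position in a row, only the first call pays for the computation.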

Contributor

"derive"

@rchande
Contributor Author

rchande commented Feb 9, 2015

Want to take a look @pharring @heejaechang?

Contributor

I would recommend doing this: create a new private nested class with these three fields. You can then do atomic reads and updates using Interlocked instructions without requiring a gate. See, for example, what I did in the VB keyword recommender recently. (It didn't actually get checked in in the final version, but I sent out a CodeFlow review titled "[Closed] PERF: VB typing. Cache recommended keywords"; look at Iteration 1.)

Once you've done that, it becomes (relatively) easy to extend the cache to more than one item.
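The nested-class idea above might look like this. This is a hedged sketch under the same stand-in types as before, not the code from the referenced CodeFlow: the point is that all three fields live in one immutable object, so a single reference read or swap observes or replaces them atomically.

```csharp
using System;
using System.Threading;

// Stand-ins for the Roslyn types (illustrative only).
internal sealed class Document { }
internal sealed class SyntaxContext { }

internal class InterlockedContextCache
{
    // Immutable snapshot: Document, position, and context always travel
    // together, so no lock is needed to keep them consistent.
    private sealed class CacheEntry
    {
        public readonly Document Document;
        public readonly int Position;
        public readonly SyntaxContext Context;

        public CacheEntry(Document document, int position, SyntaxContext context)
        {
            Document = document;
            Position = position;
            Context = context;
        }
    }

    private CacheEntry _cache;
    private readonly Func<Document, int, SyntaxContext> _createContext;

    public InterlockedContextCache(Func<Document, int, SyntaxContext> createContext)
    {
        _createContext = createContext;
    }

    public SyntaxContext GetOrCreateContext(Document document, int position)
    {
        // One atomic read of the whole snapshot.
        var entry = Volatile.Read(ref _cache);
        if (entry != null && entry.Document == document && entry.Position == position)
        {
            return entry.Context;
        }

        // Miss: compute, then publish a new snapshot in one atomic swap.
        var context = _createContext(document, position);
        Interlocked.Exchange(ref _cache, new CacheEntry(document, position, context));
        return context;
    }
}
```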

Contributor

For linked files cases, you could grab the set of linked documents (an O(1) operation) and keep a cache for each of them. If the documents being processed are all in the current set of caches, just keep using them. If a document is processed that's not in the current set of caches, clear the cache and re-populate it with the new set of documents linked to the requested one.

This solves the "but how many?" problem posed in the description by caching exactly as many entries as are required (but no more), instead of trying to guess an appropriate constant.

Contributor Author

@pharring, I believe you're referring to src\Features\VisualBasic\Completion\KeywordRecommenders\Types\BuiltInTypesKeywordRecommender.vb, right? I'll follow that pattern.

@dpoeschl That sounds like a good approach.

Contributor

@rchande Yes, but not as it exists in the current sources. Look at Iteration 1 of the CodeFlow and the _keywordListCache field.

@dpoeschl added the Area-IDE and Tenet-Performance (regression in measured performance of the product from goals) labels on Feb 9, 2015
Keep a Dictionary of Document -> Task<SyntaxContext> and a position.
Every time we start computing items, check if the Documents to compute
against are all keys in this dictionary. If they are, return cached
tasks. Otherwise, clear the dictionary and immediately add the new set
of Documents as keys. As we iterate over the set of Documents to compute
completion for, we update the cache to point to the Task returned by
CreateContext.
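The commit message above can be sketched roughly as follows, again with hypothetical stand-in types and a caller-supplied `createContext` delegate in place of the real `CreateContext`; the actual change keys the dictionary inside the provider, while this sketch folds the check-and-repopulate logic into one method for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Stand-ins for the Roslyn types (illustrative only).
internal sealed class Document { }
internal sealed class SyntaxContext { }

internal class ContextTaskCache
{
    private readonly object _gate = new object();
    private int _position = -1;
    private Dictionary<Document, Task<SyntaxContext>> _cache =
        new Dictionary<Document, Task<SyntaxContext>>();

    public IReadOnlyList<Task<SyntaxContext>> GetContexts(
        IReadOnlyList<Document> documents,
        int position,
        Func<Document, int, Task<SyntaxContext>> createContext)
    {
        lock (_gate)
        {
            // Reuse the cached tasks only if the position matches and every
            // requested (linked) document is already a key in the dictionary.
            var allCached = position == _position
                && documents.All(d => _cache.ContainsKey(d));

            if (!allCached)
            {
                // Otherwise, clear the cache and repopulate it with tasks
                // for the new set of documents.
                _position = position;
                _cache = new Dictionary<Document, Task<SyntaxContext>>();
                foreach (var document in documents)
                {
                    _cache[document] = createContext(document, position);
                }
            }

            return documents.Select(d => _cache[d]).ToList();
        }
    }
}
```

Because the cache holds one task per linked document, its size always matches the number of linked files, which is the property the next comment calls out.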
@rchande
Contributor Author

rchande commented Feb 10, 2015

I like this approach because we don't have to choose a size for the cache--we'll always cache as many SyntaxContexts as there are linked documents.

@Pilchie
Member

Pilchie commented Feb 10, 2015

👍

Contributor

extract out this code?

@heejaechang
Contributor

What about the other completion providers? Do they calculate the syntax context themselves? I think at one point we went through the various completion providers and made them share the same syntax context.

@rchande
Contributor Author

rchande commented Feb 10, 2015

Long ago, I tried making the providers share the same context, but it was really messy. I do see a couple of other consumers of SyntaxContext. I could push this cache down into AbstractSyntaxContext itself to make sure it's really shared with everything.

rchande added a commit that referenced this pull request Feb 12, 2015
Add a small cache of AbstractSyntaxContext
@rchande rchande merged commit ed42b9e into dotnet:master Feb 12, 2015
dibarbet pushed a commit to dibarbet/roslyn that referenced this pull request Nov 21, 2025

Labels

Area-IDE Tenet-Performance Regression in measured performance of the product from goals.


7 participants