[ty] Remove invalid statement-keyword completions in for-statements #21979
Conversation
Diagnostic diff on typing conformance tests: No changes detected when running ty on typing conformance tests ✅
```rust
if is_specifying_for_statement_iterable(&parsed, offset, typed.as_deref()) {
    // Remove statement-keywords that can never be valid as we
    // expect an expression.
    completions.retain(|item| {
        let Some(kind) = item.kind else { return true };
        if kind != CompletionKind::Keyword {
            return true;
        }
        matches!(item.name.as_str(), "await" | "lambda")
    });
}
```
It might make more sense to make add_keyword_completions context-aware and avoid adding the incorrect completions to begin with rather than pruning them here after the fact.
Yeah I'm not quite sure what the right architecture ought to be here. I suppose we could end up ruling out other completions as well.
Maybe having the completion collector "know" what the context is, and then its add methods could choose to ignore completions added that are incompatible with that context. But I think that's a bigger refactor.
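As a rough illustration of the context-aware collector idea, the sketch below gives the collector's `add` method a syntactic context and lets it silently drop incompatible completions. All names here (`CompletionContext`, `Collector`, `Kind`) are hypothetical, not ty's actual API:

```rust
// Hypothetical sketch: a collector that knows its syntactic context and
// ignores completions that can never be valid there, so callers never
// need to prune after the fact. None of these names are ty's real types.

#[derive(Clone, Copy, PartialEq)]
enum CompletionContext {
    // Cursor sits in the `iter` position of a `for` statement, so only
    // expression-producing completions make sense.
    ForIterable,
    // Cursor sits where a new statement may begin.
    StatementStart,
}

#[derive(Clone, Copy, PartialEq)]
enum Kind {
    Keyword,
    Symbol,
}

struct Completion {
    name: &'static str,
    kind: Kind,
}

struct Collector {
    context: CompletionContext,
    items: Vec<Completion>,
}

impl Collector {
    fn new(context: CompletionContext) -> Self {
        Self { context, items: Vec::new() }
    }

    // Drop keyword completions that can never start an expression when
    // we're completing a `for` statement's iterable.
    fn add(&mut self, completion: Completion) {
        if self.context == CompletionContext::ForIterable
            && completion.kind == Kind::Keyword
            && !matches!(completion.name, "await" | "lambda")
        {
            return;
        }
        self.items.push(completion);
    }
}

fn main() {
    let mut collector = Collector::new(CompletionContext::ForIterable);
    collector.add(Completion { name: "while", kind: Kind::Keyword });
    collector.add(Completion { name: "lambda", kind: Kind::Keyword });
    collector.add(Completion { name: "range", kind: Kind::Symbol });
    let names: Vec<_> = collector.items.iter().map(|c| c.name).collect();
    println!("{names:?}"); // ["lambda", "range"]
}
```

The trade-off is the one noted above: every `add` call sites' context must be known up front, which is the bigger refactor.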
```rust
///
/// `typed` should be the text immediately before the
/// provided cursor `offset`.
fn typed_text_range(typed: Option<&str>, offset: TextSize) -> TextRange {
```
It probably makes sense here to make find_typed_text return a struct that contains both the text and the range so we don't have to make these computations again from the bare string.
Yeah I think that's a good idea. Doesn't have to be in this PR though.
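A minimal sketch of what returning a struct from `find_typed_text` might look like. The `TypedText` struct, the simplified `TextRange`, and the scanning logic are all assumptions for illustration, not the actual implementation:

```rust
// Hypothetical sketch: bundle the typed text together with its range so
// callers don't recompute the span from the bare string. `TypedText`,
// this simplified `TextRange`, and the ASCII-only scan are illustrative.

#[derive(Debug, Clone, Copy, PartialEq)]
struct TextRange {
    start: usize,
    end: usize,
}

#[derive(Debug, PartialEq)]
struct TypedText {
    text: String,
    range: TextRange,
}

// Return the identifier-like text immediately before `offset`, along
// with the range it occupies in `source`.
fn find_typed_text(source: &str, offset: usize) -> Option<TypedText> {
    let head = &source[..offset];
    // Scan back to the last character that can't be part of an identifier.
    let start = head
        .rfind(|c: char| !(c.is_ascii_alphanumeric() || c == '_'))
        .map(|i| i + 1)
        .unwrap_or(0);
    if start == offset {
        return None;
    }
    Some(TypedText {
        text: head[start..].to_string(),
        range: TextRange { start, end: offset },
    })
}

fn main() {
    let typed = find_typed_text("for x in ra", 11).unwrap();
    println!("{:?} {:?}", typed.text, typed.range); // "ra" TextRange { start: 9, end: 11 }
}
```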
Force-pushed from 2f71a24 to e3f05f5 (Compare)
In `for x in <CURSOR>` statements it's only valid to provide expressions that eventually evaluate to an iterable. While it's extremely difficult to know whether something can evaluate to an iterable in the general case, there are some suggestions we know can never lead to an iterable. Here we remove keywords which can never evaluate to an iterable.
Force-pushed from e3f05f5 to 57e711a (Compare)
BurntSushi left a comment
Thank you! I think this LGTM as-is. I left some comments, but I'm not sure there's anything actionable from them right now.
```rust
    matches!(
        node, ast::AnyNodeRef::StmtFor(stmt_for) if stmt_for.iter.range().contains_range(range)
    )
})
```
I remain slightly concerned about adding more and more of these "do we match this AST pattern?" checks. I think it's fine for now, but:
- I think eventually this is going to impact perf.
- The "check this case, then this case, and then this case" approach already buckled under its own weight for imports. Now we have a "look at the stuff we have once, and classify our case based on that" approach.
I don't think there's any action item here right now. Mostly just voicing my thoughts.
Gathering all my thoughts here instead of responding in the various threads.
I share your concerns and in theory I like the approach of building a context (before adding any completions) that has enough information such that we can later determine what completions should be added or ignored. Ignoring that this would be a big refactor, what slightly concerns me with that approach is that the size and the amount of information we would need to capture in the context could get out of hand, in order to support all the different cases where we might want to add or suppress different suggestions. So in practice I'm not quite sure how you would architect that to make it intuitive to work with.
To the extent it's doable, I do agree that we should at the very least stick to something like a single token-based pruning loop and a single AST-traversal pruning loop (I don't see how we can get away without both as long as we do any pruning at all) that either mutates completions in-place or builds some list of predicate functions that can be used to filter completions. This at least gets us away from a whole bunch of individual checks that each do their own AST traversal to remove bad suggestions. I'm not sure this would be quite as simple when we add new completions though (without bigger refactors), as that usually requires more work.
At some point a refactor in some direction is probably due though to make this scale nicer (both in terms of performance and readability), but to me it's not obvious what the right refactor to do here is.
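The "single pruning loop built from predicate functions" shape could look roughly like the following: context detection registers each predicate once, and one `retain` pass applies them all. Every name here is invented for illustration, not ty's actual code:

```rust
// Illustrative sketch of the "list of predicate functions" idea: each
// context check registers a predicate once, and a single pass applies
// all of them, instead of every check running its own retain() loop
// with its own AST traversal. All names here are hypothetical.

#[derive(Clone, Copy, PartialEq)]
enum Kind {
    Keyword,
    Symbol,
}

struct Completion {
    name: &'static str,
    kind: Kind,
}

// A completion survives only if every registered predicate keeps it.
type Pruner = Box<dyn Fn(&Completion) -> bool>;

fn prune(completions: &mut Vec<Completion>, pruners: &[Pruner]) {
    completions.retain(|c| pruners.iter().all(|keep| keep(c)));
}

fn main() {
    let mut completions = vec![
        Completion { name: "while", kind: Kind::Keyword },
        Completion { name: "lambda", kind: Kind::Keyword },
        Completion { name: "range", kind: Kind::Symbol },
    ];

    let mut pruners: Vec<Pruner> = Vec::new();
    // Registered once when we detect `for x in <CURSOR>`: only keywords
    // that can begin an expression survive.
    pruners.push(Box::new(|c| {
        c.kind != Kind::Keyword || matches!(c.name, "await" | "lambda")
    }));

    prune(&mut completions, &pruners);
    let names: Vec<_> = completions.iter().map(|c| c.name).collect();
    println!("{names:?}"); // ["lambda", "range"]
}
```

This keeps the per-check logic cheap (a closure) while concentrating the filtering cost in one place, which is the scaling property discussed above.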
```rust
        !ty.is_notimplemented(db)
    });
}
if is_specifying_for_statement_iterable(&parsed, offset, typed.as_deref()) {
```
I guess we don't technically have to call this unless we know we've added some keywords to the completions. And that only happens for scope based completions. So this will do an unnecessary AST check for object.attr and import completions.
But maybe we'll eventually rule out other things here besides keywords that we know can't be iterable. So I'm not sure it's worth trying to optimize this out?
I think part of my problem is that I don't have an intuition for the costs associated with calling covering_node. Maybe I'm tilting at windmills.
In `for x in <CURSOR>` statements it's only valid to provide expressions that eventually evaluate to an iterable. While it's extremely difficult to know whether something can evaluate to an iterable in the general case, there are some suggestions we know can never lead to an iterable. Most keywords are such, and hence we remove them here.
Summary
This suppresses statement-keywords from auto-complete suggestions in `for x in <CURSOR>` statements where we know they can never be valid, as whatever is typed has to (at some point) evaluate to an iterable. It handles the core issue from astral-sh/ty#1774, but there are a lot of related cases that probably have to be handled piecewise.
Test Plan
New tests and verifying in the playground.