[ty] Remove invalid statement-keyword completions in for-statements #21979

Merged
BurntSushi merged 1 commit into astral-sh:main from RasmusNygren:filter-incorrect-keyword-completions
Dec 15, 2025
Conversation

@RasmusNygren
Contributor

@RasmusNygren RasmusNygren commented Dec 14, 2025

In `for x in <CURSOR>` statements it's only valid to provide expressions that eventually evaluate to an iterable. While it's extremely difficult to know whether something can evaluate to an iterable in the general case, there are some suggestions we know can never lead to an iterable. Most keywords are such, and hence we remove them here.

Summary

This suppresses statement keywords from the auto-complete suggestions in `for x in <CURSOR>` statements where we know they can never be valid, since whatever is typed has to (at some point) evaluate to an iterable.

It handles the core issue from astral-sh/ty#1774, but there are a number of related cases that will probably have to be handled piecewise.
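As a standalone sketch, the pruning step boils down to something like the following. The `Completion` and `CompletionKind` types here are simplified stand-ins for illustration, not ty's actual definitions:

```rust
// Simplified stand-ins for ty's completion types (illustrative only).
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum CompletionKind {
    Keyword,
    Variable,
}

pub struct Completion {
    pub name: &'static str,
    pub kind: Option<CompletionKind>,
}

/// Drops statement keywords (`if`, `while`, `return`, ...) from the
/// candidate list, keeping expression keywords like `await` and `lambda`
/// that can still begin an iterable expression, plus all non-keywords.
pub fn prune_for_iterable(completions: &mut Vec<Completion>) {
    completions.retain(|item| {
        // Items without a known kind are kept conservatively.
        let Some(kind) = item.kind else { return true };
        if kind != CompletionKind::Keyword {
            return true;
        }
        matches!(item.name, "await" | "lambda")
    });
}
```

`await` and `lambda` survive because both introduce expressions (e.g. `for x in await fetch():` inside an async function), whereas statement keywords can never appear in that position.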

Test Plan

New tests and verifying in the playground.

@astral-sh-bot

astral-sh-bot bot commented Dec 14, 2025

Diagnostic diff on typing conformance tests

No changes detected when running ty on typing conformance tests ✅

@astral-sh-bot

astral-sh-bot bot commented Dec 14, 2025

mypy_primer results

Changes were detected when running on open source projects
pandas-stubs (https://github.com/pandas-dev/pandas-stubs)
+ pandas-stubs/_typing.pyi:1218:16: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
- Found 5136 diagnostics
+ Found 5137 diagnostics

No memory usage changes detected ✅

@AlexWaygood added the server (Related to the LSP server) and ty (Multi-file analysis & type inference) labels Dec 14, 2025
Comment on lines 470 to 480
if is_specifying_for_statement_iterable(&parsed, offset, typed.as_deref()) {
// Remove statement-keywords that can never be valid as we
// expect an expression.
completions.retain(|item| {
let Some(kind) = item.kind else { return true };
if kind != CompletionKind::Keyword {
return true;
}
matches!(item.name.as_str(), "await" | "lambda")
});
}
Contributor Author


It might make more sense to make add_keyword_completions context-aware and avoid adding the incorrect completions to begin with rather than pruning them here after the fact.

Member


Yeah I'm not quite sure what the right architecture ought to be here. I suppose we could end up ruling out other completions as well.

Maybe having the completion collector "know" what the context is, and then its add methods could choose to ignore completions added that are incompatible with that context. But I think that's a bigger refactor.
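The context-aware collector idea floated here could look roughly like this hypothetical sketch; none of these names (`CompletionContext`, `Collector`, `add_keyword`) exist in ty today:

```rust
// Hypothetical design sketch: a collector that knows its context and
// rejects incompatible completions at insertion time.
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum CompletionContext {
    /// Cursor is where a `for` statement expects its iterable expression.
    ForIterable,
    /// Cursor is at statement level; any statement keyword may apply.
    Statement,
}

pub struct Collector {
    context: CompletionContext,
    pub names: Vec<&'static str>,
}

impl Collector {
    pub fn new(context: CompletionContext) -> Self {
        Collector { context, names: Vec::new() }
    }

    /// The add method itself ignores keywords incompatible with the
    /// current context, so callers never need to prune after the fact.
    pub fn add_keyword(&mut self, name: &'static str) {
        if self.context == CompletionContext::ForIterable
            && !matches!(name, "await" | "lambda")
        {
            return;
        }
        self.names.push(name);
    }
}
```

The trade-off, as noted above, is that the context has to be computed before any completions are added, which is the bigger refactor.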

///
/// `typed` should be the text immediately before the
/// provided cursor `offset`.
fn typed_text_range(typed: Option<&str>, offset: TextSize) -> TextRange {
Contributor Author


It probably makes sense here to make find_typed_text return a struct that contains both the text and the range so we don't have to make these computations again from the bare string.

Member


Yeah I think that's a good idea. Doesn't have to be in this PR though.
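The struct suggested here might look something like the following sketch. `TypedText` is a hypothetical name, and plain byte offsets stand in for ty's `TextSize`/`TextRange`:

```rust
// Hypothetical bundling of the typed text with its range, so callers
// never recompute the range from the bare string.
pub struct TypedText {
    pub text: String,
    pub start: usize,
    pub end: usize,
}

/// Scans backwards from `offset` over identifier characters and returns
/// both the typed prefix and the range it occupies, or `None` if the
/// cursor doesn't immediately follow identifier text. (Assumes ASCII
/// separators for simplicity; real code would respect UTF-8 widths.)
pub fn find_typed_text(source: &str, offset: usize) -> Option<TypedText> {
    let head = &source[..offset];
    let start = head
        .rfind(|c: char| !(c.is_alphanumeric() || c == '_'))
        .map_or(0, |i| i + 1);
    if start == offset {
        return None;
    }
    Some(TypedText {
        text: head[start..].to_string(),
        start,
        end: offset,
    })
}
```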

@RasmusNygren force-pushed the filter-incorrect-keyword-completions branch from 2f71a24 to e3f05f5 on December 14, 2025 21:25
In `for x in <CURSOR>` statements it's only valid to provide
expressions that eventually evaluate to an iterable. While
it's extremely difficult to know if something can evaluate to
an iterable in the general case, there are some suggestions we know
can never lead to an iterable. Here we remove keywords
which can never evaluate to an iterable.
@RasmusNygren force-pushed the filter-incorrect-keyword-completions branch from e3f05f5 to 57e711a on December 14, 2025 21:32
@RasmusNygren RasmusNygren marked this pull request as ready for review December 14, 2025 21:57
Member

@BurntSushi BurntSushi left a comment


Thank you! I think this LGTM as-is. I left some comments, but I'm not sure there's anything actionable from them right now.

matches!(
node, ast::AnyNodeRef::StmtFor(stmt_for) if stmt_for.iter.range().contains_range(range)
)
})
Member


I remain slightly concerned about adding more and more of these "do we match this AST pattern?" checks. I think it's fine for now, but:

  1. I think eventually this is going to impact perf.
  2. The "check this case, then this case, and then this case" approach already buckled under its own weight for imports. Now we have a "look at the stuff we have once, and classify our case based on that" approach there.

I don't think there's any action item here right now. Mostly just voicing my thoughts.

Contributor Author

@RasmusNygren RasmusNygren Dec 15, 2025


Gathering all my thoughts here instead of responding in the various threads.

I share your concerns and in theory I like the approach of building a context (before adding any completions) that has enough information such that we can later determine what completions should be added or ignored. Ignoring that this would be a big refactor, what slightly concerns me with that approach is that the size and the amount of information we would need to capture in the context could get out of hand, in order to support all the different cases where we might want to add or suppress different suggestions. So in practice I'm not quite sure how you would architect that to make it intuitive to work with.

To the extent it's doable, I do agree that we should at the very least stick to something like a single token-based pruning loop and a single AST-traversal pruning loop (I don't see how we can get away without both as long as we do any pruning at all) that either mutate completions in-place or build some list of predicate functions that can be used to filter completions. This at least gets us away from a whole bunch of individual checks that all do their own AST traversal to remove bad suggestions. I'm not sure this would be quite as simple to do when we add new completions though (without bigger refactors), as that usually requires more work.

At some point a refactor in some direction is probably due though to make this scale nicer (both in terms of performance and readability), but to me it's not obvious what the right refactor to do here is.
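The "list of predicate functions" idea mentioned above could be sketched like this. Everything here is hypothetical illustration, not ty code: context analysis would run once, produce filters, and a single pass would apply them all, rather than each check doing its own AST traversal:

```rust
// Hypothetical sketch: accumulate context-derived filters, then prune
// all completions in one pass.
pub struct Completion {
    pub name: &'static str,
}

/// A filter returns `true` to keep a completion.
pub type Filter = Box<dyn Fn(&Completion) -> bool>;

/// Keeps only the completions accepted by every filter.
pub fn prune(completions: &mut Vec<Completion>, filters: &[Filter]) {
    completions.retain(|c| filters.iter().all(|keep| keep(c)));
}
```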

!ty.is_notimplemented(db)
});
}
if is_specifying_for_statement_iterable(&parsed, offset, typed.as_deref()) {
Member


I guess we don't technically have to call this unless we know we've added some keywords to the completions. And that only happens for scope based completions. So this will do an unnecessary AST check for object.attr and import completions.

But maybe we'll eventually rule out other things here besides keywords that we know can't be iterable. So I'm not sure it's worth trying to optimize this out?

I think part of my problem is that I don't have an intuition for the costs associated with calling covering_node. Maybe I'm tilting at windmills.

@BurntSushi BurntSushi merged commit d6a5bbd into astral-sh:main Dec 15, 2025
67 of 68 checks passed
dcreager added a commit that referenced this pull request Dec 15, 2025
* origin/main:
  [ty] Consistent ordering of constraint set specializations, take 2 (#21983)
  [ty] Remove invalid statement-keyword completions in for-statements (#21979)
  [ty] Avoid caching trivial is-redundant-with calls (#21989)
@RasmusNygren RasmusNygren deleted the filter-incorrect-keyword-completions branch January 15, 2026 18:26