feat(linter): support shared rule state in ctx #5770

shulaoda wants to merge 3 commits into oxc-project:main
How about this? @Boshen
Boshen left a comment:
🤔 I'm not sure this is going to work. Can you continue prototyping by collecting state in `run`, and then inspecting that state in a `run_once` moved to the end? We probably need to implement this prototype with two rules.
I'd imagine the states need to be isolated: rule A cannot read data from rule B.
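The isolation requirement above could be prototyped roughly like this: each rule accumulates into its own private slot during `run`, and only inspects that slot in the end-of-file pass. This is a hypothetical sketch, not oxc's actual API; `RuleState`, `entry`, and `take` are illustrative names.

```rust
use std::any::Any;
use std::collections::HashMap;

// Sketch: one private state slot per rule, keyed by rule name,
// so rule A can never read rule B's data.
#[derive(Default)]
struct RuleState {
    slots: HashMap<&'static str, Box<dyn Any>>,
}

impl RuleState {
    // Called from a rule's `run` to accumulate per-file state.
    fn entry<T: Default + 'static>(&mut self, rule: &'static str) -> &mut T {
        self.slots
            .entry(rule)
            .or_insert_with(|| Box::new(T::default()))
            .downcast_mut::<T>()
            .expect("state type is fixed per rule")
    }

    // Inspected once, after all nodes have been visited
    // (the `run_once`-moved-to-the-end phase).
    fn take<T: 'static>(&mut self, rule: &'static str) -> Option<T> {
        self.slots.remove(rule)?.downcast::<T>().ok().map(|b| *b)
    }
}

fn main() {
    let mut state = RuleState::default();
    // Rule A counts something during `run`.
    *state.entry::<usize>("rule-a") += 2;
    *state.entry::<usize>("rule-a") += 3;
    // Rule B keeps its own, isolated counter.
    *state.entry::<usize>("rule-b") += 1;
    // Finalization: each rule sees only its own state.
    assert_eq!(state.take::<usize>("rule-a"), Some(5));
    assert_eq!(state.take::<usize>("rule-b"), Some(1));
}
```

Using `Box<dyn Any>` keeps the sketch generic over each rule's state type; a real implementation would more likely give each rule a typed field and avoid the downcast entirely.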
Force-pushed from 9b8f913 to c904595
CodSpeed Performance Report: Merging #5770 will not alter performance.
Let me reconsider; it seems it's not feasible and a bit difficult. 😩
I've got some ideas on how to implement this. I'll take a closer look tomorrow.
> Related to #5770
>
> ## What This PR Does
>
> Moves state that is constant over a linted file out of `LintContext` and into a shared `ContextHost` struct, turning `LintContext` into a [flyweight](https://en.wikipedia.org/wiki/Flyweight_pattern). This brings `LintContext` from 144 bytes down to 96.
>
> `Linter::run` iterates over `(rule, ctx)` pairs in a very tight loop, and each rule instance gets its own clone of `ctx`. A smaller `LintContext` means better cache locality and a smaller heap allocation for this vec. I'm hoping to eventually get it small enough to fit in a single cache line.
>
> I made a PR a while ago that did something similar to this one, but instead of using an `Rc`, each `LintContext` stored a read-only reference. This added an extra lifetime parameter, making it slightly unwieldy, but I saw up to 15%/25% perf improvements on local benchmarks. I'll dig around for it and add a link shortly.
>
> ### Architecture
>
> 
>
> ----
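The flyweight split described above can be sketched as follows. The field names here are illustrative, not oxc's real ones: the point is that per-file constants live once in a shared `ContextHost`, while each rule's `LintContext` is a cheap clone holding an `Rc` plus only its per-rule data.

```rust
use std::rc::Rc;

// Hypothetical per-file state, shared by every rule's context.
struct ContextHost {
    file_path: String,
    source_text: String,
}

// The flyweight: an Rc (8 bytes) plus per-rule fields,
// far smaller than duplicating the per-file state per rule.
#[derive(Clone)]
struct LintContext {
    host: Rc<ContextHost>,
    current_rule_name: &'static str,
}

impl LintContext {
    // Shared data is reached through the host.
    fn source_text(&self) -> &str {
        &self.host.source_text
    }
}

fn main() {
    let host = Rc::new(ContextHost {
        file_path: "lib.rs".into(),
        source_text: "fn main() {}".into(),
    });
    // One flyweight per rule; cloning only bumps the refcount.
    let ctxs: Vec<LintContext> = ["rule-a", "rule-b"]
        .into_iter()
        .map(|name| LintContext { host: Rc::clone(&host), current_rule_name: name })
        .collect();
    assert_eq!(Rc::strong_count(&host), 3); // original + two clones
    assert_eq!(ctxs[0].current_rule_name, "rule-a");
    assert_eq!(ctxs[0].source_text(), ctxs[1].source_text());
    println!("LintContext is {} bytes", std::mem::size_of::<LintContext>());
}
```

The alternative mentioned in the description, storing `&ContextHost` instead of `Rc<ContextHost>`, would shave the refcount bookkeeping but threads a lifetime parameter through every rule signature, which is the trade-off the author describes.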
Related to #5204