This PR is packaged and the instant preview is available (2b32c47). View the demo website. Install it locally: `npm i -D https://pkg.pr.new/stylelint@2b32c47`
```js
// CSS content templates for generating realistic files.
export const CSS_TEMPLATES = [
```
suggestion
Something that would be a nice addition—not for now though—would be to add a flag to pass a file instead of using generateCSSContent.
Hey @adalinesimonian, this looks quite useful, thanks! Just wondering why you didn't use an existing package for the benchmark part?
The benchmarking I was looking for, which was basically just a pretty simple rundown of the API and CLI's lint run times, felt straightforward enough that it wasn't worth bringing in a dependency for it. Most of the fun stuff in this PR is the deterministic generation of the test workspaces, and that also felt like a pretty simple thing to have here. If we wanted to do some more advanced analysis, it might be worth pulling in a more featureful package, but for the functionality I needed, this seemed straightforward and simple enough. It's potentially more maintenance, but it's not user-facing, and I doubt it'll demand frequent iteration. And a small plus of having it here is one less dependency, ergo one less attack vector for a supply chain attack.
ybiquitous left a comment:
@adalinesimonian Thanks for the great PR! 👍🏼
[question] Finally, can we unify the existing benchmark-rule into the new benchmark tool?
[question] After merging this PR, does it make sense to run this benchmark for every PR and post results to PR comments?
Looks like that is a simple benchmark of running some rules against a single file. It's probably possible to do something like that here, but the goal of this script as currently written is to test against a realistic-ish workspace rather than a single file. It's technically possible, maybe alongside @Mouvedia's suggestion for custom CSS, but I think that can wait until someone runs into the need for it, i.e. they want to run the benchmark analysis this tool provides but against custom CSS or against a single file.
There's a lot of variability in the conditions CI experiences which we have little control over, and I think that limits the usefulness of running this in CI. It is also, by its nature, meant to take considerable time to run, and it's hard to interpret the results meaningfully without enough iterations and without running it against all workspace sizes. So even though we could, in theory, run it in CI against only small workspaces, that wouldn't give us very meaningful data.
Understood, thank you for the quick answers. My questions weren't intended to block this PR. If we get other ideas after this PR, we could revisit them later. 👍🏼
This looks great, thank you. I'm also eager not to block this PR so we can move on to reviewing your other pull requests with the impressive performance gains! However, it'd be great to find the best home for the code and docs from the start.

We surface the rule benchmark in our docs because performance is one of the aspects of our vision. The more people who are aware of and use our benchmarking tools when contributing to Stylelint, the better. Let's do the same for this benchmarking tool by moving the docs to

Let's move the code into

The rule and system benchmarking can remain separate even if the documentation is consolidated. CONTRIBUTING will remain concise and can link through to the benchmarking page, as it currently does for the benchmarking rules docs.

If you can move the code, I'm happy to push a commit later today for the docs, to save you from getting embroiled in this repo's documentation structure. (There are some other restructuring PRs I'd like to open alongside it.)
Done!
Done as well.
I put the docs together as well; I put them both on the same page, but gave them different headers. I stole your wording from your PR comment for the intro — thought it was good stuff. 😁 Hope that's at least a good baseline; if you think there's room for improvement, feel free to throw some edits in.
While exploring ways to optimize Stylelint's performance, I needed a way to reliably measure the tool's performance against some kind of mock workspaces, in a way that was easy to interpret and compare without much hassle. To that end, I've written the benchmarking tool introduced in this PR.
This tool deterministically generates several mock workspaces of varying sizes and config complexities. It runs Stylelint against these workspaces, via both the API and the CLI, and gathers performance data. It can record that data for later comparison, so you can capture a baseline on your machine and then compare it against performance after your changes.
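Deterministic generation generally comes down to a seeded PRNG, so the same seed always produces a byte-identical workspace. The PR's actual implementation isn't shown in this thread; the following is a minimal sketch of the idea, assuming a mulberry32-style generator and a simplified `generateCSSContent` signature (both hypothetical here):

```javascript
// Small seeded PRNG (mulberry32): the same seed always yields the same
// sequence, which is what makes workspace generation reproducible.
function mulberry32(seed) {
	return function () {
		seed |= 0;
		seed = (seed + 0x6d2b79f5) | 0;
		let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);

		t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;

		return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
	};
}

// Pick file contents deterministically from a list of CSS templates.
// (Illustrative signature only — not the PR's actual helper.)
function generateCSSContent(templates, fileCount, seed) {
	const rand = mulberry32(seed);

	return Array.from({ length: fileCount }, () => templates[Math.floor(rand() * templates.length)]);
}

const templates = ['a { color: red; }', '.b { margin: 0; }'];
// Same seed → identical workspace on every run.
const first = generateCSSContent(templates, 3, 42);
const second = generateCSSContent(templates, 3, 42);

console.log(JSON.stringify(first) === JSON.stringify(second)); // true
```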
Since this tool is likely to be used only by contributors getting deep into the weeds of optimization problems, I've relegated its docs to the `perf/` directory rather than cluttering our focused, concise contributor docs in CONTRIBUTING.md.
A small screenshot of the comparison output I used to validate the changes in #9021:
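The baseline-vs-current comparison boils down to recording mean timings per scenario and reporting the relative change after a code change. A minimal sketch of that idea — the `{ label, meanMs }` record shape is an assumption for this example, not the tool's actual output format:

```javascript
// Compare a recorded baseline against fresh timings, per scenario label.
// The record shape ({ label, meanMs }) is assumed for illustration.
function compareRuns(baseline, current) {
	return current.map(({ label, meanMs }) => {
		const base = baseline.find((b) => b.label === label);
		// Percentage change relative to the baseline; negative means faster.
		const deltaPct = Number((((meanMs - base.meanMs) / base.meanMs) * 100).toFixed(1));

		return { label, baseline: base.meanMs, current: meanMs, deltaPct };
	});
}

const baseline = [{ label: 'small workspace (CLI)', meanMs: 1200 }];
const current = [{ label: 'small workspace (CLI)', meanMs: 900 }];

console.log(compareRuns(baseline, current)[0].deltaPct); // -25 (i.e. 25% faster)
```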