
Add internal benchmark tool #9020

Merged

adalinesimonian merged 1 commit into main from benchmark-tool on Feb 5, 2026

Conversation

adalinesimonian (Member) commented Feb 4, 2026

I was working on seeing if there were some ways to optimize Stylelint's performance, but needed a way to reliably get the performance of the tool on some kind of mock workspaces, and in a way that was easy for me to interpret and compare without much hassle. To that end, I've written a benchmarking tool, which I'm introducing in this PR.

This tool generates, deterministically, several different mock workspaces of different sizes and config complexities. It runs Stylelint, both with the API and the CLI, against these workspaces, and gathers performance data. It can record this data for later comparison, so that you get some baseline data on your machine, then compare the baseline performance against performance after your changes.
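The deterministic part can be sketched with a seeded PRNG: seed the generator with a fixed value and every machine produces the same mock workspace. The following is a minimal illustration only, not the tool's actual code; the names (`mulberry32`, `generateWorkspace`) are hypothetical stand-ins.

```javascript
// Minimal sketch of deterministic workspace generation (hypothetical names,
// not the actual tool's API). A seeded PRNG makes every run reproducible.

// mulberry32: tiny deterministic PRNG seeded with a 32-bit integer.
function mulberry32(seed) {
	let a = seed >>> 0;
	return function () {
		a = (a + 0x6d2b79f5) >>> 0;
		let t = a;
		t = Math.imul(t ^ (t >>> 15), t | 1);
		t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
		return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
	};
}

// Generate a fixed list of mock CSS file descriptors for a given size.
function generateWorkspace(seed, fileCount) {
	const rand = mulberry32(seed);
	return Array.from({ length: fileCount }, (_, i) => ({
		path: `src/component-${i}.css`,
		ruleCount: 1 + Math.floor(rand() * 20), // deterministic per seed
	}));
}

// Same seed, same workspace, on every machine.
const a = generateWorkspace(42, 3);
const b = generateWorkspace(42, 3);
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
```

The payoff of seeding is that a baseline recorded on one day is comparable to a run after your changes: the inputs are byte-for-byte identical.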

Since this tool is likely only going to be used by contributors getting deep into the weeds of optimization problems, I've relegated its short docs to the perf/ directory rather than cluttering our focused and concise contributor docs in CONTRIBUTING.md.

A small screenshot of the comparison output that I was using to validate the changes in #9021:

[screenshot]

@adalinesimonian adalinesimonian self-assigned this Feb 4, 2026
@adalinesimonian adalinesimonian requested a review from a team as a code owner February 4, 2026 20:32
changeset-bot commented Feb 4, 2026

⚠️ No Changeset found

Latest commit: 2b32c47

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

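For reference, a changeset is just a small markdown file under `.changeset/`: YAML frontmatter naming the package and bump type, then a one-line summary. Had this PR warranted a version bump, one would look roughly like the following (filename, bump type, and wording are illustrative, not taken from this PR):

```markdown
---
"stylelint": minor
---

Added internal benchmark tool
```

Since this PR is internal tooling with no user-facing changes, no changeset is needed and no version bump occurs.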

github-actions bot (Contributor) commented Feb 4, 2026

This PR is packaged and the instant preview is available (2b32c47). View the demo website.

Install it locally:

npm i -D https://pkg.pr.new/stylelint@2b32c47

Mouvedia (Member) left a suggestion on the diff, at this code context:

};

// CSS content templates for generating realistic files.
export const CSS_TEMPLATES = [

Something that would be a nice addition—not for now though—would be to add a flag to pass a file instead of using generateCSSContent.

Mouvedia (Member) commented Feb 4, 2026

Hey @adalinesimonian,

This looks quite useful, thanks!

Just wondering: why didn't you use an existing package for the benchmark part, e.g. bench-node? I don't care whether we do or not; I'm just curious why you chose not to.

adalinesimonian (Member, Author) replied:

Just wondering why didn't you use an existing package for the benchmark part? e.g. bench-node. I don't care whether we do or not, I am just curious as to why you chose not to.

The benchmarking I was looking for, basically just a simple rundown of the API's and CLI's lint run times, felt straightforward enough that bringing in a dependency wasn't worth it to me. Most of the fun stuff in this PR is the deterministic generation of the test workspaces, and that also felt like a pretty simple thing to have here.

I think if we wanted to do some more advanced analysis, it might be worth pulling in a more featureful package. But at least for the functionality I needed, this seemed straightforward and simple enough. Potentially more maintenance, but it's not user-facing, and I doubt it'll demand frequent iteration. And a small plus of having it here means one less dependency, ergo one less attack vector for a supply chain attack.
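The "simple rundown" described above amounts to little more than a timing loop plus summary statistics, which is why a dependency felt unnecessary. A minimal sketch of that idea (hypothetical names, not the actual script):

```javascript
// Minimal timing harness of the kind described above, instead of a
// benchmarking dependency (hypothetical names, not the tool's actual API).
import { performance } from 'node:perf_hooks';

// Run `fn` `iterations` times and report mean and median wall-clock time (ms).
async function benchmark(fn, iterations = 10) {
	const samples = [];
	for (let i = 0; i < iterations; i++) {
		const start = performance.now();
		await fn();
		samples.push(performance.now() - start);
	}
	const sorted = [...samples].sort((x, y) => x - y);
	const mean = samples.reduce((sum, s) => sum + s, 0) / samples.length;
	const median = sorted[Math.floor(sorted.length / 2)];
	return { mean, median, samples };
}
```

Against Stylelint itself, `fn` would wrap a `stylelint.lint(...)` call for the API case, or spawn the CLI as a child process for the CLI case; recorded results can then be compared run-over-run.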

ybiquitous (Member) left a comment:

@adalinesimonian Thanks for the great PR! 👍🏼


[question] Finally, can we unify the existing benchmark-rule into the new benchmark tool?

https://github.com/stylelint/stylelint/blob/f673be9a8d284fccd8f841584e9053fb68e26300/scripts/benchmark-rule.mjs


[question] After merging this PR, does it make sense to run this benchmark for every PR and post results to PR comments?

adalinesimonian (Member, Author) replied:

[question] Finally, can we unify the existing benchmark-rule into the new benchmark tool?

Looks like that is a simple benchmark of running some rules against a single file. It's probably possible to do something like that here, but the goal of this script as currently written is to test against a realistic-ish workspace rather than a single file. Technically possible, maybe alongside @Mouvedia's suggestion for custom CSS, but I think that can maybe wait until someone runs into the need for it, i.e. they want to run the benchmark analysis this tool provides but against that custom CSS or against a single file.

[question] After merging this PR, does it make sense to run this benchmark for every PR and post results to PR comments?

There's a lot of variability in the conditions CI experiences which we have little control over, so I think that limits the usefulness of running this in CI. It also, by its nature, is meant to take some considerable time to run, and it's hard to interpret the results meaningfully without enough iterations and without running it against all workspace sizes. Therefore, I think even though we, in theory, could run it in CI against e.g. only small workspaces, that wouldn't give us very meaningful data.

ybiquitous (Member) commented:

Understood, thank you for the quick answers. My questions weren't intended to block this PR. If we get other ideas after this PR, we could revisit them later. 👍🏼

jeddy3 (Member) commented Feb 5, 2026

This looks great, thank you.

I'm also eager not to block this PR so we can move on to reviewing your other pull requests with the impressive performance gains! However, it'd be great to find the best home for the code and docs from the start.

We surface the rule benchmark in our docs because performance is one of the aspects of our vision. The more people who are aware and use our benchmarking tools when contributing to Stylelint, the better. Let's do the same for this benchmarking tool by moving the docs to /maintainer-guide/benchmarking.md (and consolidating the rule benchmark docs there, too).

Let's move the code into scripts/benchmarking rather than a top-level perf, as scripts is our typical home for these things.

The rule and system benchmarking can remain separate even if the documentation is consolidated. CONTRIBUTING will remain concise and can link through to the benchmarking page, as it currently does for the benchmarking rules docs.

If you can move the code, I'm happy to push a commit later today for the docs to save you from getting embroiled in this repo's documentation structure. (There are some other restructuring PRs I'd like to open alongside it.)

adalinesimonian (Member, Author) replied:

We surface the rule benchmark in our docs because performance is one of the aspects of our vision. The more people who are aware and use our benchmarking tools when contributing to Stylelint, the better. Let's do the same for this benchmarking tool by moving the docs to /maintainer-guide/benchmarking.md (and consolidating the rule benchmark docs there, too).

Done!

Let's move the code into scripts/benchmarking rather than a top-level perf, as scripts is our typical home for these things.

Done as well.

The rule and system benchmarking can remain separate even if the documentation is consolidated. CONTRIBUTING will remain concise and can link through to the benchmarking page, as it currently does for the benchmarking rules docs.

If you can move the code, I'm happy to push a commit later today for the docs to save you from getting embroiled in this repo's documentation structure. (There are some other restructuring PRs I'd like to open alongside it.)

I put the docs together as well; they're both on the same page, under different headers. I stole the wording from your PR comment for the intro; thought it was good stuff. 😁 Hope that's at least a good baseline; if you think there's room for improvement, feel free to throw some edits in.

ybiquitous (Member) left a comment:

Thank you! LGTM 👍🏼

@adalinesimonian adalinesimonian merged commit 587b578 into main Feb 5, 2026
19 checks passed
@adalinesimonian adalinesimonian deleted the benchmark-tool branch February 5, 2026 09:16

Labels: None yet

4 participants