Run mypy-primer with TY_MAX_PARALLELISM=1 #19059
Conversation
dhruvmanila
left a comment
It looks like this is about a minute slower (4m as compared to 3m), which seems reasonable? A separate job would likely take more than a minute anyways.
The increase in time seems reasonable to me, given that `cargo test` takes around 4 minutes on Linux.

I'm assuming this can only be verified after merging this PR?
It looks more like ~1m30s slower to me -- primer generally takes around 3m10s.

I really love how quickly we get feedback from mypy_primer on PRs currently -- it makes it very easy to quickly spot bugs and iterate on PRs. From that perspective, I would selfishly sort of prefer the memory report to be a separate primer job that runs concurrently with the existing primer job, so that we can still get feedback on the impact on diagnostics as quickly as possible. I'd expect ty PRs to change the diagnostics being emitted much more often than they have a significant impact on memory usage, anyway.

Neither CI job would be marked as "required", so we would still be able to merge a trivial PR quickly even if one or both jobs hadn't yet completed. The only cost would be additional CI resources -- but I actually think that keeping primer reports as quick as possible might be worth that cost.

Curious if @sharkdp has opinions.
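The two-job layout being proposed could look roughly like the following workflow fragment. This is an illustrative sketch only: the job names, steps, and comments are assumptions, not the actual ruff/ty workflow configuration.

```yaml
# Hypothetical sketch of two concurrent primer jobs; names and steps
# are assumptions, not the real workflow file.
jobs:
  mypy-primer-diagnostics:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build ty and run mypy_primer to diff diagnostics ...

  mypy-primer-memory:
    # Runs concurrently with the diagnostics job, so the fast
    # diagnostics feedback is not delayed by the memory report.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build ty and run mypy_primer with memory statistics enabled ...
```

As described in the comment, neither job would be listed as a required status check, so a trivial PR could still be merged before both jobs finish.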
I was hesitant to bring it up, but I fully agree with you. mypy_primer feedback is a crucial development tool for me as well; I am often waiting for its output, so every saved minute is a developer-experience improvement. But it's probably also a bit annoying to create a completely separate job for this.
I guess the problem with this approach is that we're bottlenecked on the largest project instead of spreading the load evenly. I'm fine with making this a separate job if the speed is important. There are also probably some things we could do to speed it up as-is.
## Summary

As discussed in #19059.
The memory usage numbers are currently flaky, partly because memory usage is non-deterministic in multi-threaded runs. Running ty in single-threaded mode for mypy-primer should fix this. From what I understand, mypy-primer parallelizes across projects internally, so this might not be significantly slower; if it is, we'll need a separate CI job for memory usage.
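A minimal sketch of how the single-threaded setting might be applied in the primer CI step (the step name and the run command are placeholders, not the actual workflow):

```yaml
- name: Run mypy_primer
  env:
    # Force ty to type-check single-threaded so that memory usage
    # numbers are deterministic across runs; mypy_primer itself still
    # runs multiple projects in parallel.
    TY_MAX_PARALLELISM: "1"
  run: ./scripts/run-mypy-primer.sh  # placeholder for the real invocation
```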