Conversation
Let's see if the CI is working: /rerun-benchmarks
Performance benchmarks:
Let me know when this is ready for review.
commit c0de4a1
Author: Ewout ter Hoeven <[email protected]>
Date:   Sun Jan 21 14:43:08 2024 +0100

    Add CI workflow for performance benchmarks (mesa#1983)

    This commit introduces a new GitHub Actions workflow for performance
    benchmarking. The workflow is triggered on `pull_request_target` events
    and can be triggered with a "/rerun-benchmarks" comment in the PR. It
    should be compatible with PRs from both forks and the main repository.
    It includes steps for setting up the environment, checking out the PR
    and main branches, installing dependencies, and running benchmarks on
    both branches. The results are then compared, encoded, and posted as a
    comment on the PR.
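The compare-and-post step described in the commit message could be sketched roughly as follows. This is a hypothetical illustration, not the actual workflow code: the function name, the timing dictionaries, and the table format are all assumptions made for the example.

```python
# Hypothetical sketch of the workflow's "compare results" step:
# given per-model timings (in seconds) from the main branch and the
# PR branch, compute the relative change and format a Markdown table
# that could be posted as a PR comment. Names and formats here are
# illustrative assumptions, not the real workflow's code.

def compare_benchmarks(main: dict, pr: dict) -> str:
    """Return a Markdown table comparing per-model timings (seconds)."""
    lines = [
        "Performance benchmarks:",
        "",
        "| Model | main (s) | PR (s) | change |",
        "|-------|---------:|-------:|-------:|",
    ]
    for model in sorted(main):
        change = (pr[model] - main[model]) / main[model] * 100
        lines.append(
            f"| {model} | {main[model]:.3f} | {pr[model]:.3f} | {change:+.1f}% |"
        )
    return "\n".join(lines)

main_timings = {"Schelling": 1.20, "WolfSheep": 2.50}
pr_timings = {"Schelling": 1.08, "WolfSheep": 2.55}
print(compare_benchmarks(main_timings, pr_timings))
```

In the real workflow the timing data would come from running the benchmark models on each checked-out branch; here the numbers are made up to show the formatting.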
This reverts commit 9485087.
It should be alright now. Please squash and merge this, because I made a bit of a mess of the commit history.
Could you update the PR message with a bit of motivation for the changes made? /rerun-benchmarks
I already did so. Or do you want to know more? If so, what?
Performance benchmarks:
Performance benchmarks:
LGTM, thanks for this effort. @rht or @Corvince, could one of you do the final review and squash-merge if OK? Did you also observe Schelling being faster on your system, or is that an anomaly? In the long run we should try to keep these models in sync with ABMFrameworksComparison. Do we want to make further improvements to these models in the short run?
The sync with the Agents.jl devs needs to happen soon, if possible. I think they would appreciate knowing that we are now monitoring performance as well.
Haven't tested, but the finding is probably not a blocker to this PR being merged.
I also noticed that WolfSheep in particular is not a good Python implementation; there is a lot of unnecessary code duplication going on. I have the impression that some implementations were done to mimic NetLogo's version, which is fine for a tutorial aimed at people coming from NetLogo. However, I would argue that the benchmark (and most examples) should reflect the Pythonic and Mesa way of doing things. I am happy to push a WolfSheep reimplementation if desired (but also see https://github.com/quaquel/mesa/blob/cell-space/benchmarks/WolfSheep/wolf_sheep.py for the direction in which I would change it).
Sounds good. Would you like to do both (pushing to ABMFrameworksComparison and speeding up WolfSheep yourself)?
Yes, I'm fine with doing both (although it is more about cleaning up the WolfSheep code; I have no idea about the performance impact, but it is probably small).
I think the first step is to simplify the WolfSheep at https://github.com/projectmesa/mesa-examples; then all the other WolfSheep implementations (in Mesa's benchmarks and Agents.jl's benchmarks) should be updated to be almost the same as this reference implementation.
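To illustrate the kind of deduplication being discussed: wolves and sheep share most of their behavior (spend energy, eat, reproduce, die) and differ mainly in what they eat, so the shared logic can live in one base class with a species-specific feeding hook. This is a framework-agnostic sketch; all class and method names here are made up for illustration and are not mesa's actual API.

```python
import random

# Framework-agnostic sketch of factoring shared Wolf/Sheep logic into
# one base class; only the feed() hook differs between the species.
# Names are illustrative assumptions, not mesa's API.

class Animal:
    def __init__(self, energy: float, reproduce_p: float, gain_from_food: float):
        self.energy = energy
        self.reproduce_p = reproduce_p
        self.gain_from_food = gain_from_food

    def step(self, rng: random.Random):
        """One tick: spend energy, maybe eat, maybe reproduce.

        Returns an offspring of the same species, or None.
        """
        self.energy -= 1
        if self.feed():                      # species-specific hook
            self.energy += self.gain_from_food
        if rng.random() < self.reproduce_p:  # split energy with offspring
            self.energy /= 2
            return type(self)(self.energy, self.reproduce_p, self.gain_from_food)
        return None

    @property
    def alive(self) -> bool:
        return self.energy > 0

    def feed(self) -> bool:
        raise NotImplementedError

class Sheep(Animal):
    def feed(self) -> bool:
        # Sketch: a sheep would eat fully grown grass on its cell.
        return False

class Wolf(Animal):
    def feed(self) -> bool:
        # Sketch: a wolf would eat a sheep sharing its cell, if any.
        return False
```

The point of the sketch is structural: movement, energy accounting, reproduction, and death are written once, so a cleanup along these lines removes the duplication without changing model behavior.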