
Add initialization check in Simulator.run_for() #3036

Merged
quaquel merged 8 commits into mesa:main from codebreaker32:featadd-exception
Dec 30, 2025

Conversation

@codebreaker32
Collaborator

@codebreaker32 codebreaker32 commented Dec 29, 2025

Summary

Implemented a mandatory initialization check in Simulator.run_for() to explicitly validate that the simulator has been set up before execution.

Motive

The run_for method previously contained a #fixme comment noting the need for an initialization check. Without this check, attempting to run the simulator without a model led to an unhelpful error:

AttributeError: 'NoneType' object has no attribute 'time'

Implementation

Replaced the placeholder comment with an explicit check:

if self.model is None:
    raise RuntimeError(
        "Simulator not set up. Call simulator.setup(model) first."
    )

Usage Examples

This change ensures that improper usage is caught immediately:

  • Forgot to call sim.setup(model):

try:
    sim.run_for(10)
except RuntimeError as e:
    print(e)
    # Output: Simulator not set up. Call simulator.setup(model) first.
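To make the abbreviated snippet above self-contained, here is a minimal runnable sketch. The Simulator class below is a simplified stand-in that mirrors the guard added in this PR; it is not mesa's actual implementation.

```python
class Simulator:
    """Simplified stand-in for mesa's Simulator, showing only the guard."""

    def __init__(self):
        self.model = None  # populated by setup()

    def setup(self, model):
        self.model = model

    def run_for(self, time_delta):
        # The check added in this PR: fail fast with a clear message
        # instead of an AttributeError deep inside the run loop.
        if self.model is None:
            raise RuntimeError(
                "Simulator not set up. Call simulator.setup(model) first."
            )
        print(f"running model for {time_delta} time units")


sim = Simulator()
try:
    sim.run_for(10)
except RuntimeError as e:
    print(e)  # Simulator not set up. Call simulator.setup(model) first.
```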

@github-actions

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
|---|---|---|---|
| BoltzmannWealth | small | 🔴 +4.5% [+3.7%, +5.2%] | 🔵 +1.5% [+1.2%, +1.7%] |
| BoltzmannWealth | large | 🔵 +1.1% [-0.6%, +3.9%] | 🔵 +4.1% [+2.3%, +5.8%] |
| Schelling | small | 🔵 +0.1% [-0.8%, +1.1%] | 🔵 +0.2% [-0.3%, +0.7%] |
| Schelling | large | 🔵 +1.1% [+0.1%, +2.8%] | 🔴 +4.6% [+4.0%, +5.2%] |
| WolfSheep | small | 🔵 +1.2% [+0.4%, +1.8%] | 🔵 +1.4% [+1.2%, +1.7%] |
| WolfSheep | large | 🔵 +1.2% [-2.2%, +3.7%] | 🔴 +4.2% [+3.1%, +5.2%] |
| BoidFlockers | small | 🔵 +0.3% [-0.2%, +0.8%] | 🔵 +1.3% [+1.1%, +1.5%] |
| BoidFlockers | large | 🔵 -0.4% [-0.8%, +0.1%] | 🔵 +0.5% [+0.1%, +0.7%] |

@codebreaker32 codebreaker32 changed the title from "Featadd exception" to "Add initialization check in Simulator.run_for()" Dec 29, 2025
@codebreaker32
Collaborator Author

Made the changes @quaquel

@quaquel quaquel added the bug (Release notes label) and trigger-benchmarks (triggers the benchmarking CI) labels Dec 30, 2025
@quaquel
Member

quaquel commented Dec 30, 2025

Can you update tests/experimental/test_devs.py with appropriate tests that cover both the bug and the updated behavior? Once those tests are in, this seems good to go.
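The requested tests might look roughly like the sketch below. This is a hypothetical shape, not the tests actually merged; the Simulator stub stands in for mesa's class so the sketch is self-contained, and the real tests in tests/experimental/test_devs.py would use mesa's simulator directly.

```python
class Simulator:
    """Stub mirroring the guard under test; not mesa's real class."""

    def __init__(self):
        self.model = None

    def setup(self, model):
        self.model = model

    def run_for(self, time_delta):
        if self.model is None:
            raise RuntimeError(
                "Simulator not set up. Call simulator.setup(model) first."
            )


def test_run_for_without_setup_raises():
    # Bug case: run_for() before setup() must raise a clear RuntimeError,
    # not an AttributeError on self.model.time.
    sim = Simulator()
    try:
        sim.run_for(10)
    except RuntimeError as e:
        assert "Simulator not set up" in str(e)
    else:
        raise AssertionError("expected RuntimeError")


def test_run_for_after_setup_succeeds():
    # Updated behavior: after setup(), run_for() proceeds normally.
    sim = Simulator()
    sim.setup(object())
    sim.run_for(10)  # should not raise


test_run_for_without_setup_raises()
test_run_for_after_setup_succeeds()
```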

@github-actions

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
|---|---|---|---|
| BoltzmannWealth | small | 🔵 +2.0% [+1.2%, +2.7%] | 🔵 +0.2% [-0.2%, +0.4%] |
| BoltzmannWealth | large | 🔵 +2.2% [-0.0%, +6.4%] | 🔵 +1.6% [+0.5%, +2.6%] |
| Schelling | small | 🔵 +0.2% [-1.4%, +2.0%] | 🔵 +0.5% [-0.4%, +1.6%] |
| Schelling | large | 🔵 +1.3% [-0.1%, +4.0%] | 🔵 +1.6% [+0.8%, +2.4%] |
| WolfSheep | small | 🔵 -0.5% [-1.7%, +0.3%] | 🔵 -0.7% [-0.8%, -0.6%] |
| WolfSheep | large | 🔵 -2.3% [-7.3%, +0.5%] | 🔵 -1.4% [-2.6%, -0.0%] |
| BoidFlockers | small | 🔵 +0.4% [+0.0%, +0.7%] | 🔵 +0.2% [+0.1%, +0.3%] |
| BoidFlockers | large | 🔵 +0.4% [+0.0%, +0.7%] | 🔵 -0.1% [-0.2%, +0.1%] |

@codebreaker32
Collaborator Author

I have also added the required tests.

@quaquel quaquel removed the trigger-benchmarks label Dec 30, 2025
@quaquel quaquel merged commit 0259426 into mesa:main Dec 30, 2025
14 checks passed
@codebreaker32 codebreaker32 deleted the featadd-exception branch December 31, 2025 07:48