
Unify event list between Model and Simulator#3204

Merged
EwoutH merged 5 commits into mesa:main from EwoutH:internal_events on Jan 25, 2026
Conversation

@EwoutH EwoutH commented Jan 24, 2026

Summary

Refactors the internal architecture so that Model owns a single event list (_event_list) that simulators use via a property. This lays the groundwork for a unified event-based API while maintaining full backward compatibility.

Motive

We're moving toward an event-based API where both model.step() and simulator-driven execution use the same underlying mechanism. Previously, the Simulator owned its own EventList, creating two separate systems. This PR unifies them so there's one event list with two ways to drive it:

  1. model.step() → advances 1 time unit via _advance_time()
  2. simulator.run_until()/run_for() → processes events from the same list

Implementation

  • Model: Added _event_list attribute and _advance_time(until) method that processes events up to a given time
  • Simulator: Changed event_list from instance variable to property that returns model._event_list
  • ABMSimulator: Simplified by adding _schedule_step() and _do_step() methods, removed duplicate run_until() override
  • SimulationEvent: Added __getstate__/__setstate__ for pickling support with weak references
  • Cleaned up redundant code and unified the event processing logic
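The ownership pattern in the bullet list above can be sketched with toy classes (illustrative only, not Mesa's implementation; only the `_event_list` attribute and the `event_list` property come from this PR — the heap-based internals here are assumptions):

```python
import heapq

class EventList:
    """Minimal stand-in for Mesa's EventList (illustrative only)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events stay insertion-ordered

    def add_event(self, time, fn):
        heapq.heappush(self._heap, (time, self._seq, fn))
        self._seq += 1

class Model:
    """The model owns the single event list."""

    def __init__(self):
        self.time = 0.0
        self._event_list = EventList()

class Simulator:
    """The simulator keeps no list of its own; it reads the model's."""

    def setup(self, model):
        self.model = model

    @property
    def event_list(self):
        return self.model._event_list
```

Because `event_list` is a property rather than an instance attribute, anything the simulator schedules lands directly in the model's queue, so there is exactly one source of truth.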

Usage Examples

No public API changes. Existing code works exactly as before:

```python
# Direct stepping (unchanged)
model = Model()
model.step()  # Internally uses _advance_time()

# Simulator-driven (unchanged)
simulator = ABMSimulator()
simulator.setup(model)
simulator.run_for(10)  # Uses model._event_list
```

Additional Notes

  • All existing tests pass with minimal modifications (model isn't removed on a reset)
  • Added one test to verify simulator uses model's event list
  • This is purely internal refactoring with no breaking changes
  • Prepares for future public event scheduling API on Model

Comment on lines +322 to +331
```python
def _schedule_step(self, time: int) -> None:
    """Schedule the model step at the given time."""
    event = SimulationEvent(time, self._do_step, priority=Priority.HIGH)
    self.event_list.add_event(event)

def _do_step(self) -> None:
    """Execute one step: call user step, increment steps, schedule next."""
    self.model.steps += 1
    self.model._user_step()
    self._schedule_step(int(self.model.time) + 1)
```
Member:

what is the idea behind this and should this not use the EventGenerator?

Member Author:

The idea is that whether you advance time with model.step() or simulator.run_for(1), you get exactly the same behavior. I hope this will make it easier to deprecate old behavior and add public APIs for new ones.

It could have used EventGenerator. For now, this was the minimal-change version to get it working.
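The self-rescheduling pattern under discussion can be illustrated with a standalone toy (not Mesa's classes; a plain heap stands in for the event list here):

```python
import heapq

class MiniModel:
    """Toy model showing the self-rescheduling step pattern (illustrative only)."""

    def __init__(self):
        self.time = 0
        self.steps = 0
        self.log = []
        self._events = []  # min-heap of (time, seq, callback)
        self._seq = 0

    def _user_step(self):
        self.log.append(self.steps)

    def _schedule_step(self, time):
        heapq.heappush(self._events, (time, self._seq, self._do_step))
        self._seq += 1

    def _do_step(self):
        self.steps += 1
        self._user_step()
        self._schedule_step(int(self.time) + 1)  # re-arm for the next tick

    def run_until(self, end_time):
        # Pop and execute every event due at or before end_time, in order
        while self._events and self._events[0][0] <= end_time:
            self.time, _, callback = heapq.heappop(self._events)
            callback()
        self.time = end_time

m = MiniModel()
m._schedule_step(1)
m.run_until(3)
# Steps fire at t = 1, 2, and 3, each one scheduling the next
```

Because each step event re-arms itself one time unit later, driving time with `run_until` produces the same per-step behavior a direct `step()` loop would.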

```python
        self._advance_time(self.time + 1)
        self._user_step(*args, **kwargs)

    def _advance_time(self, until: float) -> None:
```
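The diff cuts off the body of `_advance_time`. Based on the PR description ("processes events up to a given time"), a plausible minimal sketch, with a plain heap standing in for Mesa's `EventList`, might look like:

```python
import heapq

class ModelSketch:
    """Illustrative guess at what _advance_time does; not Mesa's actual code."""

    def __init__(self):
        self.time = 0.0
        self._event_list = []  # min-heap of (time, seq, callback)
        self._seq = 0

    def schedule(self, time, callback):
        heapq.heappush(self._event_list, (time, self._seq, callback))
        self._seq += 1

    def _advance_time(self, until):
        # Execute every event scheduled at or before `until`, in time order,
        # then land exactly on `until` even if no event fired there.
        while self._event_list and self._event_list[0][0] <= until:
            self.time, _, callback = heapq.heappop(self._event_list)
            callback()
        self.time = until
```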
Member:

not sure I would put this on the model directly. See the earlier conversation on encapsulating the different parts instead of having a blob model class.

Member Author:

We should have a bigger discussion about where we put the public API. This is a private method that bridges the gap between the step() time progression in Mesa 3 and whatever we will have in Mesa 4. It will be removed once processing time with step() is deactivated.

Member:

> We should have a bigger discussion about where we put the public API.

Yes, I agree on this. I see a few options. One of which is what I think you did in an earlier PR: have a bunch of "passthrough" methods on the model which internally call the appropriate method on the underlying class. Another option would be to use multiple inheritance, although that can become tricky quickly.

Member Author:

I'm indeed exploring a Scheduling Mixin. It is quite clean, with full IDE and typing support, and it can be added to both the Model and the Agent.

This is still some essential foundation to be able to properly deprecate old behavior.
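The mixin is still exploratory; every name in the following sketch is invented here purely to illustrate the shape such a `SchedulingMixin` could take (nothing here exists in Mesa):

```python
from typing import Callable

class SchedulingMixin:
    """Hypothetical sketch (invented names) of the mixin idea discussed above."""

    def schedule_event(self, time: float, function: Callable, *args) -> None:
        # Delegates to the single event list the host class holds
        self._event_list.append((time, function, args))

class ToyModel(SchedulingMixin):
    def __init__(self):
        self._event_list = []  # the model owns the list

class ToyAgent(SchedulingMixin):
    def __init__(self, model):
        self._event_list = model._event_list  # agents share the model's list
```

The appeal of the mixin route is that both classes gain the same typed `schedule_event` surface while all events still land in the model's single list.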

@github-actions

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
|---|---|---|---|
| BoltzmannWealth | small | 🟢 -4.4% [-5.3%, -3.5%] | 🔵 +0.6% [+0.5%, +0.8%] |
| BoltzmannWealth | large | 🔵 +3.5% [-5.9%, +13.1%] | 🔵 -1.2% [-5.6%, +2.6%] |
| Schelling | small | 🔵 -0.9% [-1.3%, -0.5%] | 🔵 +0.4% [+0.3%, +0.5%] |
| Schelling | large | 🔵 +3.0% [-3.2%, +9.0%] | 🔵 +0.9% [-1.2%, +2.6%] |
| WolfSheep | small | 🔵 +0.8% [-2.2%, +4.1%] | 🔴 +78.4% [+76.0%, +80.9%] |
| WolfSheep | large | 🔵 -5.0% [-21.8%, +13.2%] | 🔴 +1364.1% [+1342.0%, +1384.1%] |
| BoidFlockers | small | 🔵 -0.6% [-0.9%, -0.2%] | 🔵 -0.8% [-1.0%, -0.6%] |
| BoidFlockers | large | 🔵 +0.7% [+0.3%, +1.1%] | 🔵 -0.7% [-0.9%, -0.6%] |

@EwoutH added and removed the trigger-benchmarks label (special label that triggers the benchmarking CI) on Jan 24, 2026
@github-actions

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
|---|---|---|---|
| BoltzmannWealth | small | 🔵 -1.7% [-2.3%, -1.1%] | 🔵 +2.3% [+1.9%, +2.5%] |
| BoltzmannWealth | large | 🔵 -3.1% [-7.1%, +0.3%] | 🔵 -2.0% [-3.8%, -0.2%] |
| Schelling | small | 🔵 -0.9% [-1.2%, -0.6%] | 🔵 +0.3% [+0.2%, +0.5%] |
| Schelling | large | 🔵 +0.2% [-0.1%, +0.5%] | 🔵 +0.3% [-0.9%, +1.4%] |
| WolfSheep | small | 🔵 -0.1% [-1.7%, +1.6%] | 🔵 +0.4% [+0.1%, +0.7%] |
| WolfSheep | large | 🔵 +11.4% [-6.1%, +29.1%] | 🔵 +3.0% [+0.9%, +5.6%] |
| BoidFlockers | small | 🔵 +0.7% [+0.4%, +1.0%] | 🔵 +0.6% [+0.5%, +0.8%] |
| BoidFlockers | large | 🔵 +1.8% [+1.1%, +2.4%] | 🔵 +0.6% [+0.4%, +0.8%] |

@github-actions

This comment was marked as outdated.

@EwoutH added and removed the trigger-benchmarks label (special label that triggers the benchmarking CI) on Jan 24, 2026
@github-actions

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
|---|---|---|---|
| BoltzmannWealth | small | 🔵 -1.7% [-2.7%, -0.7%] | 🔴 +4.3% [+4.2%, +4.5%] |
| BoltzmannWealth | large | 🔵 +5.2% [-5.4%, +17.6%] | 🔵 +1.9% [-1.8%, +6.1%] |
| Schelling | small | 🔵 -0.9% [-1.3%, -0.4%] | 🔵 -0.5% [-0.6%, -0.4%] |
| Schelling | large | 🔵 +1.5% [-3.0%, +5.7%] | 🔵 +1.2% [-1.0%, +3.5%] |
| WolfSheep | small | 🔵 +0.1% [-2.7%, +2.8%] | 🔵 +0.4% [-0.1%, +0.9%] |
| WolfSheep | large | 🔵 -3.1% [-19.4%, +15.3%] | 🔵 +0.3% [-1.0%, +1.5%] |
| BoidFlockers | small | 🔵 +1.0% [+0.6%, +1.4%] | 🔵 +0.6% [+0.4%, +0.7%] |
| BoidFlockers | large | 🔵 +2.1% [+1.6%, +2.5%] | 🔵 +0.3% [-0.1%, +0.6%] |

EwoutH commented Jan 24, 2026

I accidentally touched the internal event scheduling loop, which of course was highly optimized. Reverted to the old loop, performance restored.

Coverage failure is unrelated.

quaquel commented Jan 25, 2026

Can you check why coverage is only 80%?

EwoutH commented Jan 25, 2026

I didn't add tests for `__getstate__` and `__setstate__`. I already split that off into #3205 and will address it there.
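For reference, the pattern such a test would exercise — swapping a weak reference for a strong one in `__getstate__` and restoring it in `__setstate__` — can be shown with a toy class (illustrative only, not Mesa's `SimulationEvent`):

```python
import pickle
import weakref

class Node:
    """Toy object holding a weak reference, which pickle cannot serialize directly."""

    def __init__(self, target):
        self.target_ref = weakref.ref(target)

    def __getstate__(self):
        # Replace the unpicklable weakref with a strong reference
        state = self.__dict__.copy()
        state["target_ref"] = self.target_ref()
        return state

    def __setstate__(self, state):
        # Rebuild the weakref from the restored strong reference
        target = state.pop("target_ref")
        self.__dict__.update(state)
        self.target_ref = weakref.ref(target)

class Target:
    pass

t = Target()
node = Node(t)
# Pickling target and node together lets the memo keep them linked on load
t2, n2 = pickle.loads(pickle.dumps((t, node)))
```

Note that the target must be kept alive by a strong reference after unpickling (here via the tuple), or the rebuilt weakref would immediately go dead.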

EwoutH commented Jan 25, 2026

While convenient, I don't like the "Update branch from main" applied on my development branches. The merge commits make undoing a commit, cherry-picking, and rebasing more difficult.

I can rebase on request :)

One event list, one unified way to advance time. Whether you call model.step() or use a simulator, it should work the same way underneath.

One event list, two ways to drive it:
1. model.step() → advances 1 time unit, runs user step
2. simulator.run_until() → advances to end_time, processing scheduled events (including steps for ABMSimulator)
EwoutH commented Jan 25, 2026

> Can you check why coverage is only 80%?

Now all green, including coverage.


```python
from mesa.agent import Agent, AgentSet
from mesa.experimental.devs import Simulator
from mesa.experimental.devs.eventlist import EventList, Priority, SimulationEvent
```
Member:

Since you are now integrating everything into Model, when do you plan to move devs out of experimental?

Member Author:

Good question. I think as soon as we have a stable public API, that would make sense.

@EwoutH EwoutH merged commit e99e37d into mesa:main Jan 25, 2026
14 checks passed
EwoutH added a commit that referenced this pull request Feb 11, 2026
### Summary
Exposes event scheduling and time advancement methods directly on `Model`, giving users a public API for scheduling events and advancing simulation time without needing to interact with the `Simulator` class.

Basically, these methods allow us to:
- replace all existing behavior (in examples, tutorials, etc.)
- start deprecating all current scheduling and stepping APIs
- remove them in Mesa 4.

### Motive
Currently, users advance their models by calling `model.step()`, which is tightly coupled to the concept of a single discrete step. With the event-based architecture taking shape (#3201, #3204), we need public methods that let users think in terms of *time* rather than *steps*.

The key method here is `model.run_for(duration)`. For existing ABM users, `model.run_for(1)` is functionally equivalent to `model.step()`: it advances time by 1 unit, which triggers the scheduled step event. But unlike `step()`, it generalizes naturally: `run_for(0.5)` advances half a time unit, `run_for(100)` advances 100 units processing all scheduled events along the way. This makes `run_for` the foundation for both traditional ABMs and event-driven models.
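The described `run_for` semantics can be sketched with a toy model (illustrative only; a plain heap stands in for the real event list, and `schedule_event` is a name assumed here):

```python
import heapq

class TimeModel:
    """Toy sketch of run_for(duration): advance time, firing scheduled events."""

    def __init__(self):
        self.time = 0.0
        self._events = []  # min-heap of (time, seq, callback)
        self._seq = 0

    def schedule_event(self, at, fn):
        heapq.heappush(self._events, (at, self._seq, fn))
        self._seq += 1

    def run_for(self, duration):
        # Advance by a relative duration: fire every event due in the window,
        # then land exactly on the end time.
        end = self.time + duration
        while self._events and self._events[0][0] <= end:
            self.time, _, fn = heapq.heappop(self._events)
            fn()
        self.time = end
```

With this shape, `run_for(1)` on a model whose step is a scheduled event behaves like `step()`, while `run_for(0.5)` or `run_for(100)` generalize to fractional and long horizons.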

This is also a stepping stone toward deprecating `model.step()` as the primary way to advance a model. Instead of calling a method that *is* a step, users call a method that *advances time*, and steps happen as scheduled events within that time. The mental model shifts from "execute step N" to "advance time, and whatever is scheduled will run."
Labels: maintenance