Conversation
Performance benchmarks:
Ok, these results are better than when testing locally, so this seems a reasonable solution. Now we just need to add support for pickling the model because …
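To illustrate the pickling point raised here: a hedged sketch (not the actual Mesa code) showing that a model whose ID counter is a plain int survives a pickle round trip, which is one reason the thread later prefers an incrementing integer over `itertools.count`:

```python
import pickle

# Sketch only: `Model` and `agent_id_counter` mirror names used in this
# thread, but this is not the real Mesa implementation.
class Model:
    def __init__(self):
        self.agent_id_counter = 1  # plain int instead of itertools.count

m = Model()
restored = pickle.loads(pickle.dumps(m))  # round-trips without trouble
print(restored.agent_id_counter)  # 1
```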
Pushed commits:
- 4b71cbe (Jan Kwakkel, Tue Jan 20 07:57:16 2026 +0100): Update meta_agent.py
- 702944a (pre-commit-ci[bot], Tue Jan 20 06:56:05 2026 +0000): [pre-commit.ci] auto fixes from pre-commit.com hooks
- 0f2c81a (Jan Kwakkel, Tue Jan 20 07:54:25 2026 +0100): Update meta_agent.py
- 1820d89 (Jan Kwakkel, Tue Jan 20 07:53:24 2026 +0100): Update meta_agent.py
Hi @quaquel, I was also working on this to fix the example error, and modifying `register_agent` passed all the tests:

```python
# In model.py
def register_agent(self, agent: Agent) -> None:
    # Check if the agent already has a valid ID
    if agent.unique_id is not None:
        # It's already registered! Don't touch the ID.
        # Just ensure it's in the internal list if needed.
        self._agents[agent] = None
        return
    # Only generate a NEW ID if it completely lacks one
    agent.unique_id = next(self.agent_id_counter)
    self._agents[agent] = None
```

The reason I can think of is (I might be wrong):

```python
# meta_agent.py
def add_constituting_agents(self, new_agents: set[Agent]):
    for agent in new_agents:
        self._constituting_set.add(agent)
        agent.meta_agent = self
        self.model.register_agent(agent)  # <--- Culprit
```

Suppose Agent A's id, 5, is used as a key to store data in various dictionaries. Suddenly, the agent's `unique_id` attribute is overwritten to 105. The Python dictionaries are not corrupted, but they are now out of sync: the dictionary still holds the data under the old key (5), while the code now tries to retrieve it using the new key (105). This mismatch leads to a `KeyError` or logic errors, because the system looks for an ID that isn't there.
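The stale-key scenario described above can be sketched in a few lines (illustrative only, not Mesa code; `DemoAgent` and `data_by_id` are made-up names):

```python
# Sketch: re-registering an agent overwrites unique_id, desynchronising
# any dictionary that used the old ID as a key.
class DemoAgent:
    def __init__(self, unique_id):
        self.unique_id = unique_id

data_by_id = {}
a = DemoAgent(5)
data_by_id[a.unique_id] = "some data"  # stored under key 5

a.unique_id = 105  # a second registration renumbers the agent

# The dict still holds key 5; lookups via the new ID now fail.
found = a.unique_id in data_by_id
print(found)  # False
```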
|
@codebreaker32, yes, this is indeed the source of the problem. But in my view, agents should never call `register_agent` twice. In fact, a Mesa user should never have to call it at all if they use `super` properly in their custom agents. So, I see this as a bug in …
Pushed commits:
- e5b3a09 (pre-commit-ci[bot], Tue Jan 20 08:15:05 2026 +0000): [pre-commit.ci] auto fixes from pre-commit.com hooks
- 228b8b5 (Jan Kwakkel, Tue Jan 20 09:14:54 2026 +0100): Update meta_agent.py
- 4b71cbe (Jan Kwakkel, Tue Jan 20 07:57:16 2026 +0100): Update meta_agent.py
- 702944a (pre-commit-ci[bot], Tue Jan 20 06:56:05 2026 +0000): [pre-commit.ci] auto fixes from pre-commit.com hooks
- 0f2c81a (Jan Kwakkel, Tue Jan 20 07:54:25 2026 +0100): Update meta_agent.py
- 1820d89 (Jan Kwakkel, Tue Jan 20 07:53:24 2026 +0100): Update meta_agent.py
The benchmarks remain strange. For example, Boltzmann Wealth adds no additional agents during the run, so nothing in this PR should change its runtime.
EwoutH left a comment
I'm going to pre-approve this to unblock you.
Please document the problem, the solution, the rejected alternatives and the reasoning behind all those well. If this ever bites us back we can trace this back.
It was included in the start post. In short, moving the assignment of `unique_id` into `register_agent` solves it. The main issue encountered next is that ideally …
A memory leak was discovered in Mesa where model instances could never be garbage collected after agents were created. The root cause was the `Agent._ids` class attribute—a `defaultdict` that stored references to model instances to ensure `unique_id` values were unique on a per-model basis. Because `_ids` was a class-level attribute that persisted across the Python process, any model instance used as a key in this dictionary maintained a hard reference indefinitely, preventing the garbage collector from cleaning up the model and all its associated objects (agents, grids, etc.) even after the model went out of scope or was explicitly deleted.

This bug had significant practical consequences for Mesa users, particularly those running multiple simulations or batch experiments. Each time a model was instantiated and run within a function, the model objects would accumulate in RAM rather than being cleaned up when the function exited. This meant that running many model instances—common in parameter sweeps, sensitivity analyses, or optimization workflows—would cause unbounded memory growth, eventually exhausting available RAM. The issue was especially problematic because it was invisible to users: simply letting a model go out of scope or calling `del model` appeared to work but silently retained all the memory, and even explicitly removing agents with `model.remove_all_agents()` only partially addressed the problem depending on the space types used.

The fix moved the `unique_id` assignment logic from the `Agent` class into the `Model.register_agent()` method, eliminating the problematic class-level `_ids` defaultdict entirely. Instead of tracking IDs across all model instances in a shared dictionary, each model now maintains its own `agent_id_counter` instance attribute that starts at 1 and increments with each registered agent. This approach ensures that `unique_id` remains unique within each model instance while allowing the garbage collector to properly clean up model objects when they go out of scope, since there are no longer any persistent class-level references to model instances. The fix also replaced `itertools.count` with simple integer incrementation, which avoids upcoming pickle compatibility issues in Python 3.14.
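The per-model counter described above can be sketched as follows (names follow the summary, but this is a simplified illustration, not the actual Mesa implementation):

```python
# Sketch: per-instance ID counter replacing the class-level
# Agent._ids defaultdict, so no cross-model state survives.
class Model:
    def __init__(self):
        self.agent_id_counter = 1  # instance attribute, starts at 1
        self._agents = {}

    def register_agent(self, agent):
        # Only assign a new ID if the agent lacks one, so a second
        # registration can never renumber an agent.
        if agent.unique_id is None:
            agent.unique_id = self.agent_id_counter
            self.agent_id_counter += 1
        self._agents[agent] = None

class Agent:
    def __init__(self, model):
        self.unique_id = None
        model.register_agent(self)

m1, m2 = Model(), Model()
a, b, c = Agent(m1), Agent(m1), Agent(m2)
print(a.unique_id, b.unique_id, c.unique_id)  # 1 2 1
```

Note that IDs are unique only within a model: each model's counter starts fresh, which is exactly the per-model uniqueness the old `_ids` dictionary was trying to provide.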
This is a bugfix for the memory leak identified in #3179.
The problem is that `Agent._ids` is a defaultdict that stores references to model instances. This was done to ensure that `unique_id` is unique relative to a given model. However, since it is a class attribute, this reference persists across the Python process, preventing the entire model blob from being garbage-collected.

There are various solutions, including:
- moving the assignment of `unique_id` into `register_agent`
- having `Model` clean up (not yet explored), including removing the ref in `Agent._ids`

Here, I implement option 3 because, among the options tested, it was the fastest locally. I also moved away from `itertools.count` and instead just use an index that is being incremented. The main reason is that `itertools.count` will not be pickleable in Python 3.14, and `count` is overkill for the simple integer increments needed here anyway.

For reasons that escape me at present, it is still necessary to remove all agents from the model before it can be garbage-collected, at least in the updated test_examples. But when I try a minimal version of Boltzmann, this seems unnecessary. So, there might be some other memory issue remaining.
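Whether a model actually gets collected can be checked with `weakref`. A sketch contrasting the class-level reference pattern with the fixed one (`LeakyModel` and `FixedModel` are illustrative names, not Mesa classes):

```python
import gc
import weakref

class LeakyModel:
    _ids = {}  # class attribute, mimicking Agent._ids
    def __init__(self):
        LeakyModel._ids[self] = 0  # hard reference keeps the model alive

class FixedModel:
    def __init__(self):
        self.agent_id_counter = 1  # per-instance state only

def make(cls):
    # Create an instance, keep only a weak reference to it.
    return weakref.ref(cls())

leaky, fixed = make(LeakyModel), make(FixedModel)
gc.collect()
print(leaky() is None)  # False: the class-level dict still holds it
print(fixed() is None)  # True: nothing references it, so it was freed
```

This is essentially the shape of a regression test for the leak: run a model inside a function, keep a weak reference, and assert the referent is gone after collection.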