Conversation
I really, really like the idea of this. Kind of obvious; weird we didn't think of it earlier.
Oh yes, this is always a chore. I would change one thing: what if we can stop doing this? Instead, why not just access scenario parameters directly?

```python
# Access scenario parameters directly
self.grid = MultiGrid(width, height, torus=True, rng=self.scenario.rng)

# Create agents based on scenario parameters
num_agents = int(width * height * self.scenario.density)
num_minority = int(num_agents * self.scenario.minority_pc)
```

In this design, it's always explicit what's scenario-based and what's just a static model variable. You also prevent making errors in the `Model` init. Implementation-wise, I think you can use a (frozen) dataclass for this. We could even allow passing an equivalent … Anyway, thanks for initiating this, really interesting direction!
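The "(frozen) dataclass" idea mentioned above could look something like the following sketch. The class and field names are made up for illustration, and `random.Random` stands in for the SPEC 7 numpy generator:

```python
import random
from dataclasses import dataclass


# Hypothetical sketch of the "(frozen) dataclass" idea: scenario
# parameters become fixed, typed fields, and the instance is immutable,
# so a model cannot silently mutate its inputs.
@dataclass(frozen=True)
class SchellingScenario:
    density: float
    minority_pc: float
    seed: int = 42

    def rng(self) -> random.Random:
        # random.Random stands in here for a SPEC 7 numpy Generator.
        return random.Random(self.seed)


scenario = SchellingScenario(density=0.8, minority_pc=0.2)
num_agents = int(20 * 20 * scenario.density)  # 320 agents on a 20x20 grid
```

Freezing the dataclass also means any attempt to reassign a parameter raises `FrozenInstanceError`, which makes accidental in-model mutation of scenario inputs fail loudly.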
See my point 3. Basically, I am using a dict internally. I realize now that I can make attribute access work via …
That was also on my list of things to check, but again, I am not sure due to its dynamic nature. I don't want to force users to subclass `Scenario` to make any of this work.

Update:

```python
Scenario = make_dataclass("Scenario", [("a", int), ("b", float)])
```

So, rather cumbersome for no apparent benefit that I can see.
From a policy-analysis point of view, I agree. It makes good sense to separate policies and scenarios. However, this is a very specific vocabulary. Scenario is more common and often used as a catch-all for model input. The other option that I am considering is to just call it …
Overall a clean implementation! I like it!
I think we should explicitly state in the documentation that users must perform any validation or logic on the Scenario before calling …
I am not sure what you mean. Could you elaborate?
I mean doing this straight away will result in:

```python
class MyModel(Model):
    def __init__(self, scenario):
        super().__init__(scenario=scenario)
        if self.scenario.density < 0:
            self.scenario.density = 0
```

Either …
I think either way, we should explicitly mention it in the documentation.
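The documentation advice under discussion can be illustrated with a self-contained sketch. The `Scenario` and `Model` classes below are minimal stand-ins, not the actual Mesa API; the point is the ordering of validation relative to `super().__init__`:

```python
# Stand-ins for the classes under discussion; the point is the ordering
# of validation relative to super().__init__, not the API itself.
class Scenario:
    def __init__(self, **parameters):
        self.__dict__.update(parameters)


class Model:
    def __init__(self, scenario=None):
        self.scenario = scenario


class MyModel(Model):
    def __init__(self, scenario):
        # Validate/normalize the incoming scenario *before* calling
        # super().__init__, so the scenario the base class stores is
        # never patched up after the fact.
        if scenario.density < 0:
            scenario.density = 0.0
        super().__init__(scenario=scenario)


model = MyModel(Scenario(density=-0.3))
assert model.scenario.density == 0.0
```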
Could we use the new …? Maybe don't enforce it now, but first think about time progression and run control.
Out of curiosity, as I try to understand the direction Mesa is currently going: how could scenarios replace `batch_run`? Where would the variation of parameter values happen?
See my answer in the Mesa 4 discussion. In short, … A next step is to have some helper functions to create a collection of scenarios. Since there are ample libraries out there for creating experimental designs (e.g., SALib, `scipy.stats.qmc`, ema_workbench), I am inclined to start with a helper function that takes a dataframe or numpy array with a list of columns and returns an iterable of experiments. I have to think a bit about how to splice seeds into this, but that is a minor detail.
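A hypothetical sketch of the kind of helper described here (the name `scenarios_from_table` is made up, and plain tuples stand in for a dataframe or numpy array): it turns a table of sampled parameter values plus column names into an iterable of scenario dicts.

```python
# Hypothetical helper: turn rows of sampled parameter values plus
# column names into an iterable of scenario dicts.
def scenarios_from_table(rows, columns):
    for row in rows:
        yield dict(zip(columns, row))


# Rows as they might come out of e.g. a Latin hypercube sample.
rows = [(0.7, 0.2), (0.8, 0.3)]
designs = list(scenarios_from_table(rows, ["density", "minority_pc"]))
# designs[0] == {"density": 0.7, "minority_pc": 0.2}
```

Splicing in seeds could then happen in a second pass that crosses each design with a list of seed values.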
@mariuzka, I just added two helper functions to scenario.py that might help clarify my intended direction: …

While adding these two helper functions, I realized that, from a computational-experimentation perspective, there is an important difference between an experiment and any specific stochastic realization of that experiment (i.e., an experiment and a specific seed value). Ideally, there is a way of identifying both easily in post-processing. For example, you might want to group by experiment identifier before taking statistics over the stochastic realizations (i.e., replications using different seeds).

Conceptually, a model need not be aware of either the scenario identifier or the experiment identifier. But when running experiments using, e.g., a …
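The experiment/realization distinction can be made concrete with a toy example (all names and values here are illustrative, not part of the PR): each experiment id is crossed with several seeds, and results are grouped by experiment id before statistics are taken over the replications.

```python
from itertools import product
from statistics import mean

# Each experiment id is crossed with several seeds, giving one run per
# (experiment, seed) pair.
experiments = {0: {"density": 0.7}, 1: {"density": 0.8}}
seeds = [1, 2, 3]

runs = [
    {"experiment_id": eid, "seed": seed, **params}
    for (eid, params), seed in product(experiments.items(), seeds)
]

# In post-processing, group by experiment id before taking statistics
# over the replications; the seed stands in for a model outcome here.
by_experiment = {}
for run in runs:
    by_experiment.setdefault(run["experiment_id"], []).append(run["seed"])
averages = {eid: mean(outcomes) for eid, outcomes in by_experiment.items()}
```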
While potentially the way to go, I think it's good to have a broader discussion about A) scenario generation and B) scenario object storage. Can we keep the helper functions outside this PR and discuss them as follow-up work in a separate thread? Maybe open a new discussion in which we can talk about scenarios and experimentation at a higher level.
Let's just get this merged. I removed the two functions for now.
This PR adds explicit support for running scenarios/experiments to Mesa. As discussed in the Mesa 4 goals discussion, it would be really convenient to have proper support for scenarios in Mesa. This PR is a first stab at achieving this.
The problem this solves
Running computational experiments, or scenarios, with ABMs is at the core of ABM use. However, Mesa does not currently offer first-class support for this. Instead, users have to roll their own, or Mesa makes assumptions (in the GUI and the batch runner). Basically, all scenario parameters are assumed to be passed as keyword arguments to `Model.__init__`. I personally tend to complement this with class-level attributes that I can easily change (e.g., `MyAgent.rationality = "expected_utility"`). A side effect of all this is that if you want to experiment with specific parameters deep down in your agents, you have to pass them first to `Model.__init__` and then to the `__init__` of your custom agent. Moreover, if you want to experiment over a wide range of parameters, your `Model.__init__` becomes unwieldy with too many keyword arguments.

Implementation details
This PR adds a new `Scenario` class and updates the `Model` class to optionally use it. The scenario class takes keyword arguments only, and all keywords become available as attributes. I also decided to explicitly encapsulate the seeding of the random number generator into the scenario class using the SPEC 7 compliant `rng` argument.

With all in place, `model.scenario` becomes your single source of truth for scenario parameters (hence the explicit inclusion of `rng`). No need to pass stuff around anymore via `__init__`s or tinker with class variables.

API
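A minimal sketch, based only on the description above, of how the described behaviour could look. This is not the PR's actual implementation: the class body is an assumption, and `random.Random` stands in for the SPEC 7 numpy generator.

```python
import random


# Sketch: keyword arguments are stored in an internal dict and exposed
# as attributes, and rng handling lives on the scenario itself.
class Scenario:
    def __init__(self, rng=None, **parameters):
        self._parameters = dict(parameters)
        self.rng = random.Random(rng)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self._parameters[name]
        except KeyError:
            raise AttributeError(name) from None


scenario = Scenario(rng=42, density=0.8, minority_pc=0.2)
assert scenario.density == 0.8  # keyword became an attribute
```

With something like this, `model.scenario.density` and `model.scenario.rng` are both read from one object, which is the "single source of truth" property the PR description emphasizes.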