Replies: 8 comments 10 replies
-
@EwoutH: This is great! To me, the hard challenge for a user model creation method is how you build the ABM ecosystem so that users can rapidly put together valid models of their phenomenon of concern. The challenge from a repo perspective is always maintenance, where there seems to be a trade-off between keeping core Mesa simple and lightweight (easier to maintain) and adding a lot of features (harder to maintain). Typically the idea was to have a larger ecosystem, but as we can see with Mesa-examples, Mesa-frames, and mesa-geo, it becomes very difficult to maintain them and keep them compatible. I tried to start MesaBehaviors but couldn't keep up with it. There is also Reusable Building Blocks for ABMs, and I even started MesaData.

Accepting that I am bringing up the tangential maintenance challenge and not addressing your proposal directly, I think that is the underlying friction. On coming up with better ways to integrate features, my brainstorming thought is that when we put something in a separate repo it becomes another problem that is easily ignored. I wonder if there is a way to link repos dynamically, so the overhead of CI/CD is kept in the main repo while connected repos/libraries effectively inherit the main repo's management but stay lightweight, in the sense that they contain only the relevant code.

Short version: this is great! To me the goal is a thriving ecosystem where valid ABMs can be rapidly assembled. My only concern is maintenance, and I would go so far as to say the RL effort Harsh worked on falls under this same umbrella. I will dig around and see whether there are devops best practices for linking repos to reduce overhead (if that is even a thing). My view is that all we can do is explore-exploit on this hard problem and learn by doing.
-
Not sure whether these would be useful, but just to add to the list of behavior frameworks based on cognitive science theories:
-
Hi @EwoutH and everyone! I'm a new contributor (an undergrad from IIT Dhanbad) aiming for GSoC 2026, posting here to keep the Behavioral Framework ideas centralized. I've been digging into the codebase to prepare (I just submitted PR #2948 for some doc fixes), and I have a proposal based on analyzing the limitations in the standard Wolf-Sheep and mesa-llm examples (Civil Violence). I noticed the initial post mentions under the Action framework that "Actions can have duration". I have thought of some ideas:

1. The "Time-Aware" Agent (Action Framework). What if every agent had a "Watch"? Currently, actions in examples like Wolf-Sheep are forced to be instant (step() forces Move → Eat → Reproduce in one tick). I propose that instead of hardcoded counters (like jail_sentence_left in the Epstein example), agents could trigger Durative Actions. This effectively gives every agent a "Watch", allowing it to be in a "Busy" state across ticks and directly addressing the framework's need for action timing (see the sketch after this comment).

2. LLM-Driven Internal State (State Management). It was mentioned that State Management should handle transitions like "sleeping/awake". I propose expanding this so an LLM can dynamically drive those transitions.

I'd love to hear your thoughts on this approach!
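To make the Durative Action idea concrete, here is a minimal sketch of what I mean. All names here (DurativeAction, busy_with, start) are hypothetical, not part of any existing Mesa API:

```python
# Hypothetical sketch of a "Time-Aware" agent: instead of a hand-rolled
# countdown like jail_sentence_left, the agent starts an action with a
# duration and stays "Busy" until it completes.

class DurativeAction:
    def __init__(self, name, duration):
        self.name = name
        self.remaining = duration

class TimeAwareAgent:
    def __init__(self):
        self.busy_with = None

    def start(self, action):
        self.busy_with = action

    def step(self):
        if self.busy_with is not None:         # the agent's "Watch" is running
            self.busy_with.remaining -= 1
            if self.busy_with.remaining <= 0:
                self.busy_with = None          # action finished; free next tick
            return                             # busy agents skip decision-making
        # ... normal decision logic runs only when the agent is free ...

# Replacing Epstein's jail counter could then look like:
# agent.start(DurativeAction("serve_sentence", duration=sentence_length))
```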
-
Hi @EwoutH and everyone — I've been following this thread and found the discussion around "building blocks below behaviors" really interesting, especially the idea of providing a kind of shared vocabulary rather than concrete behavior implementations.

One thing I've noticed while building Mesa models is that a lot of behavioral logic ends up scattered across step(), scheduler ordering, and small state variables. It works, but over time it becomes harder to tell why an agent is behaving a certain way, or to change the decision logic without touching a lot of code. Framing states, decisions, and actions as explicit concepts feels useful mainly because it surfaces those assumptions, not because it forces a specific theory or pattern. In that sense, it feels closer to making models easier to reason about than to "adding features."

I was curious about one design aspect, though: do you imagine these components staying completely optional and out of the way of existing Mesa patterns, or do you see them gradually nudging users toward a more declarative style once their models grow more complex? That trade-off seems closely tied to maintainability: keeping Mesa lightweight while still giving users a cleaner path when step() logic starts to sprawl. I'd like to hear how everyone thinks about that.
-
Hi everyone, I'm an undergraduate contributor preparing for a possible GSoC 2026 application, and I'm looking into behavioural modelling patterns in Mesa. While reading the Behavioural Framework discussion, I noticed that goal-directed spatial navigation agents, such as drones or robots moving toward destinations in constrained environments, fit well with the proposed State, Decision, Action, Goal structure.

I'm curious how this type of navigation-oriented agent behaviour could be represented using the new behavioural abstractions, instead of hardcoded movement logic inside step(). Would goal-directed navigation be seen as a relevant behavioural pattern for this framework?
-
I've been experimenting with modifying the Wolf-Sheep example to explore slightly more stateful, needs-driven behavior (e.g., separating hunger, reproduction readiness, and risk-avoidance signals instead of handling everything inline in step()). As decision logic grows, I've noticed that step() quickly becomes a mix of state evolution, environment sensing, and action selection, which makes it harder to reason about agent complexity as behaviors scale.

I'm curious whether others have experimented with patterns in Mesa for structuring internal agent state and decision logic more cleanly. Are there recommended approaches for keeping agent logic maintainable as behavioral complexity grows?
-
Hi @EwoutH and everyone — I've been building on this discussion as part of preparing a GSoC 2026 proposal, and I wanted to share something concrete rather than just ideas. Following @EwoutH's advice to actually build models and see what's missing, I spent time extending the Wolf-Sheep example to surface where step() logic genuinely becomes a problem. The simplest version of what I kept running into is the current Wolf-Sheep wolf.step(), where everything is evaluated every tick (see the sketch at the end of this comment).

Each condition is really expressing a reactive intent ("respond when X becomes true"), but the step-based model forces it to be checked regardless of whether anything changed. In small models this is fine. But as behavioral complexity grows (as @SAKSHI-DOT693 also noted), step() becomes a mix of state evolution, environment sensing, and action selection that's genuinely hard to decompose.

The component framing in this discussion maps cleanly onto that problem: the State system could track what changed, and the Event system could route those changes to the right Action, rather than every agent polling every condition every tick.

One specific design question I'm exploring for the proposal: should trigger evaluation live at the agent level or be batched at the model level? Agent-level is simpler to implement and test, but a model-level EventManager could integrate more naturally with DataCollector and the existing scheduler hooks (and potentially be more cache-friendly at scale). I haven't seen this addressed directly in #2526 or #2529; curious whether you or @tpike3 have a strong intuition either way.

Happy to share the extended Wolf-Sheep code if it would be useful as a reference point for what the behavioral abstractions need to handle.
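For reference, here is roughly the step() shape I mean; a sketch reconstructed from the mesa-examples Wolf-Sheep (exact helper and attribute names, such as random_move and wolf_gain_from_food, vary between versions):

```python
# Sketch of the classic Wolf-Sheep wolf step: every concern is re-evaluated
# on every tick, whether or not anything relevant has changed.
def step(self):
    self.random_move()                        # always move
    self.energy -= 1                          # always pay the movement cost

    # Poll the current cell for sheep, every tick, even if nothing moved here.
    sheep = [a for a in self.model.grid.get_cell_list_contents([self.pos])
             if isinstance(a, Sheep)]
    if sheep:
        prey = self.random.choice(sheep)
        self.energy += self.model.wolf_gain_from_food
        prey.remove()

    # Poll death and reproduction conditions, every tick.
    if self.energy < 0:
        self.remove()
    elif self.random.random() < self.model.wolf_reproduce:
        self.energy /= 2
        self.reproduce()                      # hypothetical helper to spawn a pup
```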
-
Hi @EwoutH and everyone, I built a needs-based Wolf-Sheep extension to see where Mesa's abstractions help and where they create friction. PR: mesa/mesa-examples#456
The Mesa 4.0a0 Action system (PR #3461) could help — if actions have duration, needs-based agents could start a "forage" that takes 3 ticks instead of completing instantly.
-
Builds on:
Motivation and Goal
The core goal is to provide Mesa users with a foundational framework for modeling agent behavior that bridges theoretical behavioral models with practical implementation needs. We want to abstract essential concepts - how agents perceive and maintain information about themselves and their environment (states), how they select what to do (decisions), and how they execute those choices (actions) - while leaving room for domain-specific extensions. The framework should make it easy to implement simple behaviors while supporting the complexity needed for sophisticated agents, always maintaining clear connections to established behavioral theories.
Conceptual Idea
Complex agent behavior emerges from the interaction between what agents know (their internal states and environmental perception), how they decide what to do (their decision-making mechanisms), and what they can do (their available actions and their effects). These components form a dynamic system where states influence decisions, decisions trigger actions, and actions modify states. By providing a flexible framework for these core components, we can support various behavioral theories while maintaining consistency and reducing implementation overhead.
Potential Behavioral Frameworks/Theories
Belief-Desire-Intention (BDI)
Rooted in human practical reasoning, BDI models agents as having beliefs about their world, desires they want to achieve, and intentions they commit to pursuing. This creates a natural flow from knowledge to goals to actions, making it particularly suitable for modeling rational agents like humans in social systems or autonomous robots. The framework excels at representing deliberative decision-making but requires careful balance between reactivity and commitment to intentions.
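As a rough illustration (not a proposed API; all names here are hypothetical), a BDI deliberation cycle could look like:

```python
# Minimal BDI loop sketch: perceive -> update beliefs -> deliberate over
# desires -> commit to intentions -> act. Purely illustrative.

class BDIAgent:
    def __init__(self):
        self.beliefs = {}        # what the agent currently holds true
        self.desires = []        # candidate goals, each assumed to provide
                                 # .priority, .feasible(beliefs), .next_action(beliefs)
        self.intentions = []     # goals the agent has committed to

    def perceive(self, observations):
        self.beliefs.update(observations)

    def deliberate(self):
        # Drop intentions invalidated by new beliefs, then commit to the
        # highest-priority feasible desire not already adopted. This is where
        # the balance between reactivity and commitment lives.
        self.intentions = [i for i in self.intentions if i.feasible(self.beliefs)]
        for desire in sorted(self.desires, key=lambda d: d.priority, reverse=True):
            if desire.feasible(self.beliefs) and desire not in self.intentions:
                self.intentions.append(desire)
                break

    def act(self):
        if self.intentions:
            return self.intentions[0].next_action(self.beliefs)
```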
Needs-Based Architecture
Based on psychological theories of human motivation, this framework models behavior as arising from the drive to satisfy various needs. Agents maintain multiple need levels (e.g., hunger, safety, social connection) that change over time and through interactions. Actions are selected based on which needs are most pressing, creating naturally emerging behavior patterns. This approach works well for modeling entities with competing internal drives, from animals to human consumers.
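A toy sketch of needs-based selection (illustrative names only, not a committed design): each need grows every tick, and the agent services whichever is most pressing:

```python
# Needs-based selection sketch: need levels in [0, 1] grow over time;
# the agent acts on the most urgent one.

class NeedsAgent:
    def __init__(self):
        self.needs = {"hunger": 0.2, "safety": 0.1, "social": 0.4}
        self.growth = {"hunger": 0.05, "safety": 0.01, "social": 0.02}

    def step(self):
        for need, rate in self.growth.items():        # needs build up each tick
            self.needs[need] = min(1.0, self.needs[need] + rate)
        most_pressing = max(self.needs, key=self.needs.get)
        self.satisfy(most_pressing)

    def satisfy(self, need):
        self.needs[need] = max(0.0, self.needs[need] - 0.5)   # acting reduces it
```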
State-Action-Reward-State-Action (SARSA)
A learning-based framework where agents learn optimal behaviors through experience. Agents select actions based on their current state, observe the resulting new state and reward, and update their behavior accordingly. This creates agents that can adapt to their environment and learn effective strategies over time. Particularly useful for modeling systems where optimal behavior isn't known beforehand but can emerge through interaction.
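The core of SARSA is the on-policy update Q(s, a) <- Q(s, a) + alpha * (r + gamma * Q(s', a') - Q(s, a)), where a' is the action the agent actually takes next. A minimal tabular sketch (the environment interface is assumed, not shown):

```python
from collections import defaultdict
import random

# Tabular SARSA sketch: epsilon-greedy action selection plus the standard
# on-policy update. Illustrative only, not a proposed Mesa API.

class SarsaAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)                  # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:           # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next, a_next):
        # a_next is chosen by the same policy, which is what makes this
        # SARSA rather than Q-learning.
        target = r + self.gamma * self.q[(s_next, a_next)]
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```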
Stimulus-Response Patterns
A straightforward framework based on behavioral psychology, where specific environmental stimuli trigger specific responses. While simple, this can create surprisingly complex emergent behavior when multiple patterns interact. This approach is excellent for modeling reactive behavior in ecological systems or basic social interactions, especially when computational efficiency is important.
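A sketch of the pattern (names illustrative): an ordered list of (stimulus, response) rules where the first match fires:

```python
# Stimulus-response sketch: reactive rules checked in priority order;
# the first matching rule fires. Not a proposed Mesa API.

class ReactiveAgent:
    def __init__(self):
        self.rules = [
            (lambda p: p.get("predator_nearby"), self.flee),
            (lambda p: p.get("food_here"), self.eat),
            (lambda p: True, self.wander),            # default fallback
        ]

    def step(self, percepts):
        for stimulus, response in self.rules:
            if stimulus(percepts):
                response()
                break

    def flee(self): ...
    def eat(self): ...
    def wander(self): ...
```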
Motivation-Drive Theory
Similar to needs-based architecture but focused on internal drives that create motivation for specific behaviors. Drives build up over time and are reduced through appropriate actions. This framework is particularly good at modeling cyclical behaviors and competition between different motivations. Works well for modeling both biological systems (eat-rest cycles) and social behaviors (work-leisure balance).
Potential Mesa Components
State System
The foundation for tracking what agents know and believe about themselves and their environment. Handles both discrete states (like "sleeping"/"awake") and continuous variables (like energy levels), maintaining history and managing transitions. The system provides efficient storage and retrieval of state information while supporting both simple direct state access and complex state relationships.
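A minimal sketch of the kind of bookkeeping this component would own (hypothetical names, not a committed design):

```python
# State store sketch: current values plus a transition history, so other
# components can ask both "what is X?" and "when did X change?".

class StateSystem:
    def __init__(self):
        self._values = {}
        self._history = []                     # (tick, name, old, new)

    def set(self, name, value, tick):
        old = self._values.get(name)
        if old != value:                       # record transitions only
            self._history.append((tick, name, old, value))
        self._values[name] = value

    def get(self, name, default=None):
        return self._values.get(name, default)
```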
Decision System
Responsible for selecting what actions an agent should take based on their current states and goals. Provides different decision-making mechanisms (rule-based, utility maximization, learning) that can be swapped out or combined. The system handles priority management and can incorporate both reactive and deliberative decision making.
Action System
Manages the execution of actions, handling timing, resources, and outcomes. Actions can have duration, prerequisites, and both immediate and delayed effects. The framework supports action interruption, parallel actions, and action sequences, while managing resource constraints and conflicts.
Goal System
Represents desired states or conditions that agents want to achieve. Goals can be binary (achieved/not achieved) or continuous (degree of satisfaction), with dynamic priorities. The system handles multiple competing goals and can represent both long-term objectives and immediate needs.
Event System
Facilitates communication between components and tracks significant changes in the agent's states, actions, and environment. Provides mechanisms for components to react to changes without tight coupling. This enables both internal agent processes and inter-agent communication.
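A publish/subscribe sketch of the idea (hypothetical names, not a committed design): components register callbacks for event types instead of holding direct references to each other:

```python
from collections import defaultdict

# Event bus sketch: decouples producers of state changes from the
# components that react to them.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, **payload):
        for callback in self._subscribers[event_type]:
            callback(**payload)

# Usage: bus.subscribe("energy_low", lambda agent: agent.seek_food())
#        bus.publish("energy_low", agent=some_wolf)
```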
Component Usage by Framework
Different components map to different behavioral frameworks. Each framework can be implemented using some or all of these components, but uses them in different ways: BDI uses the State system to track beliefs about the world, while needs-based architectures use it to track need levels. Some frameworks don't use every component; Stimulus-Response, for example, has no real need for the Goal system. This mapping demonstrates that the proposed components are flexible enough to implement various behavioral theories while maintaining a consistent architecture.