
Attempt to Optimize Wolf-Sheep Performance #2723

Open · wants to merge 4 commits into main

Conversation


@gadhvirushiraj commented Mar 16, 2025

Summary

Attempt at optimizing Sheep Agent movement using Mesa's PropertyLayer and vectorized operations to determine the best moves.

Motive

The current movement logic in the wolf-sheep model performs redundant checks when agents search their neighborhood for valid moves. This optimization builds on changes from #2503.

Implementation

Two PropertyLayers, sheep_pos and grass_pos, are added and kept updated. The agent determines its best move by extracting a subsection of the PropertyLayer (neighbor radius of 1) and applying vectorized boolean operations. Refer to issue #2565.

However, performance drops for WolfSheep (large), likely because the overhead of updating a larger grid offsets the benefits. A larger grid with higher agent density might yield better results, though I haven't verified this.
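
A minimal sketch of the idea, using plain NumPy arrays to stand in for the PropertyLayer data; the grid size, the boundary handling, and the helper name best_sheep_move are illustrative assumptions rather than the exact code in this PR:

```python
import numpy as np

# Illustrative stand-ins for the PropertyLayer data
# (True where a sheep is present / where grass is fully grown).
sheep_pos = np.zeros((20, 20), dtype=bool)
grass_pos = np.ones((20, 20), dtype=bool)


def best_sheep_move(x, y, sheep_pos, grass_pos, rng=np.random.default_rng()):
    """Pick a neighboring cell with grown grass and no sheep, if one exists."""
    h, w = grass_pos.shape
    # Extract the 3x3 subsection around (x, y), clipped at the grid edges.
    x0, x1 = max(x - 1, 0), min(x + 2, h)
    y0, y1 = max(y - 1, 0), min(y + 2, w)
    sub_grass = grass_pos[x0:x1, y0:y1]
    sub_sheep = sheep_pos[x0:x1, y0:y1]

    # Vectorized boolean test: grass present and cell not occupied by a sheep.
    candidates = sub_grass & ~sub_sheep
    idx = np.argwhere(candidates)
    if len(idx) == 0:
        return None  # no attractive neighbor; caller falls back to a random move
    dx, dy = idx[rng.integers(len(idx))]
    return x0 + dx, y0 + dy
```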

Usage Examples

--

Additional Notes

I tested multiple variations of this approach. Full-layer vector operations proved expensive compared to the original get_neighborhood method.

A further improvement could be a better way to update grass_pos (the grass-tracking layer) and to update wolf_pos directly from the agent's move(). Edit: implemented.
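
For the "update directly from move()" idea, a rough sketch of keeping a position layer in sync; the attribute names (self.model.wolf_pos, .data) and the select_new_position helper are assumptions for illustration, not the PR's exact code:

```python
def move(self):
    """Move the agent and keep its position layer consistent."""
    new_pos = self.select_new_position()          # hypothetical helper picking the target cell
    self.model.wolf_pos.data[self.pos] = False    # clear the old cell in the position layer
    self.model.wolf_pos.data[new_pos] = True      # mark the new cell
    self.model.grid.move_agent(self, new_pos)     # actual move via the grid API
```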


Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
| --- | --- | --- | --- |
| BoltzmannWealth | small | 🔵 -1.0% [-1.5%, -0.5%] | 🔵 +0.3% [+0.2%, +0.5%] |
| BoltzmannWealth | large | 🔵 -2.6% [-3.4%, -1.8%] | 🔵 -1.5% [-2.2%, -0.7%] |
| Schelling | small | 🔵 -2.1% [-2.3%, -1.9%] | 🔵 -0.2% [-0.3%, -0.1%] |
| Schelling | large | 🔵 -0.4% [-9.0%, +8.7%] | 🔵 +1.1% [-0.4%, +2.5%] |
| WolfSheep | small | 🔵 -1.4% [-1.9%, -0.9%] | 🔵 -2.9% [-9.0%, +4.0%] |
| WolfSheep | large | 🔵 -0.5% [-0.9%, -0.2%] | 🔴 +40.5% [+39.1%, +42.2%] |
| BoidFlockers | small | 🔵 +0.8% [+0.1%, +1.4%] | 🔵 -0.5% [-0.7%, -0.3%] |
| BoidFlockers | large | 🔵 -0.1% [-0.7%, +0.5%] | 🔵 +0.1% [-0.1%, +0.4%] |

@gadhvirushiraj gadhvirushiraj marked this pull request as ready for review March 20, 2025 14:12

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
| --- | --- | --- | --- |
| BoltzmannWealth | small | 🔴 +5.9% [+4.6%, +7.1%] | 🔵 +0.2% [-0.1%, +0.4%] |
| BoltzmannWealth | large | 🔵 +17.8% [-0.7%, +53.8%] | 🔵 +3.4% [+0.0%, +6.9%] |
| Schelling | small | 🔵 +0.5% [+0.2%, +0.7%] | 🔵 +0.4% [+0.3%, +0.6%] |
| Schelling | large | 🔵 +4.8% [+0.9%, +11.3%] | 🔵 +4.6% [+2.1%, +7.5%] |
| WolfSheep | small | 🔴 +5.2% [+4.6%, +5.7%] | 🔴 +163.2% [+144.1%, +182.2%] |
| WolfSheep | large | 🔴 +4.7% [+3.6%, +5.7%] | 🔴 +153.9% [+148.5%, +159.1%] |
| BoidFlockers | small | 🔵 +0.2% [-0.6%, +1.2%] | 🔵 -1.0% [-1.1%, -0.8%] |
| BoidFlockers | large | 🔵 -1.4% [-1.9%, -0.9%] | 🔵 -0.9% [-1.1%, -0.7%] |

@EwoutH (Member) commented Mar 22, 2025

@gadhvirushiraj Thanks a lot for your PR! Our benchmarks currently show a performance decrease for both the small and the large size. What would you like us to do with this PR?

@gadhvirushiraj (Author) commented:
Hello @EwoutH,

I've been digging into the recent performance drop and noticed that it doesn't seem to come from updating positions in the PropertyLayer, but rather from the operations within move(). Am I overlooking something here? Is there a better way?

I'm open to exploring alternative approaches if you have any suggestions. This PR has been a good learning opportunity for me in getting to grips with Mesa's modeling :) Thanks

@EwoutH (Member) commented Mar 23, 2025

I would start with profiling both the old and the new implementation, to see where compute time is spent in each.
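
For reference, one way to do that with the standard library's cProfile; the import path and the number of steps are assumptions and may need adjusting to match the benchmark configuration:

```python
import cProfile
import pstats

# Import path assumed; use whichever module defines the WolfSheep model being benchmarked.
from mesa.examples.advanced.wolf_sheep.model import WolfSheep

model = WolfSheep()  # default parameters; adjust to match the benchmark sizes

with cProfile.Profile() as prof:
    for _ in range(100):  # a fixed number of steps is enough to compare hot spots
        model.step()

# Show the 20 most expensive call sites by cumulative time.
pstats.Stats(prof).sort_stats("cumulative").print_stats(20)
```

Running this once against main and once against this branch should show whether the extra time is spent in move() or in the PropertyLayer updates.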
