Bridging HCI, NLP, and policymaking to explore how LLM agent simulations can become genuinely useful tools for policy.
Large Language Models are rapidly evolving from text generators into reasoning systems that can act as autonomous agents. When placed in social contexts, these agents display emergent behaviors such as forming coalitions, spreading information, and making collective decisions.
Policymaking is fundamentally collective, high-stakes, and uncertain. Unlike laboratory science, it rarely allows for controlled experimentation. LLM agent simulations offer a new kind of in silico testbed, enabling policymakers to explore interventions, stress-test communication strategies, and surface unintended consequences across large, diverse populations before acting in the real world.
Drawing on HCI traditions such as participatory and user-centered design, we argue that the value of these simulations does not come "out of the box." Instead, it emerges through iterative, stakeholder-engaged design—where policymakers build trust, probe system boundaries, and continuously recalibrate expectations.
How can LLM agent simulations move beyond technical demonstrations to become practical tools for policymaking?
How can simulations be designed and interpreted responsibly, ensuring appropriate reliance, transparency, and fairness?
How can simulations and policy processes be developed simultaneously, so that each informs and adapts to the other?
We invite position papers (2–4 pages) or short reports describing case studies, design explorations, methodological insights, or reflections on using LLM agent simulation for policy. Encore submissions of relevant published work are welcome.
2–4 page position papers or short reports following the ACM template. Papers will be reviewed for relevance and diversity of perspectives.
All accepted papers will be published on the workshop website and in the workshop proceedings via CEUR-WS. Selected authors will be invited to extend their work for established venues.
At least one author of each accepted paper must register for and attend the workshop.
Microsoft Research AI Frontiers
As AI agents move from personal assistants to participants in shared digital marketplaces, questions of safety become questions of social reasoning: when should an agent act, when should it pause, and whose interests are affected? This talk centers on Magentic Marketplace, a simulation environment for studying how agents behave when they interact under shared constraints, incentives, and competition. I contextualize this work relative to prior research on personal computer‑use agents, including Magentic‑UI and From Interaction to Impact, which show the importance of reasoning about context, consent, and irreversible actions. Magentic Marketplace extends these concerns to societies of agents, revealing how incentives and coordination can amplify risk and produce failure modes that do not appear at the individual level. Together, this work shifts attention from task success to socially aware agent behavior, framing responsible AI deployment as a problem of social reasoning rather than capability alone.
Amanda (she/her) is a researcher at Microsoft Research AI Frontiers, where she conducts research at the intersection of AI and Human-Computer Interaction, building agentic workflows and experiences. Before joining Microsoft, Amanda researched and productionized technologies for computational UI understanding, deploying them into multiple widely used accessibility features. Amanda earned her Ph.D. from the University of Washington, where she was advised by Amy Ko and James Fogarty; her research focused on developing novel interfaces for UX/UI designers that leveraged AI and program analysis. Before grad school, Amanda spent three years as a Software Engineer in the Microsoft Dynamics group.
February 20, 2026
Anywhere on Earth (AoE)
March 19, 2026
Anywhere on Earth (AoE)
Thursday, April 16, 2026, 14:15 - 18:00 CEST
Barcelona, Spain
Centre de Convencions Internacional de Barcelona: P1 - Room 111
Contact: polisim.workshop@gmail.com