Brian Parsons

Multi AI Agent Panel
Overview & Approach

Role: Senior UX Researcher & Strategist
Company: PlayStation
Team: Independent project collaboration with Stanford AI researcher
Timeline: ~6 Weeks
Budget: Internal pilot (no allocated funding)

UX Designers aim to make products everyone can enjoy — but without direct feedback from people with disabilities, inclusive design becomes guesswork. Compliance standards like WCAG and EAA are vast and complex, and limited research budgets often prevent early-stage testing.

How might we give UX Designers feedback on their product ideas at the earliest stages of the design process, so their products can be both as compliant and as inclusive as possible?

 

 

Inspiration Through Experimentation: Learning from AI Communities

During my coursework in “UI/UX Design for AI Products”, I studied a Stanford experiment exploring how various AI agents could interact with one another as though they were members of a small digital community. These agents demonstrated emergent social behaviors—collaborating, sharing information, and even forming “relationships” that mimicked human dynamics.

This experiment inspired a new approach to inclusive design: if AI agents could simulate social interaction, perhaps they could also simulate different user perspectives. I envisioned creating a panel of AI-driven personas, each representing a distinct disability, capable of evaluating digital products and providing feedback as real accessibility consultants might.

Visualization of the Stanford experiment on social generative AI agents

 

 

Simulating Inclusive Feedback with AI

Insight: UX Designers needed an accessible, low-cost way to test early design concepts for inclusivity without relying on time-intensive research panels.

Action:

  • Conceptualized an AI-powered feedback system inspired by Stanford’s “social generative agents” experiment

  • Collaborated with a Stanford AI researcher to define persona parameters and interaction logic (see the sketch below)

Result:

  • Produced synthesized research reports detailing accessibility challenges across gaming products
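
To give a feel for what those parameters covered, here is a minimal sketch in Python. The field names and example values are illustrative assumptions, not the actual persona definitions from the project:

```python
# Illustrative sketch only: field names and example values are assumptions,
# not the project's actual persona parameters.
from dataclasses import dataclass


@dataclass
class PanelPersona:
    """One simulated panelist on the accessibility feedback panel."""
    name: str                   # hypothetical display name for the agent
    disability: str             # e.g. "low vision", "limited fine motor control"
    assistive_tech: list[str]   # tools the persona relies on while gaming
    gaming_context: str         # platforms and genres the persona plays
    feedback_style: str         # interaction logic: how the persona gives feedback


panel = [
    PanelPersona(
        name="Low-Vision Panelist",
        disability="low vision",
        assistive_tech=["screen magnification", "high-contrast mode"],
        gaming_context="console action-adventure titles",
        feedback_style="flags contrast, text size, and HUD legibility issues",
    ),
    PanelPersona(
        name="Motor-Accessibility Panelist",
        disability="limited fine motor control",
        assistive_tech=["adaptive controller", "full button remapping"],
        gaming_context="console play with remapped inputs",
        feedback_style="flags timing windows, button holds, and combo demands",
    ),
]
```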

 

 

Building the MVP in CrewAI

Insight: Design feedback needed to go beyond compliance — it had to reflect real-world experiences of players with disabilities.

Action:

  • Coded AI personas modeled after real gaming accessibility consultants I had previously worked with

  • Built an additional UX Research Agent to aggregate findings into a structured, human-readable report

  • Created a functional prototype using CrewAI, connecting to Serper for large-scale data retrieval and ChatGPT for analysis (see the wiring sketch below)

Result:

  • Generated a detailed report identifying the top 20 accessibility issues faced by gamers with disabilities

  • Validated that the model could simulate human-like panel feedback and surface relevant accessibility concerns
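
For concreteness, below is a minimal sketch of how such a crew can be wired in CrewAI, assuming its standard Agent/Task/Crew API, the Serper search tool from crewai_tools, and OpenAI models as the default LLM backend. The two panelist personas are illustrative stand-ins for the consultant-based personas I actually coded:

```python
# Minimal sketch of the MVP's shape. Persona details are illustrative
# stand-ins, not the actual consultants' profiles.
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

# CrewAI's default OpenAI backend expects OPENAI_API_KEY in the environment;
# SerperDevTool expects SERPER_API_KEY.
search = SerperDevTool()  # large-scale web retrieval

low_vision_panelist = Agent(
    role="Low-Vision Gaming Accessibility Consultant",
    goal="Surface accessibility barriers that affect players with low vision",
    backstory=(
        "A lifelong gamer with low vision who evaluates games for HUD "
        "legibility, contrast, and text scaling."
    ),
    tools=[search],
)

motor_panelist = Agent(
    role="Motor-Accessibility Gaming Consultant",
    goal="Surface barriers for players with limited fine motor control",
    backstory=(
        "Plays with an adaptive controller and remapped inputs; focuses on "
        "timing windows, button-hold requirements, and input complexity."
    ),
    tools=[search],
)

ux_researcher = Agent(
    role="UX Research Agent",
    goal="Aggregate panelist findings into a structured, human-readable report",
    backstory="Synthesizes qualitative feedback into prioritized themes.",
)

panel_tasks = [
    Task(
        description="Research and list accessibility issues reported by gamers with low vision.",
        expected_output="A list of issues with brief evidence for each.",
        agent=low_vision_panelist,
    ),
    Task(
        description="Research and list accessibility issues reported by gamers with motor disabilities.",
        expected_output="A list of issues with brief evidence for each.",
        agent=motor_panelist,
    ),
    Task(
        description="Merge the panelists' findings into a ranked report of the top accessibility issues.",
        expected_output="A structured report of the top 20 accessibility issues.",
        agent=ux_researcher,
    ),
]

crew = Crew(
    agents=[low_vision_panelist, motor_panelist, ux_researcher],
    tasks=panel_tasks,
    process=Process.sequential,  # panelists research first, researcher synthesizes last
)

print(crew.kickoff())
```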

Creating the tasks each agent needed to perform in CrewAI.


 

Report generated in Cursor

Iterating on Context and Actionability

Insight: While accurate, the initial reports lacked actionable design feedback and contextual understanding of designers’ goals.

Action:

  • Analyzed AI outputs and identified gaps in specificity and contextual awareness

  • Documented limitations of multi-agent systems in replicating nuanced human reasoning

Result:

  • Pivoted focus toward developing in-house accessibility design guidelines that merged WCAG and EAA standards with product-specific context

  • Created tailored guidance for web, mobile, console, and controller experiences — empowering designers with clear, actionable insights
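
To make that guidance easy to query by surface, one plausible approach is to store each guideline as structured data. The sketch below illustrates that idea and is not the actual guideline library; the WCAG success criteria named are real, but their platform mappings and wording here are illustrative assumptions:

```python
# Illustrative only: a sketch of how platform-specific guidance could be
# organized as data. The WCAG references are real success criteria, but the
# platform mappings and guidance wording are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Guideline:
    topic: str
    wcag_ref: str           # related WCAG 2.x success criterion, where one exists
    platforms: list[str]    # which product surfaces the guidance applies to
    guidance: str           # product-specific, actionable phrasing for designers


GUIDELINES = [
    Guideline(
        topic="Text contrast",
        wcag_ref="WCAG 1.4.3 Contrast (Minimum)",
        platforms=["web", "mobile", "console"],
        guidance="Body text should meet a 4.5:1 contrast ratio against its background.",
    ),
    Guideline(
        topic="Input alternatives",
        wcag_ref="WCAG 2.5.6 Concurrent Input Mechanisms",
        platforms=["console", "controller"],
        guidance="Allow full button remapping; avoid requiring simultaneous holds.",
    ),
]


def for_platform(platform: str) -> list[Guideline]:
    """Return the subset of guidance relevant to one product surface."""
    return [g for g in GUIDELINES if platform in g.platforms]
```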

 

 

Reframing the Outcome

Insight: Failed experiments can spark valuable design insight and organizational value.

Action:

  • Communicated findings transparently to leadership to prevent premature investment in underdeveloped AI tools

  • Repurposed the experiment’s technical foundation for rapid desk-research automation, potentially saving researchers hours of manual work (sketched below)

Result:

  • Demonstrated the potential of multi-agent AI systems for structured, scalable research support

  • Reinforced the irreplaceable value of human UX researchers in interpreting nuance, intent, and business context
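
As a rough sketch of that repurposing, the same CrewAI-plus-Serper stack collapses to a single researcher agent for first-pass desk research; the agent definition, prompt wording, and topic below are hypothetical:

```python
# Hypothetical sketch of the repurposed pipeline: one agent automating the
# first pass of desk research, with a human researcher verifying the output.
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

desk_researcher = Agent(
    role="Desk Research Agent",
    goal="Collect and summarize published findings on a given research topic",
    backstory="Automates the first pass of literature and web scanning.",
    tools=[SerperDevTool()],
)

brief = Task(
    # {topic} is interpolated from the inputs passed to kickoff() below
    description="Summarize recent published findings on {topic}, with sources.",
    expected_output="A sourced summary a human researcher can verify and extend.",
    agent=desk_researcher,
)

crew = Crew(agents=[desk_researcher], tasks=[brief])
print(crew.kickoff(inputs={"topic": "colorblind-friendly UI patterns in games"}))
```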

 

 

Results at a Glance

Aspect | Before | After
Design Evaluation | Limited to internal team’s perspective | Multi-agent AI system simulating diverse user feedback
Accessibility Insight | Reactive to legal or testing feedback | Proactive identification of accessibility barriers via AI agents
Research Efficiency | Manual user testing only | Automated scenario testing with persona-specific agents
Innovation Scope | Single-lens UX evaluation | Multi-perspective approach inspired by inclusive design panels
Organizational Impact | Conceptual prototype | Framework adopted for future inclusive AI research initiatives
 

 

Key Skills Demonstrated

  • Emergent Technology Exploration: Rapidly learned and applied new AI frameworks to UX research workflows.

  • Human-Centered Systems Thinking: Framed technical innovation through accessibility and inclusivity principles.

  • Resilient Experimentation: Translated early failure into organizational learning and strategic redirection.

  • Cross-Disciplinary Collaboration: Partnered with AI specialists to merge design empathy with technical rigor.

  • Synthesis & Communication: Transformed complex accessibility regulations into clear, actionable design guidance.