Trustworthy AI Lab
Workshop Paper
Rule-Based Moral Principles for Explaining Uncertainty in Natural Language Generation
As large language models (LLMs) are increasingly used in high-stakes applications, the challenge of explaining uncertainty in natural …
Zahra Atf, Peter R. Lewis
Multi-Perspective Explanations for Multi-Agent Systems
In this paper, we present early work exploring two essential socio-cognitive capacities for intelligent social behavior: …
Nathan Lloyd, Peter R. Lewis
Self-Evaluation can Help Agents Meet Social Expectations
As artificial intelligence (AI) and multi-agent systems (MAS) become increasingly advanced and integrated into real-world applications, …
Parisa Salmani, Peter R. Lewis
The Institution Bootstrapping Problem and Some Counter-Intuitive Solutions
Institutions are rule systems that play a critical role in enabling communities to manage common-pool resources (e.g., grazing lands, …
Stavros Anagnou, Christoph Salge, Peter R. Lewis
Think Before You Act: Popperian Expectations for Adaptive Agents
Autonomous agents that operate and learn in complex environments must continually balance multiple objectives, including efficiency and …
John Mills, Peter R. Lewis
Thinking Faster and Slower: An Agent’s Cognitive Repertoire
This work examines the limitations of artificial intelligence inspired by dual-process models—those that sharply divide cognition into …
Marieke Van Otterdijk, Nathan Lloyd, Peter R. Lewis
Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
As AI systems are increasingly deployed to support decision-making in critical domains, explainability has become a means to enhance …
Joelma Peixoto, Akriti Pandey, Ahsan Zaman, Peter R. Lewis
Workshop Proceedings
Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs
This paper investigates how prompt engineering techniques impact both accuracy and confidence elicitation in Large Language Models …
Nariman Naderi, Zahra Atf, Peter R. Lewis, Aref Mahjoub Far, Seyed Amir Ahmad Safavi-Naini, Ali Soroush
Why Was I Sanctioned?
This paper investigates how perspective-taking shapes the formation and validation of normative expectations. It explores how adopting …
Nathan Lloyd, Peter R. Lewis
Adding Reflective Governance to LLMs
With the rapid advancement of artificial intelligence (AI) systems and large language models (LLMs), these technologies are …
Parisa Salmani, Peter R. Lewis