Welcome to the

Trustworthy AI Lab

at Ontario Tech University

The Trustworthy Artificial Intelligence Lab at Ontario Tech University is a research lab led by Canada Research Chair Peter Lewis.

We are an interdisciplinary lab in the Faculty of Business and Information Technology, exploring how to make the relationship between AI and society work better.

Embedding AI in society presents a complex mix of technical and social challenges, not least of which is this: as more decisions are delegated to AI systems that we cannot fully verify, understand, or control, when should people trust them?

Our approach is to empower people to make good trust decisions about intelligent machines of different sorts, in different contexts. How can we conceive of and build intelligent machines that people find justifiably worthy of their trust?

Our work draws on extensive experience in leading AI adoption projects in commercial and non-profit organizations across several sectors, as well as faculty research expertise in artificial intelligence, artificial life, trust, and computational self-awareness.

A major aim is to tackle the challenge of building intelligent machines that are reflective and socially sensitive. By doing this, we aim to build machines with the social intelligence required to act in more trustworthy ways, and the self-awareness to reason about and communicate their own trustworthiness.

News

Peter and Steve invited to attend the sixth Trusting Intelligent Machines workshop

Peter and Steve were both invited to attend the sixth Trusting Intelligent Machines workshop, hosted at Schloss Rauischholzhausen in Germany. The workshop brought together experts on trust, explainability, and artificial intelligence from academia and industry across Germany, the UK, Finland, and Canada. Its theme was the personal and social consequences of the widespread outsourcing of human cognition to AI systems.

Stavros and Peter present at AAMAS

This week Peter and Stavros traveled to Detroit for the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

There, Peter presented co-authored work with Zahra, entitled Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs at the 7th International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems (EXTRAAMAS).

Meet the Team

Faculty

Peter R. Lewis

Canada Research Chair in Trustworthy Artificial Intelligence
& Lab Director

Stephen Marsh

Professor in Trust Systems
& Lab Co-Director

Tosan Atele-Williams

Academic Associate

Stephen Jackson

Associate Professor

Mahadeo Sukhai

Adjunct Professor
& COO of IDEA-STEM

Postdoctoral Research Fellows

Joelma Peixoto

Postdoctoral Researcher

Stanard Pachong

Postdoctoral Researcher

Graduate Students

Andrew Putman

PhD Student

Nathan Lloyd

PhD Student

Ainaz Alavi

Master’s Student

John Mills

PhD Student

Tess Bulter-Ulrich

Doctor of Education Student

Shaijieni Kannan

Master’s Student

Research Assistants and Associates

Parisa Salmani

Research Associate

Talia Silverman

Research Assistant

Affiliates and Honorary Members

Karthik Sankaranarayanan

Faculty Affiliate
Professor, Amrita Vishwa Vidyapeetham, India

Zahra Atf

Honorary Member

Ştefan Sarkadi

Faculty Affiliate
Proleptic Lecturer, King’s College London, UK

Aishwaryaprajna

Faculty Affiliate
Lecturer, University of Exeter, UK

Alumni

Arsh Chowdhry

Master’s Student

Nicholas Lee

Developer

Stavros Anagnou

Visiting Researcher

Chukwunonso Henry Nwokoye

Postdoctoral Researcher

Zahra Ahmadi

Master’s Student

Aditya Ravi

Developer

Shahrbanoo Zomorodzadeh

Master’s Student

Narayan Kabra

Visiting Researcher

Tala Defo

Master’s Student

Recent Publications

(2025). Initial validity and reliability testing of the SGBA-5. PLOS ONE, 20(5). Public Library of Science, San Francisco, CA, USA.

(2025). Exploring Accessible Explainable AI: Promising Avenues. Journal on Technology and Persons with Disabilities, 13, 350–366. California State University, Northridge.

(2025). Why Was I Sanctioned? In Proceedings of the 1st Workshop on Advancing Artificial Intelligence through Theory of Mind (pp. 154–158). arXiv.

Opportunities

Work or Study with Us!

We often have opportunities to join us, typically for PhD or MSc research, as a postdoctoral researcher, or as a software developer.

For a list of current opportunities, please visit the Opportunities page.