The Trustworthy Artificial Intelligence Lab at Ontario Tech University is the research lab led by Canada Research Chair Peter Lewis.
We are an interdisciplinary lab in the Faculty of Business and Information Technology, exploring how to make the relationship between AI and society work better.
Embedding AI in society presents a complex mix of technical and social challenges, not least of which is: as more decisions are delegated to AI systems that we cannot fully verify, understand, or control, when should people trust them?
Our approach is to empower people to make good trust decisions about intelligent machines of different sorts, in different contexts. How can we conceive of and build intelligent machines that people find justifiably worthy of their trust?
Our work draws on extensive experience in leading AI adoption projects in commercial and non-profit organizations across several sectors, as well as faculty research expertise in artificial intelligence, artificial life, trust, and computational self-awareness.
A major aim is to tackle the challenge of building intelligent machines that are reflective and socially sensitive. By doing this, we aim to build machines with the social intelligence required to act in more trustworthy ways, and the self-awareness to reason about and communicate their own trustworthiness.
Huge congratulations to our MSc Computer Science graduates, Arsh and Parisa, and to our BSc (Hons) in Computer Science graduate, Nick. We’re all so proud of your achievements, and we’re very grateful to share in your celebrations!
Peter and Steve were both invited to attend the sixth Trusting Intelligent Machines workshop, hosted at Schloss Rauischholzhausen in Germany. The workshop brought together experts on trust, explainability, and artificial intelligence from academia and industry across Germany, the UK, Finland, and Canada. The theme of the workshop was exploring the personal and social consequences of the widespread outsourcing of human cognition to AI systems.
This week Peter and Stavros traveled to Detroit for the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
There, Peter presented work co-authored with Zahra, entitled “Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs,” at the 7th International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems (EXTRAAMAS).
Work or Study with Us!
We often have opportunities to join the lab, typically for PhD or MSc research, as a postdoctoral researcher, or as a software developer.
For a list of current opportunities, please visit the Opportunities page.