On the 21st of October, Arsh Chowdhry presented on behalf of our recent MSc graduate Parisa Salmani at the European Conference on Artificial Intelligence (ECAI 2024).
Parisa’s work, entitled Transfer Learning Can Introduce Bias, was accepted as a full paper in Frontiers in Artificial Intelligence and Applications and presented via a short talk and poster; see photos. Parisa’s research demonstrates, for the first time, that transfer learning can introduce new demographic bias, which may discriminate against particular sub-populations, compared with a model trained from scratch to perform the same task. A huge contribution!
On Friday the 20th of September, Zahra Atf presented research entitled Evaluating the Trustworthiness of User-Generated Content on Social Media at the IEEE International Symposium on Technology and Society (ISTAS 2024). The theme of this year’s ISTAS was the Social Implications of Artificial Intelligence (AI). Zahra’s work, co-authored by Peter and Nathan, examines the psychological and content-based factors that affect the trustworthiness of UGC on a food brand’s Instagram page, based on data analysis spanning seven years.
From the 16th to the 20th of September, members of the Trustworthy AI Lab attended the 5th IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2024).
On Thursday, Dr Lewis was joined on the Expert Panel by Dr Amanda Muller, Chief of Responsible Technology at Northrop Grumman, and Dr Jeremy Pitt, Professor of Intelligent and Self-Organising Systems at Imperial College London, to discuss ‘Building trust in self* systems’. Trust had been a theme running through the conference, and this Expert Panel explored the theory and practice of trust and trustworthiness, both between humans and machines and between machines. A key theme was ‘trust calibration’: how to empower people to make well-informed judgements about the trustworthiness, or otherwise, of complex AI and other socio-technical systems.
Congratulations to Arsh Chowdhry on successfully defending his master’s thesis today!
In his research, Arsh developed a novel method for training classification models using multi-objective optimization, such that the resulting models can be simultaneously accurate and fair. He demonstrated that, in many cases, taking account of fairness explicitly during training through multi-objective optimization means that high accuracy can be achieved at the same time as fairness, something that often does not occur with traditional training methods. In other cases, the approach reveals a trade-off between accuracy and fairness, so that decision-makers can choose how to balance the competing priorities of prediction accuracy and different types of fairness when selecting which model to deploy, in a way that is specific to their context.
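Arsh’s own method is not described in detail here, but the kind of accuracy–fairness trade-off it exposes can be illustrated with a much simpler stand-in: scalarizing two objectives (log-loss and a demographic-parity penalty) into one weighted loss and sweeping the weight. Everything below — the synthetic data, the penalty, and the training loop — is an illustrative assumption for this sketch, not a reconstruction of the thesis method.

```python
# Illustrative sketch only (NOT Arsh's method): a weighted-sum scalarization
# of two objectives for a logistic-regression classifier on synthetic data:
#   loss = (1 - lam) * log-loss  +  lam * (demographic-parity gap)^2
# Sweeping lam traces out points on an accuracy-fairness trade-off.
import math
import random

def train_scalarized(X, y, s, lam, epochs=300, lr=0.5):
    """Gradient descent on the scalarized loss. s is a binary sensitive
    attribute; the gap is the difference in mean predicted score between
    the two groups."""
    d, n = len(X[0]), len(X)
    n1, n0 = s.count(1), s.count(0)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        p = [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
             for xi in X]
        gap = (sum(pi for pi, si in zip(p, s) if si == 1) / n1
               - sum(pi for pi, si in zip(p, s) if si == 0) / n0)
        gw, gb = [0.0] * d, 0.0
        for i, xi in enumerate(X):
            # d(log-loss)/dz = p - y; d(gap^2)/dz uses the sigmoid derivative
            dz = (1 - lam) * (p[i] - y[i]) / n
            grp = 1.0 / n1 if s[i] == 1 else -1.0 / n0
            dz += lam * 2 * gap * grp * p[i] * (1 - p[i])
            for j in range(d):
                gw[j] += dz * xi[j]
            gb += dz
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def evaluate(w, b, X, y, s):
    """Accuracy and demographic-parity gap of the hard predictions."""
    pred = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0 for xi in X]
    acc = sum(pi == yi for pi, yi in zip(pred, y)) / len(y)
    r1 = sum(pi for pi, si in zip(pred, s) if si == 1) / s.count(1)
    r0 = sum(pi for pi, si in zip(pred, s) if si == 0) / s.count(0)
    return acc, abs(r1 - r0)

# Synthetic data where the label is correlated with the sensitive attribute,
# so an accuracy-only model is demographically biased by construction.
random.seed(0)
X, y, s = [], [], []
for _ in range(400):
    si = random.randint(0, 1)
    yi = 1 if random.random() < (0.7 if si else 0.3) else 0
    X.append([yi + random.gauss(0, 0.5), si + random.gauss(0, 0.5)])
    y.append(yi)
    s.append(si)

results = {}
for lam in (0.0, 0.9):
    w, b = train_scalarized(X, y, s, lam)
    results[lam] = evaluate(w, b, X, y, s)
```

Sweeping `lam` over a finer grid (rather than just two endpoints) yields a set of models from which, as in the thesis, a decision-maker could pick the accuracy–fairness balance appropriate to their context.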