BMO Responsible AI Research Awards: Past Recipients

2024

Evaluating and Enhancing Precision Medicine in Quebec: A Capacity Building Roadmap through AI

Goktug Bender, majoring in Psychology

Precision medicine is an important topic in the future development of healthcare. As the focus of medicine shifts from reaction towards prevention, predictive technologies and AI will gain importance. Addressing the complex task of personalized medicine necessitates the seamless integration and analysis of extensive datasets. The research team aims to assess the feasibility of implementing precision medicine in Quebec using AI technologies and to create an infrastructure roadmap for doing so. They intend to illuminate best practices for integrating AI-driven precision medicine initiatives within the broader healthcare framework, while considering their socio-technical feasibility in the Quebec healthcare context, thereby contributing to the advancement of interdisciplinary research on AI and society.

Enhancing Assistive Technology for Complex Communication Disorders with GenAI

Leora Klee, majoring in Computer Science

Language is an integral part of self-expression. For people with language disorders (e.g., aphasia, autism), Augmentative and Alternative Communication (AAC) tools can enhance communication and self-expression, enabling them to better convey and exchange thoughts and emotions. Traditional symbol-based AAC devices, however, often fall short of expectations. Their complicated structures impose meta-linguistic and memory demands on the user and the supports offered are generally unable to flexibly adapt to emergent communication needs. Recent advances in generative AI offer new potential for designing assistive tools that facilitate communication. The goal of this research project will be to investigate the capabilities of generative AI for a range of possible communication supports and envisioning how these capabilities might be integrated into AAC interfaces.

Navigating Advice Landscapes: A Comparative Analysis of Human-Human and Human-Chatbot Interactions Utilizing Large Language Models

Sebastian Reinhardt, majoring in Software Engineering

The first aim of this research project is to compare the quality of advice given through human-human interactions with the advice obtained through human-chatbot interactions using Large Language Models (LLMs), such as ChatGPT (OpenAI). This comparison will clarify which advice people prefer and trust more. The second aim of this project is to examine the implicit biases of the LLMs, in terms of the assumptions they make based on the information given in the prompt.

Protecting the Amazon Forest from Illegal Activity through Image Recognition and Operations Research: A Partnership with JungleKeepers and Peruvian Indigenous Communities

Nikki Tye, majoring in Honors International Development

The Amazon rainforest, a crucial global resource, faces unprecedented threats from human activities such as logging, mining, and deforestation for agriculture, exacerbating the climate crisis. Despite the pressing need for coordinated conservation efforts, environmental non-governmental organizations (NGOs) frequently bear the weight of these endeavors. Environmental NGOs collaborate with local Indigenous communities to protect the Amazon rainforest from illegal activity, mainly through patrols and data collection. However, these partnerships face resource constraints that limit their ability to collect data on illegal activity in the forest. This project addresses this challenge by exploring the untapped potential of Artificial Intelligence (AI) to help Indigenous communities and environmental NGOs improve their rainforest patrolling activities.

What is “thoughtfulness” in AI Ethics Audits?

Jocelyn Wong, majoring in Cognitive Science

The increase in regulatory activity in AI has led to a rise in auditing frameworks intended to hold designers of machine learning (ML) systems accountable, including a growing number of AI ethics consultancies offering audit services. Unlike traditional engineering domains, the AI industry has yet to develop a standard for conducting meaningful and effective audits of ML systems. This can result in reports that provide a false sense of security for those requesting audits, independent of the quality and rigour of the audit conducted for a system or company. This project aims to hold AI ethics auditors accountable and to accelerate the standardisation of AI ethics audits by investigating what constitutes a “thoughtful” report.
