In this project, we conducted an in-depth manual analysis, a large-scale linguistic analysis, and a user study to empirically study the characteristics of ChatGPT’s answers to programming questions.
In this project, we developed an interactive tool to identify biases and stereotypes in pre-trained word embeddings. Beyond surfacing biases, the tool also lets users trace a bias back to its source in the training data and debug that source interactively.
In this project, we built an explainable interface for a deep learning activity recognition algorithm. The interface lets end-users query the video and helps them understand how and why the system assigned a particular activity to a frame.
In this project, we investigated how the presence of explanations, and how meaningful those explanations are, influence users’ trust and perceptions of accuracy. We studied these questions through a controlled experiment using a localized explanation approach in the context of image classification.
In this project, we experimented with various visual aids to assist users in navigating a non-human-scale data visualization (the microvascular network of a mouse brain in VR). A controlled user study revealed how users experience and interpret these visual aids.
This work extends research on graphical perception to the use of motion as a data encoding for quantitative values.