In CS 197, I had the opportunity to be part of student-led Human-Computer Interaction research. On this project, I contributed to the literature review, the study design, the execution of the study on Amazon Mechanical Turk, and the writing of the paper.
From our abstract:
Artificial Intelligence (AI) is increasingly augmenting and generating online content, but research suggests that users distrust content which they believe to be AI-generated. In this paper, we study whether introducing a confidence indicator, a text rating of an algorithm's confidence in its source data alongside rationale for why the data is more or less trustworthy, affects this distrust in Airbnb host profiles believed to be computer-generated. Our results indicate that a low-confidence indicator decreases participant trust in the rental host, but high-confidence indicators have no significant impact on trust. These findings suggest that user trust of AI-generated content can be negatively, but not positively, affected by a confidence indicator.
Our research went on to be accepted and published in CHI 2020's Late-Breaking Work track! The ACM Conference on Human Factors in Computing Systems (CHI) is the premier Human-Computer Interaction conference, where researchers from around the world gather to share their work and learn from one another. We were honored to be accepted, and we couldn't have done it without the guidance and prior work of Maurice Jakesch, Megan French, Xiao Ma, Jeffrey T. Hancock, and Mor Naaman.