Two posters/demos accepted for HCOMP 2019

The Crowd Lab had two posters/demos accepted for AAAI HCOMP 2019! Both papers involved substantial contributions from our summer REU interns, who will attend the conference at Skamania Lodge, Washington, to present their work.

Crowd Lab interns Sarwat Kazmi (left) and Efua Akonor (right) presenting their poster at HCOMP 2019.

“It’s QuizTime: A study of online verification practices on Twitter” was led by Crowd Lab Ph.D. student Sukrit Venkatagiri, with co-authors Jacob Thebault-Spieker, Sarwat Kazmi, and Efua Akonor. Sarwat and Efua were summer REU interns in the Crowd Lab from the University of Maryland and Wellesley College, respectively. The abstract for the poster is:

Misinformation poses a threat to public health, safety, and democracy. Training novices to debunk visual misinformation with image verification techniques has shown promise, yet little is known about how novices do so in the wild, and what methods prove effective. Thus, we studied 225 verification challenges posted by experts on Twitter over one year with the aim of improving novices’ skills. We collected, annotated, and analyzed these challenges and over 3,100 replies by 304 unique participants. We find that novices employ multiple tools and approaches, and techniques like collaboration and reverse image search significantly improve performance.

Crowd Lab intern David Mitchell presenting his demo and poster at HCOMP 2019.

“PairWise: Mitigating political bias in crowdsourced content moderation” was led by Crowd Lab postdoc Jacob Thebault-Spieker, with co-authors Sukrit Venkatagiri, David Mitchell, and Chris Hurt. David was a summer REU intern from the University of Illinois, and Chris was a Virginia Tech undergraduate. The abstract for the demo is:

Crowdsourced labeling of political social media content is an area of increasing interest, due to the contextual nature of political content. However, there are substantial risks of human biases causing data to be labelled incorrectly, possibly advantaging certain political groups over others. Inspired by the social computing theory of social translucence and findings from social psychology, we built PairWise, a system designed to facilitate interpersonal accountability and help mitigate biases in political content labelling.

Paper accepted for HCOMP 2019

HCOMP 2019 logo

The Crowd Lab had a paper titled “Second Opinion: Supporting last-mile person identification with crowdsourcing and face recognition” accepted to the upcoming AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2019) at the Skamania Lodge in Stevenson, WA, USA, October 28-30, 2019. The conference had a 25% acceptance rate.

Ph.D. student and lead author Vikram Mohanty will present the paper, co-authored with Dr. Luther and Crowd Lab undergraduate researchers Kareem Abdol-Hamid and Courtney Ebersohl. Here’s the paper’s abstract:

As AI-based face recognition technologies are increasingly adopted for high-stakes applications like locating suspected criminals, public concerns about the accuracy of these technologies have grown as well. These technologies often present a human expert with a shortlist of high-confidence candidate faces from which the expert must select correct match(es) while avoiding false positives, which we term the “last-mile problem.” We propose Second Opinion, a web-based software tool that employs a novel crowdsourcing workflow inspired by cognitive psychology, seed-gather-analyze, to assist experts in solving the last-mile problem. We evaluated Second Opinion with a mixed-methods lab study involving 10 experts and 300 crowd workers who collaborate to identify people in historical photos. We found that crowds can eliminate 75% of false positives from the highest-confidence candidates suggested by face recognition, and that experts were enthusiastic about using Second Opinion in their work. We also discuss broader implications for crowd–AI interaction and crowdsourced person identification.