Two papers accepted for CSCW 2019


The Crowd Lab had two papers accepted to the upcoming ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2019), held in Austin, TX, USA, November 9-13, 2019. The conference had a 31% acceptance rate.

Ph.D. student Sukrit Venkatagiri will be presenting “GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations,” co-authored with Jacob Thebault-Spieker, Rachel Kohler, John Purviance, Rifat Sabbir Mansur, and Kurt Luther, all from Virginia Tech. Here’s the paper’s abstract:

Expert investigators bring advanced skills and deep experience to analyze visual evidence, but they face limits on their time and attention. In contrast, crowds of novices can be highly scalable and parallelizable, but lack expertise. In this paper, we introduce the concept of shared representations for crowd–augmented expert work, focusing on the complex sensemaking task of image geolocation performed by professional journalists and human rights investigators. We built GroundTruth, an online system that uses three shared representations—a diagram, grid, and heatmap—to allow experts to work with crowds in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images, and revealed challenges and success strategies for expert–crowd interaction. We also discuss designing shared representations for visual search, sensemaking, and beyond.

Ph.D. student Tianyi Li will be presenting “Dropping the baton? Understanding errors and bottlenecks in a crowdsourced sensemaking pipeline,” co-authored with Chandler J. Manns, Chris North, and Kurt Luther, also from Virginia Tech. Here’s the paper’s abstract:

Crowdsourced sensemaking has shown great potential for enabling scalable analysis of complex data sets, from planning trips, to designing products, to solving crimes. Yet, most crowd sensemaking approaches still require expert intervention because of worker errors and bottlenecks that would otherwise harm the output quality. Mitigating these errors and bottlenecks would significantly reduce the burden on experts, yet little is known about the types of mistakes crowds make with sensemaking micro-tasks and how they propagate in the sensemaking loop. In this paper, we conduct a series of studies with 325 crowd workers using a crowd sensemaking pipeline to solve a fictional terrorist plot, focusing on understanding why errors and bottlenecks happen and how they propagate. We classify types of crowd errors and show how the amount and quality of input data influence worker performance. We conclude by suggesting design recommendations for integrated crowdsourcing systems and speculating how a complementary top-down path of the pipeline could refine crowd analyses.

Congratulations to Sukrit, Tianyi, and their collaborators!