The Crowd Lab won the Best Poster/Demo Award at HCOMP 2019! Congratulations to Crowd Lab postdoc and team leader Jacob Thebault-Spieker, Ph.D. student Sukrit Venkatagiri, and undergraduate researchers David Mitchell and Chris Hurt. Their poster/demo was titled “PairWise: Mitigating political bias in crowdsourced content moderation.”
The Crowd Lab also won the Best Poster/Demo Award at HCOMP 2018.
Dr. Luther and Dr. Sylvester Johnson, director of the Center for Humanities at Virginia Tech, co-presented on “The future of AI and what it means for humans” to local journalists at a media event called “On the Record with Virginia Tech” on October 17, 2019. The press release for the event described it as follows:
Technology is changing the way we live and work. For centuries, being human has been defined by emphasizing the ability to think and reason. But now technological innovation using artificial intelligence (AI) can mimic human-like behavior to make complicated decisions and solve world problems.
Virginia Tech’s Innovation Campus in Alexandria will focus on the intersection of technology and the human experience, leading the way not just in technical domains but also in examining policy and ethical implications to ensure that technology doesn’t drive inequity.
What will it mean to be human as intelligent machines continue to advance? How is AI improving our lives? What are the dangers that more powerful AI might bring?
In this talk, Virginia Tech humanities scholar Sylvester Johnson and computer scientist Kurt Luther will share recent discoveries and explore how the latest technological advances in AI are changing our lives.
A video clip of the event was broadcast on a local TV news channel, WDVM.
Dr. Luther gave an invited presentation to an audience of engineers and journalists at The Washington Post on October 23, 2019. The title of his talk was “Photo sleuthing: Helping investigators solve photo mysteries using crowdsourcing and AI.” The abstract for the talk was:
Journalists, intelligence analysts, and human rights investigators frequently analyze photographs of questionable or unknown provenance, trying to identify the people and places depicted. These photos can provide invaluable leads and evidence, but even experts must invest significant time in each analysis, with no guarantee of success. Collective human intelligence (via crowdsourcing) and artificial intelligence (via computer vision) offer great potential to support expert photo analysis. However, we must first understand how to leverage the complementary strengths of these techniques to support investigators’ real-world needs and work practices.
In this talk, I present my lab’s research with two “photo sleuthing” communities: (1) open-source intelligence (OSINT) analysts who geolocate and verify photos and videos shared on social media, and (2) researchers and collectors who identify unknown soldiers in historical portraits from the 19th century. Informed by qualitative studies of current practice, we developed a novel approach that combines the complementary strengths of expert investigators, novice crowdsourcing, and computer vision to solve photo mysteries. We built two software tools based on this approach, GroundTruth and Photo Sleuth, and evaluated them with real expert investigators.
Historians spend significant time looking for relevant, high-quality primary sources in digitized archives and through web searches. One reason this task is time-consuming is that historians’ research interests are often highly abstract and specialized. These topics are unlikely to be manually indexed and are difficult to identify with automated text analysis techniques. In this article, we investigate the potential of a new crowdsourcing model in which the historian delegates to a novice crowd the task of labeling the relevance of primary sources with respect to her unique research interests. The model employs a novel crowd workflow, Read-Agree-Predict (RAP), that allows novice crowd workers to label relevance as accurately as expert historians. As a useful byproduct, RAP also reveals and prioritizes crowd confusions as targeted learning opportunities. We demonstrate the value of our model with two experiments with paid crowd workers (n=170), with the future goal of extending our work to classroom students and public history interventions. We also discuss broader implications for historical research and education.
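The abstract describes the RAP workflow only at a high level. As a rough sketch of how crowd relevance labels might be aggregated in a workflow of this shape, the Python example below assumes each worker reads a source, says whether they agree it is relevant to the historian’s interest, and predicts the majority answer. The data model, function names, and aggregation rule are hypothetical illustrations, not the published RAP design.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Response:
    """One crowd worker's answers for a single primary source (hypothetical schema)."""
    worker_id: str
    agrees: bool               # does the worker think the source matches the historian's interest?
    predicted_majority: bool   # what the worker predicts most other workers will answer


def label_relevance(responses: list[Response]) -> tuple[bool, float]:
    """Aggregate crowd answers into a single relevance label.

    Illustrative aggregation only: the label follows the predicted-majority
    answers, and the returned "confusion" score is the fraction of workers
    whose own judgment contradicts that label. Sources with high confusion
    could be surfaced as targeted learning opportunities.
    """
    if not responses:
        return False, 0.0
    predict_votes = Counter(r.predicted_majority for r in responses)
    label = predict_votes[True] >= predict_votes[False]
    confusion = sum(1 for r in responses if r.agrees != label) / len(responses)
    return label, confusion


if __name__ == "__main__":
    demo = [
        Response("w1", agrees=True, predicted_majority=True),
        Response("w2", agrees=False, predicted_majority=True),
        Response("w3", agrees=True, predicted_majority=True),
    ]
    print(label_relevance(demo))  # -> (True, 0.333...)
```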