Photo Sleuth: Identifying Historical Portraits with Crowdsourcing and Computer Vision
This NSF-funded project, a collaboration between Computer Science, History, and Military Images magazine, seeks to recover the lost identities of portraits of the American Civil War generation. The popularity of photography exploded in the United States during the 1860s as Americans went off to war. Today, 150 years later, thousands of these photos survive, but relatively few are identified with the individuals they depict. We are developing new techniques that combine crowdsourcing and computer vision, including face recognition technology, to piece together visual clues from photographs with research from historical sources to solve these mysteries. Visit CivilWarPhotoSleuth.com for details.
V. Mohanty, D. Thames, S. Mehta, and K. Luther. Photo Sleuth: Combining Human Expertise and Face Recognition to Identify Historical Portraits. ACM Conference on Intelligent User Interfaces (IUI 2019), Marina del Rey, CA, USA, 2019. (Best Paper Award) (25% acceptance rate)
V. Mohanty, D. Thames, and K. Luther. Photo Sleuth: Combining Collective Intelligence and Computer Vision to Identify Historical Portraits. ACM Conference on Collective Intelligence (CI 2018), Zurich, Switzerland, 2018. (32% acceptance rate for oral presentations)
V. Mohanty, D. Thames, and K. Luther. Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers. AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018), Zurich, Switzerland, 2018. (Best Poster/Demo Award)
GroundTruth: Supporting Image Geolocation with Crowdsourcing and Diagramming
This NSF-funded project investigates how crowdsourcing can be used to identify or verify the specific geographic location where photos were taken. For example, a journalist may need to verify photos posted on social media for a news story, an intelligence analyst may need to track down the location of a terrorist training camp, or a museum archivist may need to identify historical photos in their collections. We are studying how novices and experts perform image geolocation tasks in order to understand the mental models they create, the image clues they use, and the challenges they face. We are also developing new software tools that leverage crowdsourcing and expert diagrams to support faster, more accurate image geolocation in a variety of domains.
R. Kohler, J. Purviance, and K. Luther. Supporting Image Geolocation with Diagramming and Crowdsourcing. AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2017), Québec City, Canada, 2017. (Notable Paper Award) (29% acceptance rate) (Blog post)
R. Kohler, J. Purviance, and K. Luther. GroundTruth: Bringing Together Experts and Crowds for Image Geolocation. HCOMP 2017 GroupSight Workshop on Human Computation for Image and Video Analysis, Québec City, Canada, 2017. (Video)
R. Kohler and K. Luther. Crowdsourced Image Geolocation as Collective Intelligence. Collective Intelligence 2017, New York, NY, USA, 2017.
S. Mehta, C. North, and K. Luther. An Exploratory Study of Human Performance in Image Geolocation Tasks. HCOMP 2016 GroupSight Workshop on Human Computation for Image and Video Analysis, Austin, TX, USA, 2016.
PairWise: Supporting Trustworthy Gig Work with Crowdsourced Observation
This NSF-funded project investigates how crowdsourced observation can be used to support trustworthy work in online work settings. For example, content moderation systems need crowd workers to make unbiased moderation decisions, and citizen science systems need to ensure that citizen scientists follow a trustworthy scientific protocol. We are studying how the presence of a real-time observer changes online work behavior, the feasibility of this approach, and its potential to support less biased and more trustworthy online work. We are also developing new software tools that enable this kind of interaction and optimize the process (e.g., by opportunistically focusing a real-time observer on certain types of work).
CrowdIA: Supporting Intelligence Analysis with Crowdsourcing and Context Slices
This NSF-funded project explores how crowdsourcing can be used to help an intelligence analyst find connections within a large body of text-based evidence. For example, an analyst may have access to dozens of evidence documents, and needs to identify a hidden terrorist plot that links the evidence together. We have developed the concept of “context slices,” in which we intelligently divide up large amounts of text so that transient, novice crowd workers can contribute to solving the bigger mystery. From these ideas, we have developed CrowdIA, a web-based system that supports solving mysteries in text documents using crowdsourced sensemaking.
T. Li, K. Luther, and C. North. CrowdIA: Solving Mysteries with Crowdsourced Sensemaking. Proceedings of the ACM on Human-Computer Interaction, 2 (CSCW), 2018. (26% acceptance rate)
T. Li, A. Shah, K. Luther, and C. North. Crowdsourcing Intelligence Analysis with Context Slices. CHI 2018 Workshop on Sensemaking in a Senseless World, Montréal, Canada, 2018. (21% acceptance rate for full presentations)
CrowdLayout: Improving Biological Graph Visualizations with Crowdsourcing
This NIH-funded project investigates how crowdsourced design can be used to improve visualizations of biological graph data. Many types of life sciences research generate graph data, but meaningful visualizations are hard to achieve with automatic graph layout algorithms, and time-consuming for experts to create manually. We have developed CrowdLayout (formerly GraphCrowd), a system that allows novice crowd workers to design and review biological network layouts as effectively as experts. CrowdLayout is an extension of GraphSpace, an online hub developed at Virginia Tech where researchers can easily share and visualize biological network data.
A. Bharadwaj, D. Gwizdala, Y. Kim, K. Luther, and T.M. Murali. Flud: a hybrid crowd-algorithm approach for visualizing biological networks. CHI 2019 Workshop on Where is the Human? Bridging the Gap Between AI and HCI, Glasgow, UK, 2019.
D.P. Singh, L. Lisle, T.M. Murali, and K. Luther. CrowdLayout: Crowdsourced Design and Evaluation of Biological Network Visualizations. ACM Conference on Human Factors in Computing Systems (CHI 2018), Montréal, Canada, 2018. (26% acceptance rate)
A. Bharadwaj, D.P. Singh, A. Ritz, A.N. Tegge, C.L. Poirel, P. Kraikivski, N. Adames, K. Luther, S.D. Kale, J. Peccoud, J.J. Tyson, and T.M. Murali. GraphSpace: Stimulating Interdisciplinary Collaborations in Network Biology. Bioinformatics, 33(19), 2017. (7.307 impact factor)
Incite: Supporting Historical Research and Education with Crowdsourcing
This interdisciplinary collaboration between Computer Science, History, and Education investigates how a crowdsourcing system can support historical scholarship while helping crowd workers learn about history. We created Incite, a system that combines crowdsourcing with natural language processing techniques to help expert scholars find relevant primary sources in a digital archive. Incite was developed as a plug-in for the open-source Omeka content management system used by many museums and galleries. We have deployed Incite in high school and college classrooms across the US as part of two digital humanities projects: Mapping the Fourth of July in the Civil War Era (funded by the National Archives) and The American Soldier in World War II (funded by the NEH).
N.-C. Wang, D. Hicks, and K. Luther. Exploring Trade-Offs Between Learning and Productivity in Crowdsourced History. Proceedings of the ACM on Human-Computer Interaction, 2 (CSCW), 2018. (26% acceptance rate)
N.-C. Wang. Crowdnection: Connecting High-level Concepts with Historical Documents via Crowdsourcing. CHI 2016 Student Research Competition, San Jose, CA, USA, 2016.