Dr. Luther gave an invited presentation on Civil War Photo Sleuth at the grand opening celebrations of the American Civil War Museum in Richmond, VA, on May 4. He was one of eight Emerging Scholars invited to speak. The museum described the event and program as follows:
On Saturday, May 4, 2019, the American Civil War Museum celebrates the grand opening of its new museum building and exhibits. As part of that program, the ACWM will highlight some of the most interesting work of the next generation of writers, communicators, and thinkers of Civil War era history/public history with a series of lightning talks by emerging professionals in their fields. Over the winter, ACWM staff reviewed many applications and selected eight individuals in the early phases of their careers who represented a blend of compelling scholarship and communication skills.
You can read more about the grand opening of the museum here.
Dr. Luther joined Prof. Aaron Brantly (VT Political Science), Prof. Chad Levinson (VT Government and International Affairs), and moderator Ms. Christine Callsen (VT Hume Center) on a panel titled, “Social Computing and Its Impact on Intelligence,” at the Emerging Trends: New Tools, Threats and Thinking symposium. The event was sponsored by the National Capital Region Intelligence Studies Consortium (ISC) and held at Marymount University on April 25.
Investigators in domains such as journalism, intelligence analysis, and human rights advocacy frequently analyze photographs of questionable or unknown provenance. These photos can provide invaluable leads and evidence, but even experts must invest significant time in each analysis, with no guarantee of success. Crowdsourcing, with its affordances for scalability and parallelization, has great potential to augment expert performance, but little is known about how crowds might fit into photo analysts’ complex workflows. In this talk, I present my group’s research with two communities: open-source investigators who geolocate and verify social media photos, and antiquarians who identify unknown persons in 19th-century portrait photography. Informed by qualitative studies of current practice, we developed a novel approach, expert-led crowdsourcing, that combines the complementary strengths of experts and crowds to solve photo mysteries. We built two software tools based on this approach, GroundTruth and Photo Sleuth, and evaluated them with real experts. I conclude by discussing some broader takeaways for crowdsourced investigations, sensemaking, and image analysis.
For the past seven years, Virginia Tech’s Institute for Creativity, Arts, and Technology (ICAT) has been pushing the envelope of creative exploration. Through partnerships with all the colleges at Virginia Tech, ICAT has assembled teams of scientists, engineers, artists, and designers to tackle some of the most complex innovation challenges that drive economic development. Join us to hear about the Creativity and Innovation District at Virginia Tech, ICAT’s role within it and the critical importance of human-centered design.
Dr. Luther gave an invited presentation, titled “Civil War Photo Sleuthing: Past, Present, and Future” at Civil War Photo Talks in Arlington, VA, co-sponsored by Military Images Magazine and Civil War Faces. Other invited speakers included Ann Shumard, National Portrait Gallery; Micah Messenheimer, Library of Congress; Bryan Cheeseboro, National Archives; and Rick Brown, Military Images. The abstract for Dr. Luther’s talk was as follows:
People have struggled to identify unknown soldiers and sailors in Civil War photos since even before the war ended. In this talk, I trace the 150-year history of photo sleuthing, showing how the passage of time has magnified some challenges, but also unlocked exciting new possibilities. I show how technologies like social media, face recognition, and digital archives allow us to solve photo mysteries that have eluded families and researchers for a century and a half.
Investigators in domains such as journalism, military intelligence, and human rights advocacy frequently analyze photographs of questionable or unknown provenance. These photos can provide invaluable leads and evidence, but even experts must invest significant time in each analysis, with no guarantee of success. Crowdsourcing, with its affordances for scalability and parallelization, has great potential to augment expert performance, but little is known about how crowds might fit into photo analysts’ complex workflows. In this talk, I present my group’s research with two communities: open-source investigators who geolocate and verify social media photos, and antiquarians who identify unknown persons in 19th-century portrait photography. Informed by qualitative studies of current practice, we developed a novel approach, expert-led crowdsourcing, that combines the complementary strengths of experts and crowds to solve photo mysteries. We built two software tools based on this approach, GroundTruth and Photo Sleuth, and evaluated them with real experts. I conclude by discussing some broader takeaways for crowdsourced investigations, sensemaking, and image analysis.
Dr. Luther was selected as one of eight Emerging Scholars by the American Civil War Museum in Richmond, VA. He will give an invited presentation on Civil War Photo Sleuth to audiences at the grand opening of the newly expanded museum on May 4. The goal of the program is to “highlight some of the most interesting work of the next generation of writers, communicators, and thinkers of Civil War era history/public history.”
Nai-Ching Wang, a Ph.D. student advised by Dr. Luther, successfully defended his dissertation today. His dissertation is titled, “Supporting Historical Research and Education with Crowdsourced Analysis of Primary Sources”, and his committee members were Dr. Luther (chair), Ed Fox, Gang Wang, and Paul Quigley, with Matt Lease (UT Austin School of Information) as the external member. Here is the abstract for his dissertation:
Historians, like many types of scholars, are often both researchers and educators, and both roles involve significant interaction with primary sources. Primary sources are not only direct evidence for historical arguments but also important materials for teaching historical thinking skills to students in classrooms and for engaging the broader public. However, finding high-quality primary sources relevant to a historian’s specialized topics of interest remains a significant challenge. Automated approaches to text analysis struggle to provide relevant results for these “long tail” searches with long semantic distances from the source material. Consequently, historians are often frustrated at spending so much time manually assessing the relevance of the contents of these archives rather than on writing and analysis. To overcome these challenges, my dissertation explores the use of crowdsourcing to support historians in the analysis of primary sources. In four studies, I first proposed a class-sourcing model in which historians outsource historical analysis to students as a teaching method, and students learn historical thinking and gain authentic research experience while doing these analysis tasks. Incite, a realization of this model, was deployed in 15 classrooms with positive feedback. Second, I expanded the class-sourcing model to a broader audience, novice (paid) crowds, and developed the Read-agree-predict (RAP) technique to accurately evaluate relevance between primary sources and research topics. Third, I presented a set of design principles for crowdsourcing complex historical documents via the American Soldier project on Zooniverse. Finally, I developed CrowdSCIM to help crowds learn historical thinking and evaluated the tradeoffs between quality, learning, and efficiency.
The outcomes of the studies provide systems, techniques and design guidelines to 1) support historians in their research and teaching practices, 2) help crowd workers learn historical thinking and 3) suggest implications for the design of future crowdsourcing systems.
Our research investigating the use of crowd workers to analyze satellite imagery of tree canopy coverage was accepted as a poster for the American Geophysical Union (AGU 2018) fall meeting in Washington, DC. The lead author is Forestry Ph.D. student Jill Derwin, with co-authors Valerie Thomas, Randolph Wynne, S. Seth Peery, John Coulston, Dr. Luther, Greg Liknes, and Stacie Bender. The abstract for the poster, titled “Validating the 2011 and 2016 NLCD Tree Canopy Cover Products using Crowdsourced Interpretations“, is as follows:
The 2011 and 2016 National Land Cover Database (NLCD) Tree Canopy Cover (TCC) products utilize training data collected by experienced photo interpreters. Observations of tree canopy cover were collected using 1-meter NAIP imagery overlaid on a dot grid. At each point in the dot grid, experts interpreted whether the point fell on canopy or not. The proportion of positive observations yields percent canopy cover. These data are used in conjunction with a set of 30-m resolution predictors (primarily Landsat imagery) to train a random forest model predicting TCC nationwide. We will test the use of crowdsourced observations of canopy cover to validate national products. Crowd workers will apply the same training-data photo interpretation methodology at plot locations across the United States, subsampled from the public Forest Inventory and Analysis database. Each plot will have repeated samples, with multiple crowd observers interpreting each location. Using a multi-scale bootstrap-aggregation, or ‘bagging,’ approach at the plot and dot levels, we randomly select sets of interpretations from randomly chosen interpreters to train consecutive models. This bagging methodology is applied at both the plot level and the level of individual dot observations to test the within-plot variance of crowdsourced interpretations. We will compare the NLCD TCC models from 2011 and 2016 to multiple bagged samples and aggregate quality metrics such as the coefficient of determination and root mean square error to evaluate model quality. We will also compare these bagged samples to independent expert interpretations in order to gain insight into the quality of the crowd interpretations themselves. This work provides insight into the utility of crowdsourced observations as validation of national tree canopy cover products.
In addition to comparing aggregated crowd interpretations to expert measurements, identifying conditions that result in disagreement in interpreters’ observations may help to inform the methodology and to improve interpreter-training for the crowdsourcing task.
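To make the dot-grid and bagging procedure from the abstract concrete, here is a minimal sketch in Python using synthetic data. The function names and parameters are illustrative assumptions, not code from the project: each interpreter's dot-grid observations become a percent-cover estimate, and plot-level bagging resamples interpreters with replacement and averages their estimates.

```python
import random
from statistics import mean

def percent_canopy(dot_observations):
    """Percent cover = proportion of dot-grid points the interpreter
    marked as falling on canopy (1) rather than non-canopy (0)."""
    return 100.0 * sum(dot_observations) / len(dot_observations)

def bagged_plot_estimate(interpretations_by_worker, n_rounds=50, rng=None):
    """Plot-level bagging: in each round, resample interpreters with
    replacement and average their percent-cover estimates; return the
    mean across rounds."""
    rng = rng or random.Random(0)
    workers = list(interpretations_by_worker)
    round_means = []
    for _ in range(n_rounds):
        sample = [rng.choice(workers) for _ in workers]
        round_means.append(
            mean(percent_canopy(interpretations_by_worker[w]) for w in sample)
        )
    return mean(round_means)

def rmse(predicted, reference):
    """Root mean square error between paired estimates."""
    return mean((p - r) ** 2 for p, r in zip(predicted, reference)) ** 0.5

# Example: three crowd interpreters labeling the same 4-dot plot.
plot_obs = {"w1": [1, 1, 0, 0], "w2": [1, 1, 1, 0], "w3": [0, 1, 0, 0]}
estimate = bagged_plot_estimate(plot_obs)
```

The same resampling idea extends to the dot level by bootstrapping individual dot observations within a plot before computing percent cover; comparing bagged estimates against expert values with metrics like `rmse` mirrors the validation the abstract describes.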
Investigators have enlisted the help of the public since the days of the first “wanted” posters, but in an era where extensive personal information, as well as powerful search tools, are widely available online, the public is increasingly taking matters into its own hands. Some of these crowdsourced investigations have solved crimes and located missing persons, while others have leveled false accusations or devolved into witch hunts. In this talk, Luther describes his lab’s recent efforts to develop software platforms that support effective, ethical crowdsourced investigations in domains such as history, journalism, and national security.