Crowdsourcing can be defined as “The practice of obtaining information or input into a task or project by enlisting the services of a large number of people, either paid or unpaid, typically via the Internet” (Oxford University Press, 2020). Over the past 15 years it has become an important part of volunteering in public collections. Crowdsourcing is seen both as a means to encourage public participation in science and as a way to collect valuable scientific data. The projects hosted on Zooniverse have resulted in nearly 300 published papers (“Zooniverse Publications,” 2020), drawn from over 500 million classifications by over 2 million participants (“Zooniverse Homepage,” 2020). The mainly biodiversity-oriented platforms Digivol and DoeDat have completed over 2 million and 200,000 tasks with over 7,000 and 800 volunteers, respectively (“Digivol Homepage,” 2020; “DoeDat Homepage,” 2020).
Crowdsourcing projects have been used in a wide range of scientific disciplines, including astronomy (Chen et al., 2016; Dieleman, Willett, & Dambre, 2015), particle physics (Jennett et al., 2016), ecology (Fink et al., 2014; Pocock, Tweddle, Savage, Robinson, & Roy, 2017) and neurology (Greene, Kim, & Seung, 2016; Zeng & Sanes, 2017). Crowdsourcing has also been used in the digital humanities (Terras, 2015) and in geography (Goodchild, 2007; Haklay, 2013; Kerski, 2015). There are good examples where crowdsourcing has enabled science that would have been difficult to achieve by any other means (Ellwood et al., 2015; Hill et al., 2012; McKinley et al., 2017).
The vast majority of crowdsourcing projects have used static two-dimensional images as their basis. While the use of these 2D images has been hugely successful, there are use cases for crowdsourcing with more dynamic content, such as 3D images and video. Here we evaluate the feasibility of crowdsourcing with such dynamic content, using Meise Botanic Garden’s crowdsourcing project DoeDat as a test case.