Web survey platforms, and the Internet more generally, have democratized science by making diverse research participants easily available to researchers. I am interested in identifying best practices in online data collection and in helping other researchers make the best possible use of online tools. I am especially interested in online labor markets (such as Amazon Mechanical Turk) as a source of research participants and in methods for maximizing data quality.
Review papers and chapters on using Mechanical Turk in the behavioral sciences. My colleagues and I have written several overviews of Mechanical Turk as a source of research participants:
Stewart, N., J. Chandler, and G. Paolacci. “Crowdsourcing samples in cognitive science.” Trends in Cognitive Sciences, vol. 21, 2017, pp. 736-748.
Chandler, J., and D. Shapiro. “Conducting clinical research using crowdsourced convenience samples.” Annual Review of Clinical Psychology, vol. 12, 2016, pp. 53-81.
Paolacci, G., and J. Chandler. “Inside the Turk: Understanding Mechanical Turk as a participant pool.” Current Directions in Psychological Science, vol. 23, 2014, pp. 184-188.
Paolacci, G., J. Chandler, and P. Ipeirotis. “Running experiments on Amazon Mechanical Turk.” Judgment and Decision Making, vol. 5, 2010, pp. 411-419.
Representativeness of online samples. Web surveys make it possible to cost-effectively reach members of relatively rare groups. But this can lead to incorrect conclusions when online group members differ from offline group members. Before collecting data online, it is important to understand what these differences might be and whether online samples are fit for the study’s objectives:
Chandler, J. “Surveying Vocational Rehabilitation Applicants Online: A Feasibility Study.” Journal of Disability Policy Studies, 2019, doi:10.1177/1044207319835188.
Chandler, J., C. Rosenzweig, A.J. Moss, J. Robinson, and L. Litman. “Online Panels in Social Science Research: Expanding Sampling Methods Beyond Mechanical Turk.” Behavior Research Methods, 2019.
Shapiro, D.N., J. Chandler, and P. Mueller. “Using Mechanical Turk to study clinical populations.” Clinical Psychological Science, vol. 1, no. 2, 2013, pp. 213-220.
Best practices in online data collection. The rapid growth of online panels as a data source and the anonymity of the respondents recruited from them have raised concerns about the quality of the data they produce. Some panel members may be seasoned “professional research participants” who try to maximize their payments by lying to gain access to online surveys and by completing them too quickly. Others may be careful, but answer questions differently by virtue of their experience. My colleagues and I examine the prevalence and impact of these respondents in several papers:
Chandler, J., I. Sisso, and D. Shapiro. “The Impact of Carelessness and Fraud on the Study of Rare Clinical Groups Online.” Journal of Abnormal Psychology, vol. 129, 2020, pp. 49-55.
Hauser, D.J., G. Paolacci, and J. Chandler. “Common concerns with MTurk as a participant pool: Evidence and solutions.” In Handbook of Research Methods in Consumer Psychology, edited by F.R. Kardes, P.M. Herr, and N. Schwarz. New York: Routledge, 2019.
Casey, L.S., J. Chandler, A.S. Levine, A. Proctor, and D.Z. Strolovitch. “Intertemporal Differences Among MTurk Workers: Time-Based Sample Variations and Implications for Online Data Collection.” SAGE Open, vol. 7, 2017.
Chandler, J., and G. Paolacci. “Lie for a Dime: When most prescreening responses are honest but most study participants are imposters.” Social Psychological and Personality Science, vol. 8, 2017, pp. 500-508.
Stewart, N., C. Ungemach, A.J.L. Harris, D.M. Bartels, B.R. Newell, G. Paolacci, and J. Chandler. “The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers.” Judgment and Decision Making, vol. 10, 2015, pp. 479-491.
Chandler, J., G. Paolacci, E. Pe’er, P. Mueller, and K. Ratliff. “Using nonnaive participants can reduce effect sizes.” Psychological Science, vol. 26, 2015, pp. 1131-1139.
Chandler, J., P. Mueller, and G. Paolacci. “Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers.” Behavior Research Methods, vol. 46, 2014, pp. 112-130.
Finally, research and tutorials on best practices in crowdsourcing recruitment methods can be found in the following publications and white papers:
Chandler, J., G. Paolacci, and P. Mueller. “Risks and rewards of crowdsourcing marketplaces.” In Handbook of Human Computation, edited by P. Michelucci. New York: Springer, 2014.
Pe'er, E., G. Paolacci, J. Chandler, and P.A. Mueller. “Selectively recruiting participants from Amazon Mechanical Turk using Qualtrics.” White paper, 2012. (Note: Amazon has since added features that make it far easier to do this within its own platform.)
Mueller, P. A., & J. Chandler “Emailing Amazon Mechanical Turk workers using Python.” 2012