Crowdsourcing platforms like Amazon Mechanical Turk have created the opportunity to democratize science by making diverse research participants easily available to researchers. I have published several research studies and reviews that may be useful for researchers interested in using crowdsourcing platforms to conduct research.

Review papers on using Mechanical Turk in the behavioral sciences:

Chandler, J., & Shapiro, D. (2016). Conducting clinical research using crowdsourced convenience samples. Annual Review of Clinical Psychology, 12, 53-81. 

Paolacci, G., Chandler, J., & Ipeirotis, P. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5, 411-419. 

Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23, 184-188.

Stewart, N., Chandler, J., & Paolacci, G. (2017). Crowdsourcing samples in cognitive science. Trends in Cognitive Sciences, 21(10), 736-748.

Composition of Mechanical Turk samples: 

Casey, L. S., Chandler, J., Levine, A. S., Proctor, A., & Strolovitch, D. Z. (2017). Intertemporal differences among MTurk workers: Time-based sample variations and implications for online data collection. SAGE Open, 7(2), 2158244017712774.

Shapiro, D. N., Chandler, J., & Mueller, P. A. (2013). Using Mechanical Turk to study clinical populations. Clinical Psychological Science, 1, 213-220.

Stewart, N., Ungemach, C., Harris, A. J., Bartels, D. M., Newell, B. R., Paolacci, G., & Chandler, J. (2015). The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers. Judgment and Decision Making, 10(5), 479.

Fraudulent responses on Mechanical Turk: 

Chandler, J., & Paolacci, G. (in press). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science.

Non-naive participants: 

Stewart, N., Ungemach, C., Harris, A. J., Bartels, D. M., Newell, B. R., Paolacci, G., & Chandler, J. (2015). The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers. Judgment and Decision Making, 10(5), 479.

Chandler, J., Paolacci, G., Peer, E., Mueller, P., & Ratliff, K. (2015). Using nonnaive participants can reduce effect sizes. Psychological Science, 26, 1131-1139.

Chandler, J., Mueller, P. A., & Paolacci, G. (2013). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112-130.

Finally, research and tutorials on best practices in crowdsourcing recruitment methods can be found in the following publications and white papers: 

Chandler, J., Paolacci, G., & Mueller, P. A. (2014). Risks and rewards of crowdsourcing marketplaces. In P. Michelucci (Ed.), Handbook of Human Computation (pp. 377-392). New York, NY: Springer.

Pe'er, E., Paolacci, G., Chandler, J., & Mueller, P. A. (2012). Selectively recruiting participants from Amazon Mechanical Turk using Qualtrics. (Note: Amazon has since added features that make it far easier to do this within its own platform.)

Mueller, P. A., & Chandler, J. (2012). Emailing Amazon Mechanical Turk workers using Python.