
Crowdsourcing: A Critical Reflection on This New Frontier of Participant Recruiting in Nutrition and Dietetics Research

Published: January 22, 2020. DOI: https://doi.org/10.1016/j.jand.2019.10.018


In this issue, Khandpur and colleagues report a study assessing the effects of the new Nutrition Facts label (NFL), compared with the current NFL, on consumer purchase intentions and understanding of added sugars. In a randomized controlled experiment, participants were randomly assigned to one of two label conditions, one of which served as the control. The authors reported differences in scores between the two label formats measured using a 5-point Likert scale, as well as differences in a knowledge score derived from multiple-choice questions. What makes this study particularly intriguing is the randomized study design and the efficient use of crowdsourcing to recruit, screen, and manage more than 1,000 online participants (from an initial response of more than 4,000) who met specific inclusion criteria, all within a relatively short period of time.
Crowdsourcing is the process of collecting information from an established network of people via an online platform (Crequit et al.).
Although there are now multiple crowdsource providers, Amazon Mechanical Turk (MTurk) is considered at this time to be the main Web platform. Briefly, MTurk is a marketplace where people (“workers”) complete paid tasks for various other people and organizations (“requestors”). Tasks are known as HITs, or Human Intelligence Tasks, and are advertised to workers with a brief description of the task, the maximum time allotted, specific qualifications requested (eg, located in the United States, history of successfully completed HITs), and the monetary reward associated with completing the HIT. Once the requestor confirms the HIT was completed successfully, the worker receives payment. The requestor can also specify how many people they want to complete the HIT and can choose to reject HITs from people who attempt to complete the HIT multiple times or fail to provide proof of successful completion. The reward for most HITs is usually less than $1.00, which has been a criticism of crowdsourcing platforms like MTurk (Semuels).
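To make these mechanics concrete, the sketch below shows how a requestor might post such a HIT programmatically. It is a minimal example using the boto3 MTurk client against the requester sandbox; the survey URL, reward, sample size, and qualification thresholds are illustrative assumptions rather than settings from any study discussed here, and the built-in qualification IDs should be verified against current AWS documentation.

```python
import boto3

# Connect to the MTurk requester sandbox (assumes AWS credentials are configured locally).
# Removing endpoint_url would target the live marketplace instead.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Qualification requirements restrict who can see and accept the HIT,
# eg, workers located in the United States with a strong approval history.
qualifications = [
    {
        "QualificationTypeId": "00000000000000000071",  # built-in Locale qualification (verify ID)
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {
        "QualificationTypeId": "000000000000000000L0",  # built-in percent-approved qualification (verify ID)
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
]

# The question can be an ExternalQuestion that points workers to a survey
# hosted on the researcher's own server (hypothetical URL).
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/nutrition-label-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="20-minute survey on food label comprehension",
    Description="Answer questions about a nutrition label (research study).",
    Keywords="survey, nutrition, research",
    Reward="1.50",                        # US dollars, passed as a string
    MaxAssignments=1000,                  # how many distinct workers may complete the HIT
    AssignmentDurationInSeconds=45 * 60,  # maximum time allotted per worker
    LifetimeInSeconds=7 * 24 * 3600,      # how long the HIT stays posted
    QualificationRequirements=qualifications,
    Question=external_question,
)
print("HIT posted:", response["HIT"]["HITId"])
```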
MTurk has had broad use since 2012 in the social science and organizational management literature. Several comprehensive reviews have been published on its use in organizational behavior and health venues (Crequit et al.; Keith et al.).
A single social science experiment conducted with MTurk in May 2016 resulted in 23,000 people completing 230,000 tasks comprising 3.3 million minutes in 30 days (Bohannon).
MTurk has some additional facets that potentially facilitate its use for research questions. Each worker is assigned a unique, unidentifiable ID for each job, which can help prevent duplicate enrollment. The data are collected on a server separate from the Amazon portal, so the data can be accessed only by the researcher. Workers can be prescreened based on qualification ratings from their prior completion of jobs. Payment is made electronically by MTurk to worker accounts, and the requestor is billed a percentage fee on top of the reward. The amount of payment is aligned with the estimated time to complete similar surveys advertised on MTurk (typically US $0.50 to $1.50). The posting of the job can be controlled by day, time, and frequency until the enrollment goal has been met (Cunningham et al.; Arechar et al.).
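As a rough illustration of how those payment terms translate into a study budget, the short calculation below totals worker payments plus a percentage platform fee. The reward, target sample size, and fee rate are illustrative assumptions only; the platform's current fee schedule should always be checked.

```python
# Back-of-the-envelope recruitment budget (assumed figures, not from any cited study).
reward_per_hit = 1.00      # US dollars paid to each worker (assumption)
completed_hits = 1000      # target number of completed surveys (assumption)
platform_fee_rate = 0.20   # example requestor fee; check the platform's current schedule

worker_payments = reward_per_hit * completed_hits
platform_fees = worker_payments * platform_fee_rate
total_cost = worker_payments + platform_fees

print(f"Worker payments: ${worker_payments:,.2f}")  # $1,000.00
print(f"Platform fees:   ${platform_fees:,.2f}")    # $200.00
print(f"Total cost:      ${total_cost:,.2f}")       # $1,200.00
```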
Other crowdsourcing platforms’ parameters may differ. There are extensive blog networks geared to workers that share strategies to maximize participation and pay. MTurk, as well as the other platforms, may change its participation rules at any time.
The strength of crowdsourcing lies in the ability to collect data much faster than traditional methods because of the competitive nature of recruitment. Financial cost and time investment are usually lower than for researcher-led recruitment on other social platforms or by traditional methods (posters, flyers, listservs). Many researchers choose to place their survey or study on a separate online platform and provide the study link within the HIT. This allows data to be collected on a secure server that can be accessed only by the researcher. It also allows researchers, such as Khandpur and colleagues, to assign workers to different study conditions to observe behavioral outcomes, essentially conducting a rapid randomized controlled trial with a large online study population. Thus, MTurk is advantageous when a topic is timely and relevant, when preliminary or pilot data are needed for a grant application, or when it would be substantially more efficient to conduct an online randomized controlled trial than to recruit the same number of study participants through in-person venues.
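A minimal sketch of what that assignment step might look like on the researcher's own survey server appears below. The condition names, worker IDs, and simple coin-flip allocation are illustrative assumptions, not details of the Khandpur et al. protocol; in practice a survey platform's built-in randomizer or a blocked randomization scheme would typically be used.

```python
import random

# Illustrative study arms; names are hypothetical, not from the cited study.
CONDITIONS = ["current_label", "new_label"]

def assign_condition(worker_id: str, rng: random.Random) -> str:
    """Randomly assign one consenting worker to a study arm and log the allocation."""
    condition = rng.choice(CONDITIONS)
    # In practice the allocation would be stored alongside the deidentified
    # worker ID so responses can later be analyzed by assigned label format.
    print(f"{worker_id} -> {condition}")
    return condition

rng = random.Random(42)  # fixed seed so the allocation sequence is reproducible
for wid in ["worker_001", "worker_002", "worker_003"]:
    assign_condition(wid, rng)
```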
Examples of recent MTurk methodology in nutrition have begun to enter the literature. The Child Health and Nutrition Research Initiative (CHNRI), a transparent and systematic method for setting research priorities in global health, recently reported on setting the weights of 15 CHNRI criteria using public stakeholders recruited globally through MTurk (Wazny et al.).
Two other published reports have centered on the behavior and perceptions of Supplemental Nutrition Assistance Program participants (Leung et al.; Leung and Wolfson).
The primary limitation of crowdsourcing is the potential for selection bias. Although several studies have described the characteristics of MTurk workers, it may still be difficult to characterize the study population for each HIT. Compared with US Census data, the worker pool has been identified as potentially biased: workers tend to have higher education and lower income, are more likely to be Asian or Caucasian, and are far more technology savvy (Keith et al.). Workers self-report inclusion criteria, so the researcher has no ability to validate the information. One systematic review reported that 60% of published studies using MTurk crowdsourcing were missing age and gender data (Crequit et al.).
Moreover, some workers may have multiple accounts to bypass screening criteria, complete the HIT, and receive payment. Researchers using MTurk to generate data may want to validate their findings in other study populations recruited through more conventional means. A recent report compared MTurk with unpaid Internet resources for recruiting to online clinical trials and expressed concern that crowdsourced recruitment may not represent true patients interested in clinical interventions (Bunge et al.).
An additional concern with online paid participation is the practice of “satisficing,” defined as providing answers that are merely good enough rather than the thoughtful, carefully considered answers characteristic of optimizing (Hamby and Taylor). The risk of satisficing is common to all survey methodologies, and researchers can mitigate it through survey design and response-quality checks. A crowdsourcing environment may be unique in promoting rapid movement through a “job” to achieve completion so the worker can go on to accept another paid job. A recent report by Hamby and Taylor compared a sample of academically focused college students earning course credit for participation with a sample of MTurk workers paid US $0.49. The authors observed that validity and reliability measures can be inflated by satisficing behavior. They further outlined a few basic design factors that might reduce this inflation, including survey length, attention checks built in as discrete design elements to detect satisficing behavior, and early awareness by researchers of satisficing trends.
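A small sketch of how such attention checks and completion-time screens might be scored after data collection is shown below. The item names, expected answers, and time threshold are illustrative assumptions and would need to be calibrated to the specific survey.

```python
from dataclasses import dataclass

@dataclass
class Response:
    worker_id: str
    answers: dict             # question_id -> selected answer
    seconds_to_complete: float

# Hypothetical attention-check items embedded in the survey, each with the
# single answer an attentive respondent should give (eg, "Select 'Agree'").
ATTENTION_CHECKS = {"ac_1": "Agree", "ac_2": "Blue"}

# Completion-time floor (seconds) below which satisficing is suspected;
# this threshold is an assumption and would be calibrated per survey.
MIN_PLAUSIBLE_SECONDS = 180

def flag_satisficing(resp: Response) -> bool:
    """Flag a response that fails any attention check or finishes implausibly fast."""
    failed_checks = any(
        resp.answers.get(item) != expected
        for item, expected in ATTENTION_CHECKS.items()
    )
    too_fast = resp.seconds_to_complete < MIN_PLAUSIBLE_SECONDS
    return failed_checks or too_fast

# Example: this respondent answered an attention check incorrectly and is flagged.
example = Response("worker_042", {"ac_1": "Agree", "ac_2": "Red"}, 240.0)
print(flag_satisficing(example))  # True
```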
Online electronic surveys are not new, and their use in the scientific literature has continued to expand over the last decade. Recruiting participants online carries a substantial financial incentive, reducing costs compared with pencil-and-paper or face-to-face scenarios. What is new is the ability to rapidly “find” thousands of people from all over the world willing to participate for a relatively small fee (US $1.50 for completing the 20-minute NFL survey in the Khandpur study). Because the use of this technology began in market research, it is important to reflect on how it interfaces with scientific inquiry compared with market research.
Market research is the process of gathering and analyzing information to help businesses, governments, and researchers identify attitudes toward products, services, people, and so on. It helps reduce risk in decision making by using forecasting data on consumer buying patterns and interest. An example of market research in nutrition and dietetics is the Academy of Nutrition and Dietetics’ Compensation and Benefits Survey. Data are collected online from volunteer Academy members to produce a nationwide survey that “details the compensation of dozens of core dietitian and dietetic technician jobs, broken down by region, education, experience, supervisory responsibility, and much more” (Rogers).
More often, market research is considered primarily subjective and opinion based. Results are disseminated within an industry journal or newsletter, or internally within a business setting. Market research differs from the scientific method, which requires the ability to replicate a study to gather additional evidence or confirm findings. Rather, the results of market research are used to drive predictive decision making, and outcomes may or may not be quantitatively measured. An example is a report in Food Business News on collagen and mushroom ingredient trends in functional beverages. The article uses market research from The NPD Group of Chicago, citing that “almost a quarter of U.S. adults are trying to manage a health or medical condition through their diet,” and includes data from consumer polling (Berry).
Nutrition labeling could be the topic of a market research consumer opinion poll; an example might be asking consumers which label they find more engaging, looks better, or would catch their eye at the supermarket. Khandpur et al, however, posed a similar question through scientific inquiry using a randomized study design.
There are several important differences between market research and scientific inquiry. Scientific inquiry is defined as the systematic formulation of a series of steps to test a hypothesis or analyze data under controlled experimental conditions. These methodical steps include: (1) stating research objectives or hypotheses a priori; (2) obtaining Institutional Review Board approval for human or animal participation, along with participant consent; (3) establishing the validity and reliability of survey content; (4) analyzing data using standardized statistical methodology (quantitative) or thematic analysis (qualitative); and (5) undergoing peer review of results prior to formal publication.
Although crowdsourcing has traditionally been used in market research, Khandpur and colleagues demonstrate that it can also be used effectively in nutrition-related research when specific design and reporting attributes are incorporated. The Figure provides recommended guidelines for the use and reporting of crowdsourcing platforms when collecting data for scientific inquiry. The overall goal is to share enough detail for another researcher to be able to replicate one’s work and add to the literature in a new and meaningful way.
Figure. Optimal information to report in a scientific inquiry manuscript when using crowdsourcing as a participant source.

Information component | Suggested optimal reporting content
Research question | State research question; provide IRB(a) approval
Survey parameters | Copy of survey provided (e-appendix); information on survey development or origin with validation and reliability statistics; software platform of survey delivery
How participants (workers) were recruited | Subject line that announced job opportunity to potential workers
“Job” or HIT(b) to be completed | Actual survey, questionnaire, and job parameters provided in appendix, supplementary table, or flowchart (how worker would move through job steps); include reading level
Participant (worker) criteria | Detailed inclusion and exclusion criteria; any screening process employed as additional steps
“Job” or HIT description | Exact wording of job or HIT as published on platform to workers
Crowdsource platform | Vendor location, website
Timeline for worker recruitment | Posting of job or HIT: time of day offered, how long job available for worker acceptance
Timeline for worker acceptance | Time to meet recruitment goal; pattern of enrollment
Participant or worker characteristics | Statistics on number of acceptances, survey completers and noncompleters; consider flowchart
Compensation offered | Level, how determined
Cost of investigator out-of-pocket fees | Fee paid for crowdsource platform use
Participant or worker demographics | Detailed information in tabular form; comparison with reference population with P values to show compatibility of sample (bias identified, if present)
Limitations and strengths | Provided in discussion; details to improve replication by others

(a) IRB = Institutional Review Board.
(b) HIT = human intelligence task.
In summary, evidence-based research is the backbone of progress in science, particularly in the health sciences, in which the field of nutrition and dietetics plays an essential role. Researchers in nutrition and dietetics should be aware of and embrace new methodological trends while maintaining the scientific rigor of the process. Crowdsourcing is one such trend, and technological advances provide new venues for participant recruitment. Reporting extensive details of the methodology and analysis enables other researchers to replicate the study and either confirm or refute prior evidence. The aim of every nutrition professional should be to conduct and mentor research activities that promote the highest level of practice.

      References

Khandpur N, Rimm ER, Moran AJ. The influence of the new US Nutrition Facts label on consumer perceptions and understanding of added sugars: a randomized controlled experiment. J Acad Nutr Diet. 2020;120:197-209.

Crequit P, Mansouri G, Benchoufi M, Vivot A, Ravaud P. Mapping of crowdsourcing in health: systematic review. J Med Internet Res. 2018;20:e187.

Semuels A. The internet is enabling a new kind of poorly paid hell. The Atlantic. Published January 23, 2018. Accessed September 10, 2019.

Keith MG, Tay L, Harms PD. Systems perspective of Amazon Mechanical Turk for organizational research: review and recommendations. Front Psychol. 2017;8:Article 1359.

Bohannon J. Psychology. Mechanical Turk upends social sciences. Science. 2016;352:1263-1264.

Cunningham JA, Godinho A, Kushnir V. Using Mechanical Turk to recruit participants for internet intervention research: experience from recruitment for four trials targeting hazardous alcohol consumption. BMC Med Res Methodol. 2017;17:156.

Arechar AA, Kraft-Todd G, Rand DG. Turking overtime: how participant characteristics and behavior vary over time and day on Amazon Mechanical Turk. J Econ Sci Assoc. 2017;3:1-11.

Wazny K, Ravenscroft J, Chan KY, Bassani DG, Anderson N, Rudan I. Setting global weights for fifteen CHNRI criteria at the global and regional level using public stakeholders: an Amazon Mechanical Turk study. J Glob Health. 2019;9:010702.

Leung CW, Musicus A, Willett WC, Rimm ER. Improving nutritional impact in supplemental nutrition assistance program: perspectives from the participants. Am J Prev Med. 2017;52:S193-S198.

Leung CW, Wolfson JA. Perspectives from Supplemental Nutrition Assistance Program participants on improving SNAP policy. Health Equity. 2019;3:81-85.

Bunge E, Cook HM, Bond M, et al. Comparing Amazon Turk with unpaid internet resources in online clinical trials. Internet Interv. 2018;12:68-73.

Hamby T, Taylor W. Survey satisficing inflates reliability and validity measures: an experimental comparison of college and Amazon Mechanical Turk samples. Educ Psychol Meas. 2016;76:912-983.

Rogers D. Compensation and benefits survey 2017. J Acad Nutr Diet. 2018;118:499-511.

Berry D. Collagen, mushrooms trending in functional beverages. Food Business News. Accessed November 29, 2019.

Van Horn L, Beto J, eds. Research: Successful Approaches in Nutrition and Dietetics. 4th ed. Chicago, IL: Academy of Nutrition and Dietetics; 2019.

      Biography

      J. A. Beto is professor emeritus, Nutrition Sciences, Dominican University, River Forest, IL.
      E. Metallinos-Katsaras is the Ruby Winslow Linn professor and chair, Department of Nutrition in the College of Natural, Behavioral and Health Science, Simmons University, Boston, MA.
      C. Leung is an assistant professor, Department of Nutritional Sciences, University of Michigan School of Public Health, Ann Arbor, MI.