This page lists 17 peer-reviewed papers classified as Survey in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: 10.1145/3551624.3555306
Citations: 62
-
Authors: S Zhang, J Xu, AJ Alvero
Year: 2025
Published in: Sociological Methods & Research, 2025 - journals.sagepub.com
Institution: University of Maryland, Indiana University, University of Minnesota Duluth
Research Area: Sociological Methods, Generative AI, Survey Methodology
Discipline: Sociology, Social Science
The study finds that 34% of research participants use generative AI tools such as large language models (LLMs) to help write open-ended survey responses, producing more homogeneous and more positive answers that can undermine data validity by masking social variation.
Methods: The study conducted an original survey on a popular online platform and simulated comparisons between human-written responses from pre-ChatGPT studies and LLM-generated responses.
Key Findings: Use of LLMs by survey participants, differences in text homogeneity, positivity, and masking of social variation in open-ended survey responses.
Citations: 26
-
Authors: T Mendel, N Singh, DM Mann, B Wiesenfeld
Year: 2025
Published in: Journal of medical ..., 2025 - jmir.org
Institution: The City University of New York, George Washington University, New York University
Research Area: LLMs in Digital Health, Health Queries, User Attitudes
Discipline: Digital Health
Laypeople primarily use search engines rather than large language models (LLMs) for health queries; they perceive LLMs as less useful but also as less biased and more human-like, with no significant difference in trust or ease of use.
Methods: A screening survey followed by logistic regression analysis and a follow-up survey; comparisons were performed using ANOVA, Tukey post hoc tests, and paired-sample Wilcoxon tests.
Key Findings: Demographics and behaviors of LLM and search engine users for health queries, perceived usefulness, ease of use, trustworthiness, bias, and anthropomorphism.
Citations: 21
Sample Size: 2002
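The group comparisons described in the entry above (ANOVA followed by post hoc and paired tests) can be illustrated with a minimal, self-contained sketch. The ratings and the `one_way_anova_f` helper below are hypothetical illustrations, not the study's data or code:

```python
# Minimal sketch of a one-way ANOVA F statistic computed by hand,
# comparing ratings across three hypothetical respondent groups.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 1-5 usefulness ratings: search engine vs. two LLM conditions
search = [5, 4, 5, 4, 4]
llm_a = [3, 3, 4, 2, 3]
llm_b = [3, 2, 3, 3, 2]
f_stat = one_way_anova_f([search, llm_a, llm_b])
```

In practice such comparisons would be run with a statistics package (e.g., `scipy.stats.f_oneway` plus a Tukey HSD procedure and `scipy.stats.wilcoxon` for the paired tests); the hand-rolled version above only shows the arithmetic behind the F statistic.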
-
Authors: M Alizadeh, E Hoes, F Gilardi
Year: 2023
Published in: Scientific Reports, 2023 - nature.com
Institution: University of Amsterdam (Department of Marketing), Università Degli Studi di Milano (Department of Social Sciences; Department of Political Science and International Relations)
Research Area: Social media, Misinformation, Computational Social Science
Discipline: Computational Social Science
Token-based incentives for social media engagement increase the sharing of misinformation, but implementing penalties for objectionable content can reduce this trend without fully eliminating it.
Methods: Survey experiment analyzing the impact of hypothetical token rewards and penalties on user willingness to share different types of news content.
Key Findings: Effect of token-based incentives and penalties on user engagement and the willingness to share misinformation.
DOI: 10.1038/s41598-023-40716-2
Citations: 20
-
Authors: SMC Loureiro, L Hollebeek, RA Rather
Year: 2025
Published in: Journal of Marketing ..., 2025 - Taylor & Francis
Institution: Instituto Universitário de Lisboa
Research Area: Marketing Communications, Social Media, Behavioral Science
Discipline: Marketing, Behavioral Science
Personalized advertising on social media enhances consumer-brand engagement and alleviates privacy concerns; privacy concerns themselves have no significant effect on consumer-brand engagement.
Methods: Grounded in social exchange theory, the study utilized a quantitative survey to assess relationships between personalized advertising, information control, privacy concerns, advertising avoidance, and brand engagement.
Key Findings: The interplay between personalized advertising, consumer brand engagement, privacy concerns, information control, and advertising avoidance.
Citations: 17
Sample Size: 429
-
Authors: M Chung
Year: 2023
Published in: Internet Research, 2023 - emerald.com
Institution: University of Washington, Emory University
Research Area: Algorithmic Knowledge, Misinformation Countermeasures, Comparative Media Studies, Information Science
Discipline: Information Science
The study examines how algorithmic knowledge influences attitudes and actions against misinformation, revealing that perceptions of media influence on self and others predict corrective actions and support for regulation differently across four countries.
Methods: Four national surveys were conducted in the USA, UK, South Korea, and Mexico, with data analyzed through multigroup structural equation modeling (SEM).
Key Findings: Algorithmic knowledge, perceived influence of misinformation on self and others, intention to correct misinformation, support for regulation and content moderation.
DOI: 10.1108/INTR-07-2022-0578
Citations: 14
Sample Size: 5432
-
Authors: J Yang, M Jiang, T Kim
Year: 2023
Published in: Journal of Promotion Management, 2023 - Taylor & Francis
Institution: Shanghai University of Finance and Economics, University of Kentucky, Bowling Green State University
Research Area: Marketing, Consumer Behavior, Advertising, Brand Management
Discipline: Marketing
Authentic COVID-19 advertisements were found to enhance consumers' perceptions of brand warmth, which consequently improved brand attitudes and engagement intentions through positively valenced emotional responses.
Methods: An online survey was used to gather consumer evaluations of COVID-19 video advertisements published between March and August 2020, with serial mediation analysis applied to understand emotional and perceptional mechanisms.
Key Findings: The relationship between perceived message authenticity in COVID-19 ads, brand warmth, emotional responses, brand attitudes, and engagement intentions.
DOI: 10.1080/10496491.2022.2143988
Citations: 9
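The mediation logic referenced in the entry above (authenticity → warmth → emotional response → attitude) reduces to products of regression coefficients along the path. A minimal sketch of a single mediation step, using made-up data and a hypothetical `slope` helper rather than the study's data or code:

```python
# Illustrative product-of-coefficients mediation estimate: X -> M -> Y.

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical data: X = perceived authenticity, M = brand warmth, Y = attitude
X = [1, 2, 3, 4, 5, 6]
M = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8]
Y = [1.0, 2.3, 3.1, 4.0, 5.2, 6.1]

a = slope(X, M)      # path a: X -> M
b = slope(M, Y)      # path b: M -> Y (simplified; not adjusted for X)
indirect = a * b     # indirect (mediated) effect estimate
```

A real serial mediation analysis chains several such paths, adjusts path b for the predictor, and bootstraps confidence intervals for the indirect effects (e.g., via the PROCESS macro or `statsmodels`); this sketch only shows the core product-of-coefficients idea.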
-
Authors: RG Rinderknecht, L Doan
Year: 2025
Published in: Sociological ..., 2025 - journals.sagepub.com
Institution: RAND
Research Area: Crowdsourcing Research Methods, Time Use Studies, Social Science
Discipline: Sociology
Time use patterns of MTurk and Prolific respondents differ significantly from those of the general U.S. population (ATUS): respondents report less housework and care work and spend more time at home and alone, even after accounting for demographic differences.
Methods: Time diaries were collected and analyzed for 136 MTurk and 156 Prolific respondents, then compared with 468 ATUS responses.
Key Findings: Daily time use patterns including work, housework, travel, leisure, and time spent alone or at home.
Citations: 6
Sample Size: 760
-
Authors: C Chen, Z Cui
Year: 2025
Published in: Journal of Medical Internet Research, 2025 - jmir.org
Institution: Medical College of Wisconsin
Research Area: Trust in AI, AI-assisted diagnosis, Health communication, Healthcare human-AI interaction
Discipline: Digital Health, Human-Computer Interaction (HCI), Behavioral Science
Patients trust, and are more likely to seek help from, doctors who explicitly avoid AI-assisted diagnosis over those who use AI extensively or moderately, indicating a strong aversion to AI in health care settings.
Methods: A randomized, web-based 4-group survey experiment was conducted with controls for sociodemographic factors and analysis using regression, mediation, and moderation techniques.
Key Findings: Trust in and intention to seek medical help from health care professionals using AI-assisted diagnosis versus those avoiding AI, and the influence of demographic, social, and experiential factors.
DOI: 10.2196/66083
Citations: 4
Sample Size: 1762
-
Authors: B Katz, N Abdelgawad, D Friedberg, P Roberts, S Misra
Year: 2025
Published in: Innovation in Aging, 2025 - pmc.ncbi.nlm.nih.gov
Institution: Virginia Tech
Research Area: Human–AI Interaction (HCI), Technology Perception
Discipline: Behavioral Science
Age significantly influences perceptions of generative AI tools, with older individuals perceiving more benefits and fewer risks compared to younger individuals; thinking dispositions also play a role.
Methods: A nationally representative survey of US adults conducted via the Prolific platform using various AI-relevant scales, including attitudes, risks, benefits, frequency of use, expertise, and literacy assessments.
Key Findings: Demographic factors, industry types, thinking dispositions, and attitudes toward generative AI tools, including risk and utility perceptions.
Citations: 1
Sample Size: 500
-
Authors: A Qian, R Shaw, L Dabbish, J Suh, H Shen
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Carnegie Mellon University, University of Pittsburgh, University of Utah, Yale School of Medicine, Yale University
Research Area: Responsible AI, Content Moderation, Risk Disclosure, Worker Well-being in Human-Computer Interaction (HCI)
Discipline: Computational Social Science, Human-Computer Interaction (HCI)
The paper examines how task designers approach well-being risk disclosure in Responsible AI (RAI) content work, highlighting a need for better frameworks to communicate such risks effectively.
Methods: Interviews were conducted with 23 task designers from academic and industry sectors to gather insights on risk recognition, interpretation, and communication practices.
Key Findings: How task designers recognize, interpret, and communicate well-being risks in RAI content work.
Citations: 1
Sample Size: 23
-
Authors: S Kwon, NL Kim
Year: 2025
Published in: International Textile and Apparel ..., 2025 - iastatedigitalpress.com
Institution: University of Minnesota
Research Area: Social Media Advertising, Consumer Perception, Information Collection Ethics in Marketing, Social Science
Discipline: Social Science, Marketing
Consumers are more willing to disclose personal information in social media advertising when they perceive exchanged benefits, such as monetary rewards and personalized recommendations, outweigh the risks; the method of information collection (overt vs. covert) does not significantly affect this decision.
Methods: An online survey was conducted among U.S. Instagram users to assess attitudes toward benefit-risk trade-offs in personal data disclosure for advertising purposes.
Key Findings: Willingness to disclose personal information, click-through intentions, and purchase intentions based on perceived benefits and risks in social media advertisements.
DOI: https://doi.org/10.31274/itaa.18830
Citations: 1
Sample Size: 199
-
Authors: W van Zoonen, ME von Bonsdorff
Year: 2025
Published in: human ..., 2025 - journals.sagepub.com
Institution: Wageningen University & Research, University of Twente
Research Area: Organizational Behavior, Human Resources, Workplace Technology and Ethics
Discipline: Social Science
Algorithmic surveillance undermines trust and perceived fairness and increases privacy concerns among crowdworkers, shaping whether they comply, alter their behavior, or resist; perceived decontextualization intensifies these dynamics.
Methods: Three-wave survey data analysis of European online crowdworkers, analyzed through socio-technical systems theory and micro-level legitimacy frameworks.
Key Findings: The effects of algorithmic surveillance on trust, privacy concerns, fairness, and workers' compliance, alteration, or resistance, with a focus on the moderating role of perceived decontextualization.
Sample Size: 435
-
Authors: K Grosse, N Ebert
Year: 2025
Published in: arXiv
Institution: IBM Research, ZHAW
Research Area: Security and privacy risks, LLM, human–AI interaction, AI Safety
Discipline: Computer Science
A survey of 3,270 UK adults reveals significant security and privacy risks in AI conversational agent usage: a third engage in risky behaviors that enable attacks, and many are unaware of how their data are used or how to opt out.
Methods: Representative survey conducted via Prolific platform targeting UK adults, focusing on usage behaviors of AI conversational agents.
Key Findings: User behaviors related to security and privacy risks, data sanitization practices, attempts to jailbreak AI models, and awareness of data usage policies.
Sample Size: 3270
-
Authors: T Kaufmann, P Weng, V Bengs, E Hüllermeier
Year: 2024
Published in: 2024 - epub.ub.uni-muenchen.de
Institution: Paderborn University, German Research Center for Artificial Intelligence (DFKI), Duke Kunshan University
Research Area: Reinforcement Learning from Human Feedback (RLHF), LLM, Reward Modeling
Discipline: Artificial Intelligence
This paper surveys the fundamentals, diverse applications, and evolving impact of reinforcement learning from human feedback (RLHF), emphasizing its role in improving intelligent system alignment and performance.
Methods: The paper utilizes a survey-based approach to synthesize existing research, exploring the interactions between reinforcement learning algorithms and human input.
Key Findings: The study examines the principles, dynamics, applications, and trends in RLHF, offering insights into its role in enhancing large language models (LLMs) and intelligent systems.
Citations: 354
-
Authors: RA Stone, A Brown, F Douglas, M Green, E Hunter, M Lonnie, AM Johnstone, CA Hardman, FIO-Food Team
Year: 2024
Published in: ScienceDirect
Institution: Robert Gordon University, University College London, University of Aberdeen, University of Liverpool
Research Area: Food Insecurity, Public Health, Behavioral Economics (focusing on food purchasing behaviors and preparation practices related to obesity, cost of living)
Discipline: Public Health
The study examines how the UK cost of living crisis affects food purchasing and preparation behaviors in people with obesity, highlighting food insecurity and associated coping strategies, and calls for policy interventions to improve access to healthy foods.
Methods: An online survey was conducted with self-reported data on food insecurity, diet quality, cost of living impact, and food purchasing/preparation behaviors among adults with BMI ≥ 30 kg/m² in England or Scotland.
Key Findings: Food insecurity, diet quality, impacts of the cost of living crisis, food purchasing behaviors, and food preparation practices among participants.
Citations: 46
Sample Size: 583
-
Authors: N Emaminejad, L Kath, R Akhavian
Year: 2024
Published in: Journal of Computing in Civil ..., 2024 - ascelibrary.org
Institution: San Diego State University
Research Area: Civil Engineering, Artificial Intelligence, Human-Robot Interaction
Discipline: Engineering
Trust in AI-powered collaborative robots (cobots) in construction is mainly influenced by safety, reliability, and transparency, while fear of job replacement can negatively impact mental health and adoption. Structural equation modeling highlights factors like error rates, data security, and communication as critical to fostering trust among AEC professionals.
Methods: Nationwide survey of AEC professionals analyzed using structural equation modeling (SEM) to assess trust determinants for AI-powered cobots.
Key Findings: Technical and psychological factors influencing trust in AI-powered cobots, including safety, reliability, error rate, data security, and communication transparency.
Citations: 22
Sample Size: 600