This page lists 4 peer-reviewed papers (2022–2025) from researchers at Bocconi University in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: G Beknazar-Yuzbashev, R Jiménez-Durán, J McCrosky
Year: 2025
Published in: econstor.eu, 2025
Institution: Mozilla Foundation, Columbia University, Bocconi University, Stanford University, University of Warwick
Research Area: Social Media, User Engagement, Toxicity
Discipline: Social Science
Reducing exposure to toxic content on social media lowers user engagement but also decreases the toxicity of user-generated content, highlighting a trade-off for platforms between curbing toxicity and maximizing engagement.
Methods: Pre-registered field experiment using a browser extension that randomly hid toxic content on Facebook, Twitter, and YouTube for six weeks; supplemented with a survey experiment.
Key Findings: Measured the impact of reduced exposure to toxic content on advertising impressions, time spent, engagement, and the toxicity of user-generated content; also examined curiosity as a driver of engagement and the alignment between engagement and user welfare.
Citations: 76
-
Authors: HR Kirk, M Bartolo, A Whitefield, P Rottger
Year: 2024
Published in: Advances in ..., 2024 - proceedings.neurips.cc
Institution: Meta, Cohere, AWS AI Labs, Contextual AI, Factored AI, University of Oxford, Bocconi University, Meedan, Hugging Face, University College London, ML Commons, University of Pennsylvania
Research Area: LLM Alignment, Human Feedback, Multicultural Studies
Discipline: Artificial Intelligence, Computational Social Science
The PRISM Alignment Dataset presents a large-scale, culturally diverse human feedback dataset linking sociodemographic profiles of 1,500 participants from 75 countries to their contextual preferences and fine-grained ratings in 8,011 live conversations with 21 LLMs. This enables analysis of how subjective values vary across people and cultures in LLM alignment data.
DOI: https://doi.org/10.52202/079017-3342
Citations: 204
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
The persuasiveness of messages generated by large language models follows a log scaling law with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: How the persuasiveness of LLM-generated political messages varies with model size.
Citations: 17
Sample Size: 25982
-
Authors: L Bursztyn, A Imas, R Jiménez-Durán, A Leonard
Year: 2022
Published in: nber.org, 2025
Institution: University of Chicago, National Bureau of Economic Research, Booth School of Business, Bocconi University, Stanford University
Research Area: Behavioral Economics, Social Dynamics of Technology Adoption
Discipline: Behavioral Economics
Citations: 2