Published on 29.12.2023 in Vol 9 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/49881.
Social Media, Public Health Research, and Vulnerability: Considerations to Advance Ethical Guidelines and Strengthen Future Research

Viewpoint

1Department of Community Health Sciences, Fielding School of Public Health, UCLA, Los Angeles, CA, United States

2College of Education and Health Professions, University of Arkansas, Fayetteville, AR, United States

3Department of Health Behavior, Texas A&M University, College Station, TX, United States

4Recovery Research Institute, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States

5College of Health Solutions, Arizona State University, Phoenix, AZ, United States

Corresponding Author:

Philip M Massey, MPH, PhD

Department of Community Health Sciences

Fielding School of Public Health

UCLA

Box 951772, Suite 16-035 CHS

650 Charles E. Young Dr. South

Los Angeles, CA, 90095

United States

Phone: 1 310 825 5308

Email: pmassey@ucla.edu


The purpose of this article is to build upon prior work in social media research and ethics by highlighting an important and as yet underdeveloped research consideration: how should we consider vulnerability when conducting public health research in the social media environment? The use of social media in public health, both platforms and their data, has advanced the field dramatically over the past 2 decades. Applied public health research in the social media space has led to more robust surveillance tools and analytic strategies, more targeted recruitment activities, and more tailored health education. Ethical guidelines when using social media for public health research must also expand alongside these increasing capabilities and uses. Privacy, consent, and confidentiality have been hallmarks for ethical frameworks both in public health and social media research. To date, public health ethics scholarship has focused largely on practical guidelines and considerations for writing and reviewing social media research protocols. Such ethical guidelines have included collecting public data, reporting anonymized or aggregate results, and obtaining informed consent virtually. Our pursuit of the question related to vulnerability and public health research in the social media environment extends this foundational work in ethical guidelines and seeks to advance research in this field and to provide a solid ethical footing on which future research can thrive.

JMIR Public Health Surveill 2023;9:e49881

doi:10.2196/49881



In October 2021, the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security convened a hearing titled “Protecting kids online: testimony from a Facebook whistleblower” [1]. While this hearing focused on the protection of children, highlighting the amplification of content related to eating disorders targeting teenagers as well as the platform’s “blind eye” toward age verification, broader takeaways included the platform’s ability to create and cultivate a manipulative environment on social media. In part, this Senate hearing, which examined questions about the inner workings of the social media ecosystem, was spurred by the 2016 election scandals [2], the spread of misinformation during the COVID-19 pandemic [3], and the mental health crisis that has most notably affected our young people [4]. While public health efforts have played a critical role in combating misinformation on social media as well as addressing the mental health crisis, little has been done to examine the fundamental question that prompted Congress’ interest in social media: can social media create a manipulative environment that makes us vulnerable to undue influence? The short answer is yes, as documented by congressional hearings [1,5-7], independent research [8-11], and investigative journalism [12-15].


The driver behind the discussion of social media manipulation appears to hinge on one key idea: algorithms. While research has examined how algorithms create an inescapable environment and thus an extensive network primed for digital discrimination, systematic bias, unethical targeting, and misinformation and disinformation campaigns [16-18], we still do not know enough about how algorithms function, what goals inform these functions, or the impact of algorithms on public health research and practice.

While algorithms and their ethical concerns have entered various dialogues, from congressional hearings to investigative journalism, little discussion has taken place in the field of public health. To date, public health ethics has focused largely on practical guidelines for writing and reviewing social media research protocols. Such ethical guidelines have included collecting public data, reporting anonymized or aggregate results, and obtaining informed consent virtually, to name a few. Susser [19] extends these considerations by discussing the role of manipulation, autonomy, and bias for digitally targeted public health interventions. Ethical considerations when using social media for public health research must expand alongside our increased understanding of how the social media environment functions, specifically concerning the presence of algorithms and how these may contribute to issues related to vulnerability. Privacy, consent, and confidentiality are important hallmarks for ethical frameworks in social media research [20], but we must move beyond these foundational questions and begin unpacking how our research and practice may or may not contribute to and benefit from the manipulative environments that many experience on social media.

The use of social media in public health has advanced the field dramatically over the last 2 decades. Traditional public health methods in surveillance and outbreak investigation [21], approaches in health education and promotion [22], and strategies in policy advocacy [23] and community organizing [24] have all been applied, refined, and adapted for the social media environment. Social media research, or the process of using social media data to conduct quantitative or qualitative research, ranging from observational data collection to experimental designs, is an invaluable tool in public health research and practice and continues to expand both in terms of how it is conducted and where it takes place [20]. Decades of applied public health research in the social media space have led to more robust surveillance tools and analytic strategies, more targeted recruitment activities, and more tailored health education [21]. However, more must be done to advance our understanding of algorithms and how they may ultimately compromise data for public health research and practice.

The potential use of compromised information in public health research is relevant to both observational and intervention research on social media. For instance, when conducting observational research that collects public data from individual accounts related to a specific topic (eg, vaccine safety), how do we disentangle the extent to which content was shared due to behind-the-scenes platform manipulation (ie, due to algorithms that place content in a user’s thread with the goal of increasing interaction and engagement and with little regard to the content itself)? Adding to this complexity is the presence of social media bots, or automated programs, that artificially amplify or spread content based on an array of goals (eg, to spread disinformation, notify of emergencies, share advertisements, or aggregate news articles) [25].

Similarly, for intervention research, do we know how interacting with social media content produced for a research study may influence the platform’s tailoring of future content for that same individual (eg, joining a vaccine-related research study may place the individual at greater risk of being exposed to future highly engaging vaccine information, which is more often than not misinformation)? Intervention research often takes place in closed or “private” groups on social media, making it easier to moderate and monitor the content directly administered by the study; however, the closed group exists within the larger ecosystem of the platform, and we do not yet know how participation in a research study may impact content exposure outside of that closed environment. Furthermore, when using social media to recruit study participants, we must also consider the potential collection and use of compromised information. For example, how much do targeted recruitment ads rely on interactions by users with content that was manipulated or artificially placed in a user’s thread to solicit interaction? These questions highlight very practical ways in which seemingly innocuous research activities (eg, public data collection or targeted recruitment) may in fact interact with and rely on compromised information: information that is, to an undetermined extent, artificially manipulated by opaque algorithmic intervention. This reliance contributes to vulnerabilities that have yet to be considered by public health research taking place on social media.


We wish to move the field of public health and social media research forward by posing the following question: how should we consider vulnerability when conducting public health research in the social media environment? We pose this question not to limit or stifle public health research in the social media environment; in fact, quite the contrary—we pose this question to activate our collective understanding and consciousness to strengthen research in this environment, in part due to the ever-changing social media landscape. At its core, the primary goal of the social media ecosystem is to keep users on the platform, interacting and engaging with content, for as long as possible, often at any cost [16]. Our hope is that the issues raised here will do the following: (1) contribute to frameworks that more clearly describe how vulnerability, much like privacy, consent, and confidentiality, is an essential concept for conducting ethical social media research; and (2) establish the need for partnerships with social media companies, supported through federal resources, that will facilitate collaborative yet independent research led by academic partners. While algorithms themselves are not nefarious, it is the intent behind their use, and the goals and parameters that govern how they are deployed, that evoke concerns surrounding manipulation that may contribute to and enhance various vulnerabilities.

Conflicts of Interest

None declared.

  1. Subcommittee: protecting kids online: testimony from a Facebook whistleblower. U.S. Senate Committee on Commerce, Science, and Transportation. URL: https://www.commerce.senate.gov/2021/10/protecting%20kids%20online:%20testimony%20from%20a%20facebook%20whistleblower [accessed 2023-09-13]
  2. Rosenberg M, Confessore N, Cadwalladr C. How Trump consultants exploited the Facebook data of millions. The New York Times. URL: https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html [accessed 2023-09-21]
  3. Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. Jul 2020;31(7):770-780. [FREE Full text] [CrossRef] [Medline]
  4. Office of the Surgeon General. Social media and youth mental health. The U.S. Surgeon General’s Advisory. URL: https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf [accessed 2023-12-18]
  5. Algorithms and amplification: how social media platforms’ design choices shape our discourse and our minds. U.S. Senate Committee on the Judiciary. URL: https://www.judiciary.senate.gov/committee-activity/hearings/algorithms-and-amplification-how-social-media-platforms-design-choices-shape-our-discourse-and-our-minds [accessed 2023-09-13]
  6. Facebook, social media privacy, and the use and abuse of data. U.S. Senate Committee on the Judiciary. URL: https://www.judiciary.senate.gov/committee-activity/hearings/facebook-social-media-privacy-and-the-use-and-abuse-of-data [accessed 2023-09-13]
  7. Bond S, Allyn B. Whistleblower tells Congress that Facebook products harm kids and democracy. NPR. URL: https://www.npr.org/2021/10/05/1043207218/whistleblower-to-congress-facebook-products-harm-children-and-weaken-democracy [accessed 2023-09-13]
  8. González-Bailón S, Lazer D, Barberá P, Zhang M, Allcott H, Brown T, et al. Asymmetric ideological segregation in exposure to political news on Facebook. Science. Jul 28, 2023;381(6656):392-398. [CrossRef] [Medline]
  9. Nyhan B, Settle J, Thorson E, Wojcieszak M, Barberá P, Chen AY, et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature. Aug 2023;620(7972):137-144. [FREE Full text] [CrossRef] [Medline]
  10. Guess AM, Malhotra N, Pan J, Barberá P, Allcott H, Brown T, et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science. Jul 28, 2023;381(6656):398-404. [CrossRef] [Medline]
  11. Guess AM, Malhotra N, Pan J, Barberá P, Allcott H, Brown T, et al. Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science. Jul 28, 2023;381(6656):404-408. [CrossRef] [Medline]
  12. Horwitz J. Facebook says its rules apply to all. Company documents reveal a secret elite that’s exempt. The Wall Street Journal. URL: https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline [accessed 2023-09-13]
  13. Wells G, Horwitz J, Seetharaman D. Facebook knows Instagram is toxic for teen girls, company documents show. The Wall Street Journal. URL: https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739?mod=hp_lead_pos7&mod=article_inline [accessed 2023-09-13]
  14. Hagey K, Horwitz J. Facebook tried to make its platform a healthier place. It got angrier instead. The Wall Street Journal. URL: https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=article_inline [accessed 2023-09-13]
  15. Scheck J, Purnell N, Horwitz J. Facebook employees flag drug cartels and human traffickers. The company's response is weak, documents show. The Wall Street Journal. URL: https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953?mod=article_inline [accessed 2023-09-13]
  16. Fisher M. The chaos machine: the inside story of how social media rewired our minds and our world. Tijdschr voor Communicatiewetenschap. 2022;51(4):427-430. [CrossRef]
  17. Milan S, Treré E. The rise of the data poor: the COVID-19 pandemic seen from the margins. Soc Media Soc. Jul 2020;6(3):2056305120948233. [FREE Full text] [CrossRef] [Medline]
  18. Gillett R, Stardust Z, Burgess J. Safety for whom? Investigating how platforms frame and perform safety and harm interventions. Soc Media Soc. Dec 15, 2022;8(4):205630512211443. [FREE Full text] [CrossRef]
  19. Susser D. Ethical considerations for digitally targeted public health interventions. Am J Public Health. Oct 2020;110(S3):S290-S291. [CrossRef]
  20. Moreno MA, Goniu N, Moreno PS, Diekema D. Ethics of social media research: common concerns and practical considerations. Cyberpsychol Behav Soc Netw. Sep 2013;16(9):708-713. [FREE Full text] [CrossRef] [Medline]
  21. Charles-Smith LE, Reynolds TL, Cameron MA, Conway M, Lau EHY, Olsen JM, et al. Using social media for actionable disease surveillance and outbreak management: a systematic literature review. PLoS One. 2015;10(10):e0139701. [FREE Full text] [CrossRef] [Medline]
  22. Korda H, Itani Z. Harnessing social media for health promotion and behavior change. Health Promot Pract. Jan 2013;14(1):15-23. [CrossRef] [Medline]
  23. Jackson M, Brennan L, Parker L. The public health community's use of social media for policy advocacy: a scoping review and suggestions to advance the field. Public Health. Sep 2021;198:146-155. [CrossRef] [Medline]
  24. Brady SR, Young JA, McLeod DA. Utilizing digital advocacy in community organizing: lessons learned from organizing in virtual spaces to promote worker rights and economic justice. J Community Pract. May 18, 2015;23(2):255-273. [CrossRef]
  25. Broniatowski DA, Jamison AM, Qi S, AlKulaib L, Chen T, Benton A, et al. Weaponized health communication: twitter bots and Russian trolls amplify the vaccine debate. Am J Public Health. Oct 2018;108(10):1378-1384. [CrossRef] [Medline]

Edited by A Mavragani; submitted 12.06.23; peer-reviewed by J Cappella, J Alvarez-Galvez, E Miyagi; comments to author 03.08.23; revised version received 23.09.23; accepted 17.11.23; published 29.12.23.

Copyright

©Philip M Massey, Regan M Murray, Shawn C Chiang, Alex M Russell, Michael A Yudell. Originally published in JMIR Public Health and Surveillance (https://publichealth.jmir.org), 29.12.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Public Health and Surveillance, is properly cited. The complete bibliographic information, a link to the original publication on https://publichealth.jmir.org, as well as this copyright and license information must be included.