Review
Abstract
Background: Despite the wide variety of studies that have focused on the recent COVID-19 infodemic, defining health mis- or disinformation remains a challenge due to the dynamic nature of the social media ecosystem and, in particular, the different terminologies from different fields of knowledge.
Objective: In this work, we aim to develop a conceptual framework of health misinformation during pandemic contexts that will enable the establishment of an interoperable definition of this concept and consequently a better management of these problems in the future.
Methods: We conducted a systematic review of reviews to develop a conceptual framework for health misinformation during the pandemic context as a case study.
Results: This review comprises 51 reviews, from which we developed a conceptual framework that integrates 6 key domains—sources, drivers, content, dissemination channels, target audiences, and health-related effects of mis- or disinformation—offering a structured approach to analyzing and categorizing health misinformation.
Conclusions: Our results highlight the complexity and multifaceted nature of health disinformation and underscore the need for a common language across disciplines addressing this global problem in order to use interoperable definitions and advance this evolving field of study. By offering a structured conceptual framework, we also provide a valuable foundation for interventions aimed at surveillance, public communication, and digital content moderation in future health emergencies.
doi:10.2196/62693
Keywords
Introduction
The rapid expansion of the internet and, more recently, the widespread adoption of social media platforms have led to an unprecedented acceleration in the production and dissemination of information. While these digital ecosystems have facilitated access to knowledge, they have also fostered the proliferation of misinformation. This phenomenon has increasingly drawn the attention of researchers, particularly within the social and behavioral sciences [,]. Over time, interest in the effects of misinformation on human behavior and social interaction has extended beyond these disciplines to fields such as health sciences, computer science, physics, and mathematics, where scholars seek to understand the cascading effects of false or misleading information [-]. The COVID-19 pandemic further amplified this issue, as the global health crisis created a fertile ground for the spread of misinformation. Uncertainty, urgency, and the high demand for scientific evidence contributed to the rapid circulation of misleading claims, ranging from misconceptions about disease prevention to false treatments and conspiracy theories [,]. Moreover, both political figures and scientists, whether intentionally or unintentionally, played a role in amplifying misinformation, sometimes by disseminating preliminary findings without sufficient scientific scrutiny [-]. The increasing presence of automated agents, such as social bots and artificial intelligence (AI) tools based on deep learning and large language models, has further complicated the landscape by enabling the mass production and dissemination of misleading content [,].
Studies on health misinformation have emerged as a specific subfield that focuses on social and health-related topics such as vaccines, epidemics and pandemics, drugs and new tobacco products, noncommunicable diseases, eating disorders and diets, and new health treatments [,]. However, although the most frequent topics of debate are relatively well identified, in scientific practice, we still encounter many difficulties in delimiting the concept of health misinformation, which is ultimately affecting the advancement of research in this realm of knowledge []. This underscores the need for a comprehensive conceptual framework capable of integrating the multifaceted elements of misinformation—its origins, pathways, and impacts—to provide both clarity and actionable insight. In fact, we can find all kinds of definitions depending on the field and focus of the study, the analytical approach, or the type of materials analyzed (eg, internet texts, tweets, posts, and online videos, among others) []. On the one hand, the highly dynamic nature of social media ecosystems makes it difficult to create a stable definition that characterizes all the agents involved, the topics of conversation, the channels of misinformation, and the respective health effects [,]. On the other hand, there is a differential use of terminology from different fields of knowledge that makes it difficult to configure a unitary language that allows us to identify the current evidence in this field of study [].
While the concept of health misinformation has been around since the early days of the first social media platforms [], the COVID-19 pandemic took the term to a dimension never seen before. During this period, health misinformation had a clear impact on the management of the health crisis across countries and population groups [,], but it also significantly affected the scientific field and, particularly, the way scientific evidence is generated. Indeed, the urgency to generate evidence that would lead us out of the health crisis contributed to an increase in the rate of scientific production, in some cases reducing the quality of studies []. With the pandemic, studies based on opportunistic samples, rapid reviews, and studies led by scientists with no previous experience in public health proliferated; despite their altruistic purpose, these may have indirectly contributed to the increase in misinformation on health issues fundamental to the proper management of the crisis [-].
Consequently, although research on health misinformation and its impact on health systems has become more prevalent, clarifying the concept continues to be difficult given the constantly changing landscape of social media and the wide array of health-related issues it involves, especially within the COVID-19 context []. To navigate this complexity, we adopt an inclusive definition of health misinformation in this study, considering any health-related claim that is based on anecdotal evidence, lacks scientific validation, or is misleading due to the absence of empirical support [], regardless of whether it was disseminated intentionally or unintentionally. Accordingly, our definition distinguishes between 2 primary forms: misinformation, which consists of false information shared without the intent to cause harm [], and disinformation or malinformation, which involves the deliberate creation or manipulation of false or partially true information with the purpose of misleading or harming specific individuals, social groups, institutions, or nations [,].
Although efforts have been made in the existing literature to construct both conceptual frameworks and taxonomies on misinformation, for example, linked to specific topics such as vaccines [,] or preventive measures to combat misinformation [-], to the best of our knowledge, there is still no conceptual model that allows for an adequate operationalization of the concept of health misinformation. To address this research gap, the present study aims to develop a comprehensive conceptual framework for health misinformation in the context of pandemics. This tool seeks to provide an interoperable definition that integrates the key dimensions of misinformation—sources, drivers, message content, dissemination channels, audiences, and health-related effects—while addressing the interdisciplinary nature of the problem. By offering a structured approach to understanding and categorizing misinformation, this study contributes to the advancement of research in this field and provides a foundation for the development of strategies to manage and mitigate misinformation in future health crises. In particular, we selected the case of COVID-19 for several reasons: (1) the wide range of agents that have been involved in the pandemic (health professionals, academics, politicians, influencers, etc); (2) the broad impact that health misinformation has had as global health phenomenon; and (3) the usefulness of a conceptual framework that can serve as a basis for the semantic interoperability of the term in interdisciplinary scientific fields and for the management of future health emergencies. Thus, the final purpose is to lay the foundations of a taxonomy of pandemic misinformation that can be extended and adjusted to other health topics, but specifically in the context of health crises.
To meet this objective, we conducted a systematic review of reviews (including systematic reviews, qualitative syntheses, and scoping reviews, among others), from which we extracted the fundamental dimensions composing our conceptual framework of health misinformation.
Methods
Study Design
The present systematic review of reviews was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [].
Search Strategy
We searched the following databases in January 2023, coinciding with the peak of the COVID-19 misinformation literature: PubMed, Scopus, Web of Science, Ovid, and Cochrane. The search was limited to papers published before January 10, 2023. The search strategy consisted of 2 parts combined with Boolean operators: 1 focused on COVID-19 and the other on misinformation. The following keywords were used to develop the search strategy: (1) for COVID-19: “covid 19”, “covid-19”, “sars-cov-2 infection”, “2019 novel coronavirus disease”, “2019 novel coronavirus infection”, “2019-ncov disease”, “2019 ncov disease”, “2019-ncov diseases”, “covid-19 virus infection”, “covid 19 virus infection”, “covid-19 virus infections”, “coronavirus disease 2019”, “coronavirus disease-19”, “coronavirus disease 19”, “severe acute respiratory syndrome coronavirus 2 infection”, “sars coronavirus 2 infection”, “covid-19 virus disease”, “covid 19 virus disease”, “covid-19 virus diseases”, “disease, covid-19 virus”, “2019-ncov infection”, “2019 ncov infection”, “2019-ncov infections”, “covid19”, “covid-19 pandemic”, “covid 19 pandemic”, “covid-19 pandemics”; (2) for misinformation: “inaccurate information”, “misleading information”, “seeking information”, “rumour”, “rumor”, “gossip”, “hoax”, “urban legend”, “urban legends”, “myth”, “fallacy”, “fallacies”, “conspiracy theories”, “conspiracy theory”, “malinformation”, “disinformation”, “misinformation”.
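The combination logic described above—synonyms joined with OR within each block, and the 2 blocks joined with AND—can be sketched as follows. This is an illustrative sketch only; the exact syntax submitted to each database differed (as noted below), and the shortened keyword lists here are for demonstration.

```python
def build_query(covid_terms, misinfo_terms):
    """Join each keyword block with OR, then combine the blocks with AND."""
    covid_block = " OR ".join(f'"{t}"' for t in covid_terms)
    misinfo_block = " OR ".join(f'"{t}"' for t in misinfo_terms)
    return f"({covid_block}) AND ({misinfo_block})"

# Abbreviated example using a subset of the keywords listed above
query = build_query(["covid-19", "covid19"],
                    ["misinformation", "disinformation"])
print(query)
```

The same helper could be rerun per database after adapting field tags and truncation rules to each platform's syntax.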
The search strategy was adapted to each database, and the key terms used in each database are summarized in .
Eligibility Criteria
The inclusion criteria encompassed studies addressing misinformation related to health within the pandemic context, covering topics such as COVID-19, vaccination, virus-related treatments, policies, and humorous content such as jokes and memes. Reviews published in peer-reviewed journals and written in English, Spanish, and French were considered eligible. Accepted types of literature reviews included narrative, qualitative, systematic, meta-analysis, scoping, and umbrella reviews that provided insights into information sources, messages, channels, audiences, or health outcomes.
Exclusion criteria involved nonresearch papers in scientific journals (abstracts, commentaries, opinions, editorials, letters, and books) and nonscientific content reviews, including those from social media, blogs, and newspapers. Additionally, publications that did not satisfy the inclusion criteria were excluded.
Data Extraction
The papers found were exported to Microsoft Excel 365 (Microsoft Corp) for duplicate removal and screening. Titles and abstracts were first examined by 2 reviewers (EO-M and JC-B) independently. These reviewers selected studies based on the inclusion criteria if the title or abstract contained pandemic, health, or misinformation-related terms. Papers without an abstract went directly to full-text screening. After this screening, the results were blinded so that researchers could discuss contradictions until agreement was reached. Subsequently, the first 30% of the full-text studies were reviewed twice to establish a uniform method. In case of disagreement, this was resolved by discussion and consensus among the reviewers. In case of persistent disagreement between reviewers (EO-M and JC-B), an additional protocol involving an external reviewer was applied (JA-G). Once the method to be followed was clear and defined, 4 reviewers (BR-F, CL-F, EO-M, and JC-B) screened the full text according to the inclusion and exclusion criteria. The references of the included papers were also screened to find new studies that met the selection criteria.
Data Synthesis
Descriptive information and information on misinformation during the COVID-19 pandemic were extracted from the selected papers into 2 tables. The first table included the title, authors, topics covered, year of publication, objectives, and a summary of the authors’ conclusions. For studies that did not include the conclusions in the abstract, a summary of the conclusions was prepared, taking into account the most relevant aspects. The second table collected information on misinformation during the COVID-19 pandemic: title, source, message, channel, audience, and result of the misinformation (). To analyze the misinformation data in this second table, each of its elements was classified. For this classification, some categories were combined because of their similarity. The mass media category, for example, covers newspapers, radio, and television. Within communication channels, certain social media platforms were also grouped: Twitter (subsequently rebranded X; X Corp) includes Twitter, Gab (Gab AI, Inc), Weibo (Weibo Corporation), and Parler; Instagram (Meta) includes Snapchat (Snap Inc) and Pinterest; Facebook (Meta) combines Facebook and Mondo Sporco; and WhatsApp (Meta) combines WhatsApp, Telegram (Telegram Messenger Inc), and WeChat. Similarly, among the consequences, a single category was formed combining reduced trust in government, scientists, and medical professionals.
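The category groupings described above amount to a lookup table from individual platforms to analysis categories. A minimal sketch, assuming the group names and member lists given in the text (this mapping is illustrative, not a published codebook):

```python
# Grouped channel categories as described in the Data Synthesis section
CHANNEL_GROUPS = {
    "Twitter": ["Twitter", "Gab", "Weibo", "Parler"],
    "Instagram": ["Instagram", "Snapchat", "Pinterest"],
    "Facebook": ["Facebook", "Mondo Sporco"],
    "WhatsApp": ["WhatsApp", "Telegram", "WeChat"],
    "Mass media": ["Newspapers", "Radio", "Television"],
}

# Invert to a flat platform -> group lookup
PLATFORM_TO_GROUP = {p: g for g, members in CHANNEL_GROUPS.items()
                     for p in members}

def classify_channel(platform: str) -> str:
    """Map a platform to its analysis category; ungrouped platforms keep their own name."""
    return PLATFORM_TO_GROUP.get(platform, platform)
```

For example, `classify_channel("Parler")` returns `"Twitter"`, while a platform not listed in any group (eg, `"YouTube"`) falls through unchanged.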
The development of the conceptual framework was guided by a thematic synthesis approach, involving iterative coding and integration of topics across included reviews (ie, papers mentioning misinformation sources, specific message content, drivers, channels, effects, etc). Through consensus discussions among the research team, recurring patterns were grouped into 6 dimensions that represent the underlying mechanisms and manifestations of health misinformation.
Quality Assessment of Included Reviews
To evaluate the methodological quality of the studies included in this review, we used the QUEST (Quality Evaluation of Scientific Texts) Checklist [], a comprehensive appraisal tool specifically designed to assess the rigor and transparency of empirical research across diverse methodological designs, including quantitative, qualitative, and mixed-methods studies. QUEST comprises 20 items grouped into 6 core domains: theoretical framework, research questions and objectives, methods, results, discussion, and additional information. Each item is scored as “yes,” “no,” or “not applicable,” enabling a structured assessment of key quality indicators such as theoretical grounding, appropriateness of study design, clarity of analytical techniques, reporting of results, discussion of limitations, and ethical considerations. This instrument was chosen for its adaptability to interdisciplinary research designs.
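The per-study tally implied by this scoring scheme is straightforward. A minimal sketch, assuming each study is coded as a list of 20 item responses ("yes", "no", or "na"); the QUEST checklist defines the items themselves, not this helper:

```python
from collections import Counter

def quest_tally(responses):
    """Count yes/no/not-applicable responses for one study's 20 QUEST items."""
    if len(responses) != 20:
        raise ValueError("QUEST has 20 items")
    counts = Counter(responses)
    return {"yes": counts["yes"], "no": counts["no"], "na": counts["na"]}
```

Averaging these tallies across studies yields summary figures such as the mean number of "yes" criteria reported in the Results section.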
Conceptual Framework
Based on the references of this review, we developed a conceptual framework. From our results, we identified the main dimensions and subdimensions addressing misinformation during the COVID-19 health crisis. All subdimensions were then ranked (assigning a corresponding score) according to their relevance within their dimension and their potential impact on the level of misinformation. Relevance was determined by the percentage of results within each respective dimension. Additionally, the research team assessed each subdimension’s potential impact on population-level misinformation, considering the benchmarks of our review. Each item in the conceptual framework received a score agreed upon by the research team involved in the development of the systematic review.
The conceptual framework was developed by categorizing misinformation sources based on their profile (human expert vs nonexpert and declared bot vs nondeclared) to capture their distinct roles in the dissemination process. Within bot-generated content, a key distinction was made between declared and nondeclared bots, as previous studies have shown that declared bots maintained a neutral tone during the pandemic, whereas nondeclared bots were more likely to spread negative narratives, criticize public health measures, and promote misinformation [,]. Additionally, the audience dimension incorporates health literacy levels, recognizing the higher susceptibility of vulnerable groups to misinformation. This integration is supported by existing research, which has consistently demonstrated that lower levels of health literacy are strongly associated with greater exposure to and belief in health-related misinformation. By structuring the framework around these factors, this model provides a comprehensive approach to understanding the pathways through which misinformation spreads and impacts public health decision-making [,].
Results
Selection of Studies
We identified a total of 12,794 results, from which 5402 remained after removing duplicates (). Then, we screened titles and abstracts, selecting 556 papers for full review. Finally, after full-text examination, 51 papers were included in our review. Applying the QUEST instrument to our sample of included reviews, we found that most studies demonstrated high methodological quality. Across the evaluated papers, the mean number of criteria satisfied (“yes”) was 16.4 of 20, with an average of 2 “no” and 1.8 “not applicable” responses per study. Most studies provided adequate conceptual frameworks, clearly stated research questions and aims, and appropriately described their methods and analyses. However, some limitations were observed, particularly in the justification of sample sizes and the explicit reporting of data availability for replication. Detailed quality assessment results for each study are provided in .

The QUEST instrument demonstrated high reliability, with reported intraclass correlation coefficients exceeding 0.98 and a Cronbach α of .99, indicating excellent interrater consistency and internal coherence.
provides a visual depiction of the different categories that make up the 6 domains (or dimensions) that we identified as characterizing the concept of health misinformation during the pandemic context. This group of elements is composed of (1) the different sources of misinformation, (2) the drivers, (3) the specific messages, (4) the propagation channels, (5) the various audiences, and (6) the respective social and health effects. Around this set of 6 dimensions, we detected a wide variability of health misinformation traits that, in this specific context, are mentioned and combined interchangeably across the 51 selected papers.

Sources of Health Misinformation
One of the most critical aspects of health misinformation is identifying its origin. During the COVID-19 pandemic, the sources of health misinformation most frequently mentioned in the literature were those related to governmental and political actors (26%) [-]. Among them, Donald Trump was mentioned in several papers as a prominent figure in the spread of misinformation [,,,]. These were followed by individuals linked to protest movements (13%) [,-,,,-], composed mainly of antivaxxers, and scientists (12%) [,,,,,,,-] (including the Nobel Prize winner Luc Montagnier []). However, other sources were also mentioned, such as famous people (11%) [,,,,,,-], (declared and nondeclared) social bots or new AI tools (8%) [,,-], health organization experts (7%) [,,,,,], mass media opinion leaders (6%) [,,,,], religious leaders (6%) [,-,], people of low socioeconomic status (4%) [,-,], and social media community members (4%) [,,].
Drivers of Health Misinformation
The studies highlight several underlying motivations behind the dissemination of health misinformation. During the COVID-19 pandemic, the drivers of health misinformation were fundamentally divided between the pursuit of political interest (22%) [,,,,,,,,,], economic interest (22%) [,,,,,,-] (eg, claiming that the CDC was exaggerating the pandemic to damage Donald Trump’s reputation, or selling fraudulent products, respectively [,,]), and psychological satisfaction (16%) [,,,,,,] or social status (11%) [,,,,] (eg, influencers or social media users who tried to gain attention on these new platforms []), which, to a certain extent, corresponded to the typology of topics mentioned as sources of misinformation in the previous section. However, misinformation was also observed as a byproduct of health prevention efforts by (scientific or health) experts, policymakers, and health organizations (11%) [,,,,], that is, misleading, false, or erroneous information that may have been produced unintentionally [,,]. In this sense, we found a clear division between misinformation for selfish purposes (ie, for personal or institutional gain in a context of crisis) and misinformation for altruistic purposes (ie, for helping others). Also mentioned among the drivers was the need of troll users to damage reputations, create chaos, or directly annoy others (9%) [,,,], or to entertain (9%) [,,,].
In short, informative drivers showed a lower risk of misinformation, whereas others, such as economic and political drivers (which could have a greater impact on the opinions of the population) or those aimed directly at generating chaos and damaging the image of individuals, groups, or institutions, presented a greater risk.
Content of Health Misinformation Messages
The health misinformation circulating during the COVID-19 pandemic covered a wide range of health-related topics. In the content of the messages, misinformation was observed in relation to vaccines (24%) [,,-,,,,,-,-,,-,-], where several papers linked vaccines to the insertion of 5G microchips [,,,,,] or to possible side effects. Furthermore, among the prevalent misinformation were various theories about the origin of COVID-19 (19%) [,,,,,,,,,,-,,,,,,,-,,,], including unfounded claims suggesting that COVID-19 is a biological weapon [,,,,,] or is produced by the 5G network [,]. Additionally, we found health misinformation about certain treatments and home remedies (17%) [-, , , , , -, , , , , , , , , , , -], such as the use of herbal medicines [,,,] or even the use of cow urine [,]. Among these home remedies, there were also references to the use of sodium hypochlorite (ie, bleach) and alcohols to prevent COVID-19 infection [,,,,] or even to eliminate its effects after infection [,,]. Other relevant topics were possible preventive measures (11%) [,-,,,-,,,,,-], theories about the transmission of the new disease (11%) [,-,,,-,,,,,-], denialist messages about COVID-19 (8%) [,,,,,,,,,,], theories about immunity (6%) [,,-,,,], including arguments about the supposed natural immunity [,,] or ethnic immunity [,,,] of certain groups (ie, vegan immunity []), misinformation about diagnoses (2%) [,,], and even messages with religious content (1%) [,], specifically religious conspiracies, such as the false claim that the COVID-19 vaccine contained gelatin derived from pigs, which raised concerns among Muslim communities [].
Among the “less harmful” messages, reference was made to humorous topics (satire or parody), which, although not generally intended to cause harm [,,,], had a certain potential to mislead, as humor could also be used to intentionally spread rumors and conspiracies in a simple way. Other studies referred to false connections (when, as with clickbait, headlines do not support the content), misleading content framing information (the selective use of images, quotes, or statistics), or false contexts (when genuine content is shared with false contextual information) [,,,], practices that, as some studies point out, seem to undermine trust in the media and promote polarization. At a higher level of misinformation, studies were also located that mention impostor content (the impersonation of genuine sources, for example, through phishing and smishing techniques), direct manipulation of content (deliberate modification of information or images with the aim of deceiving), or, in the most extreme cases, content fabrication, which involves the creation of totally false content with the purpose of deceiving and causing harm (as, for example, in the case of so-called deepfakes) [,].
Health Misinformation Dissemination Channels
A key factor in understanding health misinformation is analyzing how it spreads. Among the main channels of health misinformation, social media were frequently mentioned, which, as a whole, were identified by 80% of the studies analyzed. Many studies referred to social media platforms as the main health misinformation dissemination channel (22%) [,-,,,,-,-,-,,,,,-,,,,]; however, this percentage varies depending on specific platforms such as Twitter (15%) [, , , , , , , , , , , -, -, ], Facebook (13%) [, , , , , -, , , , , -, , , ], YouTube (Google LLC; 12%) [, , , , -, , , , , , , , , , , ], WhatsApp and Telegram (8%) [, , , , , , , , , , , , ], Instagram (5%) [, , , , , , , ], TikTok (5%) [, , , , , -] and Reddit (Reddit, Inc; 4%) [, , , , , ]. Reference was also made to mass media news (9%) [, , , , , , , , , , , , , , ], misinformation spread through blogs (3%) [, , , , ], direct contacts in face-to-face relationships (2%) [, , , ], and other sources (2%) [, , , ], among which scientific media or web-based platforms such as LinkedIn were also mentioned [].
Regarding channels, much of the misinformation circulated within social networks and various online platforms, while it was less common among official sources such as health organizations and the mass media.
Target Audiences
Although exposure to health misinformation cuts across social groups, as different studies also indicate (13% of the studies pointed to the general population [,,-,,,,,,,,,]), a greater predisposition to misinformation was identified among socially vulnerable population groups. For example, 20% of the studies pointed to groups with low socioeconomic status [,,,,,,,,,,,,,,,,,,,,], ethnic minorities (10%) [,,,,,,,,,,], high religiosity groups with low scientific literacy (9%) [,,,-,,,], young people because of their greater exposure to online media (9%) [,,,,,,,,,], extreme ideological groups (ie, commonly partisan individuals who are ideologically positioned and less critical of their own groups; 9%) [,,,,,,,,,], people with low scientific and health literacy (8%) [,,,,,,,,], women (6%) [,,,,,,], people with illness or disability (6%) [,,,,,], and older adults (5%) [,,,,], while others pointed to the greater exposure of men (4%) [,,,]. In this sense, despite the generalized susceptibility of the population to health misinformation, a higher risk was observed in groups with lower socioeconomic status (including vulnerable groups such as ethnic minorities) and high religiosity (9%), as well as in individuals more exposed to extreme political positions and with lower scientific literacy (8%).
Among the various types of audiences, health literacy emerged as a fundamental articulating variable in how audiences process and evaluate health content. For this reason, to facilitate classification in the conceptual framework, we opted for this indicator divided into categories ranging from no health literacy (0 points) to high health literacy (4 points).
Social and Health Effects of Misinformation
The impact of health misinformation extends beyond simple misconceptions, influencing both social trust and individual behaviors. The effects of health misinformation identified in the specialized literature were varied. Among the most relevant impacts identified were the increase in doubts about vaccination (13%) [,,,,,,,,,,,,,,-,], the violation of norms established by political agents (11%) [,,,-,,,,-,,,,], the generalized reduction of trust in government, politicians, health institutions, and scientists (10%) [,,-,,,,,,,,,,,], and the increase in vaccine refusal behavior (8%) [,-,,,,,,,,,,]. The literature also noted the significant effects that health misinformation may have had on the mental health of the population. These effects included an increase in fear and panic among the population (9%) [, , , , , , , , , , , , , , ], anxiety (7%) [,,,,,,,,,,,], incorrect health decision-making [,,,,,,,,,,,,] (8%; eg, an increase in advanced lung cancer cases among patients due to hesitancy to approach health care facilities []), stress (6%) [,,,,,,,,], depression (3%) [,,,,,], changes in risk perception (3%) [,,,,], confusion (2%) [,,,], worsening of general mental health (2%) [,,], an increase in suicides (2%) [,,,], loneliness (1%) [,], and addictions (1%) [,]; mental health–related problems together accounted for 44% of the impacts identified. In addition, impacts of health misinformation on discrimination (3%) [,,,,], propagation of erroneous or poor quality health information (3%) [,,,,], conspiracy theories about the COVID-19 pandemic (3%) [,,,,,], impulsive buying of food and consumables (2%) [,,,], and shortages of medicines (2%) [,,,] were also mentioned.
Conceptual Framework
presents a conceptual framework on COVID-19 misinformation that comprehensively systematizes the 6 identified dimensions characterizing the various types of health misinformation that emerged during the recent pandemic: (1) sources; (2) drivers; (3) types; (4) channels; (5) audiences; and (6) effects of health misinformation on COVID-19. The development of these dimensions drew on the results of this review, as well as additional references for the source and audience dimensions. First, sources are divided into human sources [,,,,,] and artificial sources (social bots or AI) [,,,-], both of which may or may not be experts. In the case of artificial sources, we distinguish between declared and nondeclared sources, that is, social bots or AIs that self-identify as bots (usually for informational purposes) and those acting for some covert purpose. This basic distinction allows us to differentiate between expert and nonexpert sources, while also separating the positive work of social bots (eg, providing information) from the negative effects that covert bots or AIs might generate.

In the case of drivers, we move along a continuum ranging from offering health information with the goal of helping [,,,,] to the extreme case of trolling [,,,], whose objective is to create chaos (ie, damaging reputations or annoying others, sometimes without even a clear interest beyond personal satisfaction). Humorous messages [,,,] are defined as the least harmful, assuming that audiences can grasp the double meaning of the information (although they are logically not free of misunderstandings), while at the other extreme we have content fabrication. The channel dimension is divided along a gradient from health expert communication channels to the open use of information by non–health expert groups, covering the following categories: social media [,,,,,], health websites, health apps, health forums, mass media [,,,], and health authorities, the latter being the least likely to misinform. Taking into account the wide diversity of profiles among misinformation target groups, the types of audiences are delimited by level of health literacy, and the effects are as follows: no effects, cognitive effects, affective effects, behavioral effects, and, finally, direct impact on physical and mental health [,,,,,,]. provides a full operational description of the 6 dimensions.
| Dimension, application, and category | Example |
| --- | --- |
| **Source** | |
| *Source profiling and classification to detect recurrent misinformation actors* | |
| Undeclared bot or AIa | Automated Twitter account spreading antivaccine narratives |
| Declared bot or AI | Verified health bot disseminating WHOb updates |
| Nonexpert human | Celebrity sharing unverified home remedy |
| Expert human | Physician issuing early advice without evidence |
| **Drivers** | |
| *Intent detection via motive-based message tagging* | |
| Create chaos or trolling | Troll account spreading fake death rates to incite panic |
| Political | Post undermining public health policy during election |
| Economic | Promoting false cure to sell supplements |
| Psychological | Seeking attention via viral misinformation |
| Informative | Health worker unintentionally sharing outdated guidelines |
| **Message** | |
| *Risk-level classification of misinformation types based on verifiability and fabrication* | |
| Fabricated content | Claim that vaccines contain surveillance chips |
| Manipulated content | Edited video misrepresenting a public health official |
| Imposter content | Fake website mimicking the CDCc logo |
| False context | Old photo presented as a current event |
| Misleading content | Selective use of statistics to discredit vaccines |
| False connection | Clickbait headline about vaccine danger |
| Satire or parody | Humorous meme misinterpreted as fact |
| **Channel** | |
| *Channel mapping and platform-specific risk alerting* | |
| Social media | Facebook groups spreading antimask claims |
| Health websites | Alternative health blog promoting nonevidence-based remedies |
| Health apps | Unregulated app suggesting treatments based on symptoms |
| Health forums | Discussion thread recommending dangerous practices |
| Mass media | Sensationalist television report linking vaccines to infertility |
| Health authorities | Official health bulletin (less likely to misinform) |
| **Audience** | |
| *Segmentation of target populations by health literacy or vulnerability* | |
| Not health literate or fanatic | Users in conspiracy groups believing 5G causes COVID-19 |
| Low health literate | Older adults forwarding chain messages on WhatsApp |
| Mid health literate | General public with moderate science education |
| Highly health literate | Medical professionals engaging in fact-checking |
| **Outcome** | |
| *Monitoring behavioral, emotional, or cognitive impacts postexposure* | |
| Health outcome | Delayed cancer diagnosis due to fear of hospitals |
| Behavioral effect | Refusal to wear masks or get vaccinated |
| Affective effect | Anxiety and panic from repeated exposure to false claims |
| Cognitive effect | Confusion about the safety of vaccines |
| No effect | Message ignored or dismissed due to lack of credibility |
aAI: artificial intelligence.
bWHO: World Health Organization.
cCDC: Centers for Disease Control and Prevention.
According to the different dimensions and categories, the resulting conceptual framework assigns numerical scores to different elements to represent their relative impact on the spread and consequences of health misinformation. These scores range from 0 to 20, where 0 indicates no impact and 20 represents the highest level of health misinformation risk. The source of misinformation is scored 0 or 1: expert or declared sources, whether human or artificial, receive the baseline score (0), while nonexpert or undeclared sources score 1. Drivers of misinformation range from unintentional misinformative motives (0) to specific motives for deceiving or harming others (3). Message types are ranked on a scale from 0 to 6, with satirical or parodic content receiving the lowest score (0) due to its lower likelihood of deception, while fabricated content ranks highest (6) as it represents fully false and intentionally misleading information. Dissemination channels are evaluated between 0 and 3, with health authorities at the lower end of the spectrum (formal channels) and social media at the upper end (informal channels), reflecting the latter's dominant role in the rapid and widespread diffusion of misinformation. The audience's health literacy level also affects susceptibility: individuals with high health literacy score the lowest (0), whereas fanatic audiences score the highest (3) in terms of vulnerability to misinformation. Finally, the outcomes of misinformation are structured into 5 levels, ranging from no effect (0), through cognitive, affective, and behavioral consequences, to severe health outcomes (4) that can ultimately follow from detrimental health decisions. As a rule, when a misinformation message falls into multiple categories within a given domain (eg, both mass media and social media as channels), the highest applicable score within the framework is used.
This approach ensures that the scoring system provides a consistent and quantifiable assessment of misinformation dynamics, thereby facilitating the identification of the most critical intervention points for mitigating its impact.
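The scoring logic described above can be sketched as a small function. The domain ranges (source 0-1, drivers 0-3, message 0-6, channel 0-3, audience 0-3, outcome 0-4) and the "highest applicable score" rule for multi-category messages come from the framework itself; however, the exact values of some intermediate categories (eg, the economic driver or the middle ranks of the message and channel scales) are not fully specified in the text and are assumed here for illustration, and all function and dictionary names are our own, not part of the original study.

```python
# Illustrative sketch of the Misinformation Risk Assessment Score (MRAS).
# Scores marked "assumed" are interpolations not stated in the framework.
DOMAIN_SCORES = {
    "source": {"expert or declared": 0, "nonexpert or undeclared": 1},
    "driver": {"informative": 0, "psychological": 1, "political": 2,
               "economic": 2,  # assumed: comparable to political motives
               "trolling": 3},
    "message": {"satire or parody": 0, "false connection": 1,      # assumed
                "misleading content": 2, "false context": 3,       # assumed
                "imposter content": 4, "manipulated content": 5,   # assumed
                "fabricated content": 6},
    "channel": {"health authorities": 0, "health websites": 1,     # assumed
                "health apps": 1, "health forums": 1,              # assumed
                "mass media": 2,                                   # assumed
                "social media": 3},
    "audience": {"highly health literate": 0, "mid health literate": 1,
                 "low health literate": 2,
                 "not health literate or fanatic": 3},
    "outcome": {"no effect": 0, "cognitive effect": 1, "affective effect": 2,
                "behavioral effect": 3, "health outcome": 4},
}

def mras(labels):
    """Total risk score (0-20) for a message labeled per domain.

    `labels` maps each domain to one or more applicable categories; when a
    message falls into several categories within a domain, the highest
    applicable score is used, as the framework prescribes.
    """
    total = 0
    for domain, categories in labels.items():
        if isinstance(categories, str):
            categories = [categories]
        total += max(DOMAIN_SCORES[domain][c] for c in categories)
    return total

# First example from the table: microchip conspiracy spread by an
# undeclared bot on social media to a low health literate audience.
score = mras({
    "source": "nonexpert or undeclared",
    "driver": "political",
    "message": "fabricated content",
    "channel": ["social media"],
    "audience": "low health literate",
    "outcome": "behavioral effect",
})
print(score)  # 17, matching the table
```

Applied to the first example message (source 1, driver 2, content 6, channel 3, audience 2, effect 3), the function reproduces the reported total of 17.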
presents concrete examples of health misinformation messages and their classification according to the proposed conceptual framework. It also includes an estimated risk score based on the scale defined in , indicating their relative harmfulness across dimensions.
Based on the application of the conceptual framework to the examples in the table, the severity and potential impact of different types of health misinformation can be assessed. First, messages that combine highly fabricated content, dissemination through high-reach channels (eg, social media), and targeting of vulnerable audiences (eg, low health literacy or fanatic groups) consistently receive higher total risk scores. These examples, such as vaccine microchip conspiracies or bleach remedies, not only pose direct health risks but also undermine public trust and amplify uncertainty. Second, the source and driver dimensions are critical for distinguishing between unintentional misinformation (eg, experts presenting misinformative messages) and strategically crafted disinformation, with politically, economically, or trolling-motivated cases scoring significantly higher. Third, the framework demonstrates content validity in differentiating between harmful and low-impact content, as evidenced by the much lower score of a satirical meme, which carries minimal risk when its humorous intent is recognizable. Overall, the framework enables a nuanced evaluation that integrates both structural (source or channel) and psychosocial (audience or effect) factors, providing a robust foundation for prioritizing intervention strategies.
| Misinformation example, domain, and category | Examples | Risk score |
| --- | --- | --- |
| **1. “COVID-19 vaccines contain microchips for surveillance” (social bot, source: Telegram, in a low health literate group)** | | |
| Source | Undeclared bots | 1 |
| Driver | Political distrust | 2 |
| Content | Fabricated conspiracy | 6 |
| Channel | Telegram and Facebook | 3 |
| Audience | Low health literacy | 2 |
| Effect | Vaccine hesitancy | 3 |
| MRASa | Total | 17 |
| **2. “Bill Gates, pharmaceutical companies, 5G, vaccines, chips... it’s all connected. They want to control us.” (singer, source: Instagram, Facebook, and Twitter)** | | |
| Source | Famous person (nonexpert) | 1 |
| Driver | Psychological | 1 |
| Content | Fabricated conspiracy | 6 |
| Channel | Instagram, Facebook, and Twitter | 3 |
| Audience | General public (mid health literates) | 1 |
| Effect | Vaccine hesitancy | 3 |
| MRASa | Total | 15 |
| **3. “Hydroxychloroquine—I don’t know, it’s looking like it’s having some good results. That would be a phenomenal thing.” (politician, source: Twitter)** | | |
| Source | Political figures (nonexpert) | 1 |
| Driver | Political agenda | 2 |
| Content | Conspiracy theory | 6 |
| Channel | Mass or social media | 3 |
| Audience | Polarized groups (fanatic) | 3 |
| Effect | Polarization and distrust | 3 |
| MRASa | Total | 18 |
| **4. “COVID-19 is turning people into zombies” (humorist, source: television comedy)** | | |
| Source | Social media users (nonexpert) | 1 |
| Driver | Informative | 0 |
| Content | Satirical misinformation | 0 |
| Channel | Instagram and Twitter | 3 |
| Audience | General public (mid health literates) | 1 |
| Effect | Misinterpretation | 1 |
| MRASa | Total | 6 |
aMRAS: Misinformation Risk Assessment Scores.
Discussion
Principal Findings
Our systematic review of reviews of health misinformation during the COVID-19 pandemic provides a comprehensive characterization of the sources, drivers, message content, channels, audiences, and health and social outcomes associated with the spread of misinformation. The findings presented in this paper highlight the complexity and multifaceted nature of health misinformation, emphasizing the need for the use of a common language among the various disciplines addressing this global problem to use interoperable definitions.
The prominence of government and political actors, especially the mention of Donald Trump, as major sources of misinformation underscores the polarizing capacity of political figures on public perception []. Additionally, the influence of protest movements, scientific publications, social bots and AI, celebrities, health organizations, and mass media evidences the great diversity of sources that can contribute to the spread of misinformation [,,-]. This highlights the need to recognize these varied origins to develop interventions aimed at different social groups (eg, by limiting access to underage groups, through fact-checking of content, or the establishment of rules against the deliberate creation and manipulation of content []). However, it is striking to note the confusion between misinformation sources and misinformation channels. Health studies refer to social networks as “misinformation sources,” whereas, from the perspective of communication sciences and social sciences, they could be considered “misinformation channels.” In this sense, the need for a clear conceptualization becomes evident to address the problem adequately [], which points to the need to incorporate training programs aimed at the adequate dissemination of health information by professionals [].
In the case of drivers, the identification of political, economic, and psychological motivations as determinants of health misinformation in the context of the recent pandemic reaffirms the interconnectedness of misinformation with broader societal issues (national elections, political debates, new markets and products, viral marketing strategies, and new ways of working and relating to others on social networking platforms). Similarly, it is worth highlighting the relevance of the unintentional production of misinformation for disease prevention and health promotion (including from the perspective of health experts). Although the ultimate goal of these forms of misinformation is usually altruistic (assuming that the aim of professionals is to help), it can also have a significant impact on opinions, attitudes, and behaviors that, directly or indirectly, can affect the health of the population when the evidence is erroneous [,].
The wide variety of misinformation, covering topics such as vaccines, treatments, the origin of COVID-19, preventive measures, transmission theories, denialist messages, immunity, diagnoses, and religious content, underscores the adaptability of misinformation to different dimensions of the pandemic []. This wide range of misinformation highlights the multifaceted nature of the infodemic [], where false or misleading information infiltrates various aspects of public health discourse. The variety of issues uniquely linked to the pandemic demonstrates the difficulty of developing a single definition of health misinformation [], a fact that again underscores the need for a shared conceptual framework upon which to advance the development of concrete and tailored definitions of different health issues susceptible to misinformation (vaccines, eating disorders, treatments, among others). These definitions should not only recognize the complexity of the sources and drivers of misinformation but also provide an appropriate framework for developing effective interventions to mitigate the impact of misinformation on health [].
The significant role played by social media platforms, including Twitter, Facebook, YouTube, WhatsApp, and Telegram, in the spread of misinformation underscores the pressing need for collaborative efforts between these platforms and regulatory authorities to effectively mitigate the spread of false information. The dynamic social media ecosystem demands a nuanced understanding of the intricate relationship between traditional mass media and emerging social networking platforms [,]. While these platforms offer unprecedented reach and immediacy, they also present challenges related to the rapid circulation of information, the amplification of certain narratives, and the potential for the viral spread of misinformation []. In navigating this complex landscape, it is crucial to recognize that social media platforms operate as influential channels for information consumption, shaping public perceptions and influencing attitudes on a global scale []. As observed in this study, efforts to address misinformation should not only focus on these platforms in isolation but also consider the broader media ecosystem, where mass media news plays a role alongside social media, and direct interpersonal communication. In this context, a holistic approach to addressing health misinformation involves collaboration between social media platforms, traditional media outlets, health authorities, and other stakeholders [,]. This collaboration is essential for developing comprehensive strategies that encompass content moderation, fact-checking mechanisms, and educational campaigns aimed at promoting media literacy []. Understanding the synergies and interplay between mass media and social media is fundamental for creating effective interventions that can withstand the challenges posed by the rapid dissemination of information in the digital age []. 
Therefore, an integral and collaborative approach is necessary to build resilience against the detrimental effects of misinformation on public perception and, ultimately, public health.
Health misinformation affects the entire population, but it is crucial to recognize specific vulnerable groups, such as those with low incomes, ethnic minorities, and people with low scientific literacy. These groups face unique challenges due to socioeconomic disparities, cultural differences, and unequal access to quality education and scientific knowledge. Therefore, addressing misinformation in these groups requires adapted strategies considering the intersectionality of their social profiles []. For example, people with low socioeconomic status may encounter barriers to accessing accurate information, ethnic minorities may face cultural misunderstandings, and even highly educated people may sometimes accept certain information overconfidently and uncritically. Thus, understanding the differential susceptibility among these population groups is critical to designing targeted interventions and, ultimately, building their resilience against misinformation.
Additionally, the diverse and wide-ranging effects of health misinformation on trust in institutions, vaccination rates, adherence to (sometimes misguided) treatments, adherence to health norms, and mental health highlight the need to address this problem from a global perspective. The substantial effects on mental health outcomes, including fear, stress, anxiety, and depression (especially in young groups), emphasize the urgent need for mental health support services in future infodemics.
Given the growing risk of future infodemics, our proposed conceptual framework offers not only a systematized understanding of health misinformation but also a foundation for developing actionable tools. It can inform the design of real-time surveillance systems that monitor emerging misinformation by categorizing it according to its source, content, or dissemination channel. Similarly, it provides valuable guidance for crafting targeted public health messaging aimed at vulnerable audiences identified as being at greater risk. The framework also has applications in enhancing content moderation systems by enabling the detection of high-impact misinformation patterns—for instance, fabricated claims spread by undeclared bots directed at individuals with low health literacy. By classifying misinformation across its key dimensions, more precise and effective countermeasures can be implemented to limit its reach and mitigate its consequences. These practical applications underscore the framework’s relevance for navigating the complex and evolving landscape of digital misinformation. Despite increased scholarly interest, few studies have contextualized the problem with such specificity. Existing models tend to focus on isolated aspects or topics, whereas our framework introduces an integrated and operational structure applicable to both academic analysis and intervention strategies [-]. Therefore, our framework enables systematic classification across all 6 domains, facilitating tailored responses such as educational initiatives, content moderation, or targeted fact-checking.
Our study contributes to the current debate on misinformation by offering a specific health-oriented conceptual framework, contrasting with broader approaches focused on science-related misinformation [,]. While the academic consensus has identified the multiple misinformation sources, dissemination mechanisms, and impacts of misinformation on science, our work refines this understanding by specifically addressing the health crisis context and, in particular, the impact of health misinformation on attitudinal and health outcomes at the individual level. While previous studies have developed taxonomies and conceptual frameworks on misinformation, often focusing on general classifications, media dissemination, or the risks associated with medical communication and social media, this work proposes a more integrative approach by providing a structured framework specifically tailored to health misinformation, which underscores the necessity of a common language and semantic interoperability. Therefore, this conceptual framework has the potential to enhance the management of current and future health emergencies (ie, epidemics, pandemics, natural disasters, and humanitarian crises) by facilitating the rapid identification and classification of health misinformation dimensions within the evolving digital communication landscape.
Beyond identifying the 6 individual dimensions, our findings suggest important interrelations among these elements that shape the dynamics of health misinformation. For instance, certain sources (such as political figures or nondeclared bots) often align with specific drivers such as political or economic agendas, thereby amplifying misleading content through high-reach channels such as social media. Similarly, vulnerable audiences characterized by low health literacy may be disproportionately exposed to complex misinformation narratives, intensifying affective and behavioral health impacts. These interconnected pathways underscore that the spread and impact of health misinformation are not merely additive but synergistic, with drivers and channels interacting to reinforce certain message types and ultimately influence trust, perceptions, and health behaviors. Recognizing these interdependencies provides a more holistic understanding of how misinformation ecosystems function, which is essential for designing targeted, multidimensional interventions.
Limitations and Strengths
This study has several limitations. First, given the rapid advancement of technology, recent publications on bots, AI, or large language models may have emerged after the completion of our review. Second, while misinformation studies have historical roots dating back to the early 20th century, their interaction with evolving technologies and digital health communication is a more recent phenomenon. The expansion of online platforms and social media has significantly altered the drivers and mechanisms of misinformation spread, adding layers of complexity to its study. Additionally, a language bias is present, as only publications in English, Spanish, and French were included. This may have limited the global scope of our findings, potentially excluding valuable perspectives from studies published in other languages, which could provide deeper insights into the spread of health misinformation across diverse cultural and geopolitical contexts. Moreover, future studies should validate the conceptual framework developed here in other cultural and linguistic contexts.
Despite these limitations, this study also presents several notable strengths. While technological advancements and linguistic constraints may have influenced the scope of our review, our work lays the groundwork for a taxonomy that, although initially developed in the context of the COVID-19 pandemic, can be adapted for future pandemic preparedness. Moreover, the synthesis of findings from this review underscores the urgent need for interoperable definitions of health misinformation for improved measurement of this diffused phenomenon. Given the complexity and evolving nature of misinformation, definitions should be flexible yet aligned with a common framework, facilitating comparisons across different dimensions, including sources, drivers, content, channels, audiences, and impacts. This study highlights that addressing health misinformation and its negative effects requires a multilateral approach involving researchers, governments, social media platforms, health organizations, and communities, as well as the necessity for targeted interventions and educational campaigns tailored to specific demographic groups.
Conclusions
This study highlights the complexity and multifaceted nature of health misinformation during the COVID-19 pandemic, emphasizing the need for an interoperable conceptual framework to facilitate the identification, measurement, and management of this socially harmful phenomenon. By characterizing the main sources, drivers, message types, dissemination channels, audiences, and health-related effects of misinformation, our findings underscore the importance of a shared language across disciplines to improve understanding and intervention strategies. The development of a structured approach to health misinformation contributes not only to advancing research in this field but also to enhancing public health responses in future crises. Furthermore, the impact of misinformation on vulnerable groups, trust in (health) institutions, and overall public health outcomes underscores the necessity of targeted interventions that integrate media literacy, fact-checking mechanisms, and collaborative efforts between researchers, policymakers, health organizations, and digital platforms. As the digital information ecosystem continues to evolve, addressing misinformation requires a dynamic and interdisciplinary approach that accounts for both technological advancements and societal behavioral patterns. Our proposed framework serves as a foundational step toward mitigating the spread of health misinformation and fostering a more resilient and informed society in the face of future health crises.
Acknowledgments
We would like to acknowledge the support of the University Research Institute for Sustainable Social Development and the University of Cádiz. The publication is part of the project NETDYNAMIC (CNS2022-135907), funded by MCIN/AEI/10.13039/501100011033 and by the European Union “Next Generation EU”/PRTR. This study has also been supported by the project DCODES (PID2020-118589RB-I00), granted by the Spanish Ministry of Science and Innovation and financed by MCIN/AEI/10.13039/501100011033.
Conflicts of Interest
None declared.
Search strategy.
DOCX File, 15 KB

Data collection.
XLSX File (Microsoft Excel File), 187 KB

PRISMA checklist.
PDF File (Adobe PDF File), 116 KB

References
- Allcott H, Gentzkow M, Yu C. Trends in the diffusion of misinformation on social media. Res Polit Internet. 2019;6(2):1-15. [FREE Full text] [CrossRef]
- Suarez-Lledo V, Alvarez-Galvez J. Prevalence of health misinformation on social media: systematic review. J Med Internet Res. 2021;23(1):e17187. [FREE Full text] [CrossRef] [Medline]
- Törnberg P. Echo chambers and viral misinformation: modeling fake news as complex contagion. PLoS One. 2018;13(9):e0203958. [FREE Full text] [CrossRef] [Medline]
- Xian J, Yang D, Pan L, Wang W, Wang Z. Misinformation spreading on correlated multiplex networks. Chaos. 2019;29(11):113123. [CrossRef] [Medline]
- Fernandez-Luque L, Imran M. Humanitarian health computing using artificial intelligence and social media: a narrative literature review. Int J Med Inform. 2018;114:136-142. [CrossRef] [Medline]
- Alvarez-Galvez J, Suarez-Lledo V, Rojas-Garcia A. Determinants of infodemics during disease outbreaks: a systematic review. Front Public Health. 2021;9:603603. [FREE Full text] [CrossRef] [Medline]
- Roozenbeek J, Schneider C, Dryhurst S, Kerr J, Freeman A, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. R Soc Open Sci. 2020;7(10):201199. [FREE Full text] [CrossRef] [Medline]
- Ho K, Chan J, Chiu D. Fake news and misinformation during the pandemic: what we know and what we do not know. IT Prof. 2022;24(2):19-24. [CrossRef]
- Lwin M, Lee S, Panchapakesan C, Tandoc E. Mainstream news media's role in public health communication during crises: assessment of coverage and correction of COVID-19 misinformation. Health Commun. 2023;38(1):160-168. [CrossRef] [Medline]
- Parker L, Byrne J, Goldwater M, Enfield N. Misinformation: an empirical study with scientists and communicators during the COVID-19 pandemic. BMJ Open Sci. 2021;5(1):e100188. [FREE Full text] [CrossRef] [Medline]
- Suarez-Lledo V, Alvarez-Galvez J. Assessing the role of social bots during the COVID-19 pandemic: infodemic, disagreement, and criticism. J Med Internet Res. 2022;24(8):e36085. [FREE Full text] [CrossRef] [Medline]
- Meyrowitsch D, Jensen A, Sørensen JB, Varga TV. AI chatbots and (mis)information in public health: impact on vulnerable communities. Front Public Health. 2023;11:1226776. [FREE Full text] [CrossRef] [Medline]
- Krishna A, Thompson T. Misinformation about health: a review of health communication and misinformation scholarship. Am Behav Sci. 2019;65(2):316-332. [CrossRef]
- Vraga E, Bode L. Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. Polit Commun. 2020;37(1):136-144. [FREE Full text] [CrossRef]
- El Mikati IK, Hoteit R, Harb T, El Zein O, Piggott T, Melki J, et al. Defining misinformation and related terms in health-related literature: scoping review. J Med Internet Res. 2023;25:e45731. [FREE Full text] [CrossRef] [Medline]
- Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019;240:112552. [FREE Full text] [CrossRef] [Medline]
- Camargo CQ, Simon FM. Mis-and disinformation studies are too big to fail: six suggestions for the field's future. Misinformation Review. Cambridge. Harvard Kennedy School; 2022. URL: https://misinforeview.hks.harvard.edu/article/mis-and-disinformation-studies-are-too-big-to-fail-six-suggestions-for-the-fields-future/ [accessed 2025-11-04]
- Larson H. The biggest pandemic risk? Viral misinformation. Nature. 2018;562(7727):309. [CrossRef] [Medline]
- Pickles K, Cvejic E, Nickel B, Copp T, Bonner C, Leask J, et al. COVID-19 misinformation trends in Australia: prospective longitudinal national survey. J Med Internet Res. 2021;23(1):e23805. [FREE Full text] [CrossRef] [Medline]
- Kim S, Capasso A, Ali S, Headley T, DiClemente R, Tozan Y. What predicts people's belief in COVID-19 misinformation? A retrospective study using a nationwide online survey among adults residing in the United States. BMC Public Health. 2022;22(1):2114. [FREE Full text] [CrossRef] [Medline]
- O'Grady C. Poor trials of health steps are worse than none, scientists say. Science. 2021;374(6572):1180-1181. [CrossRef] [Medline]
- Bagdasarian N, Cross G, Fisher D. Rapid publications risk the integrity of science in the era of COVID-19. BMC Med. 2020;18(1):192. [FREE Full text] [CrossRef] [Medline]
- Schonhaut L, Costa-Roldan I, Oppenheimer I, Pizarro V, Han D, Díaz F. Scientific publication speed and retractions of COVID-19 pandemic original articles. Rev Panam Salud Publica. 2022;46:e25. [FREE Full text] [CrossRef] [Medline]
- Giachanou A, Zhang X, Barrón-Cedeño A, Koltsova O, Rosso P. Online information disorder: fake news, bots and trolls. Int J Data Sci Anal. 2022;13(4):265-269. [FREE Full text] [CrossRef] [Medline]
- Fetzer JH. Disinformation: the use of false information. Minds Mach. 2004;14:231-240. [FREE Full text]
- Hernon P. Disinformation and misinformation through the internet: findings of an exploratory study. Gov Inf Q. 1995;12(2):133-139.
- Baptista JP, Gradim A. Who believes in fake news? Identification of political (a)symmetries. Soc Sci. 2022;11(10):460. [FREE Full text] [CrossRef]
- Stureborg R, Nichols J, Dhingra B, Yang J, Orenstein W, Bednarczyk R, et al. Development and validation of vaxConcerns: a taxonomy of vaccine concerns and misinformation with crowdsource-viability. Vaccine. 2024;42(10):2672-2679. [CrossRef] [Medline]
- Fasce A, Schmid P, Holford D, Bates L, Gurevych I, Lewandowsky S. A taxonomy of anti-vaccination arguments from a systematic literature review and text modelling. Nat Hum Behav. 2023;7(9):1462-1480. [CrossRef] [Medline]
- van der Linden S. Misinformation: susceptibility, spread, and interventions to immunize the public. Nat Med. 2022;28(3):460-467. [CrossRef] [Medline]
- Bastani P, Hakimzadeh SM, Bahrami MA. Designing a conceptual framework for misinformation on social media: a qualitative study on COVID-19. BMC Res Notes. 2021;14(1):408. [FREE Full text] [CrossRef] [Medline]
- Ishizumi A, Kolis J, Abad N, Prybylski D, Brookmeyer K, Voegeli C, et al. Beyond misinformation: developing a public health prevention framework for managing information ecosystems. Lancet Public Health. 2024;9(6):e397-e406. [FREE Full text] [CrossRef] [Medline]
- Page M, McKenzie J, Bossuyt P, Boutron I, Hoffmann T, Mulrow C, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [FREE Full text] [CrossRef] [Medline]
- Serrano-Macías M, Álvarez-Gálvez J, Camacho-García M. QUEST Checklist Internet. Zenodo. 2025. URL: https://doi.org/10.5281/zenodo.15697534 [accessed 2025-10-18]
- Häfliger C, Diviani N, Rubinelli S. Communication inequalities and health disparities among vulnerable groups during the COVID-19 pandemic - a scoping review of qualitative and quantitative evidence. BMC Public Health. 2023;23(1):428. [FREE Full text] [CrossRef] [Medline]
- Marques MJ, Gama A, Mendonça J, Fernandes A, Osborne R, Dias S. Assessing health literacy among migrants and associated socioeconomic factors. Eur J Public Health Internet. 2021;31(Supplement_3):ckab165.410. [CrossRef]
- Roozenbeek J, Culloty E, Suiter J. Countering misinformation. Eur Psychol. 2023;28(3):189-205. [FREE Full text] [CrossRef]
- Casino G. [Communication in times of pandemic: information, disinformation, and provisional lessons from the coronavirus crisis]. Gac Sanit. 2022;36 Suppl 1:S97-S104. [FREE Full text] [CrossRef] [Medline]
- Ahmad T, Aliaga-Lazarte EAA, Mirjalili S. A systematic literature review on fake news in the COVID-19 pandemic: can AI propose a solution? Appl Sci. 2022;12(24):12727. [FREE Full text] [CrossRef]
- Chavda VP, Sonak SS, Munshi NK, Dhamade PN. Pseudoscience and fraudulent products for COVID-19 management. Environ Sci Pollut Res Int. 2022;29(42):62887-62912. [FREE Full text] [CrossRef] [Medline]
- Mukhtar S. Psychology and politics of COVID-19 misinfodemics: why and how do people believe in misinfodemics? Int Sociol. 2020;36(1):111-123. [FREE Full text] [CrossRef]
- Ying W, Cheng C. Public emotional and coping responses to the COVID-19 infodemic: a review and recommendations. Front Psychiatry. 2021;12:755938. [FREE Full text] [CrossRef] [Medline]
- Sarkar MA, Ozair A, Singh KK, Subash N, Bardhan M, Khulbe Y. SARS-CoV-2 vaccination in India: considerations of hesitancy and bioethics in global health. Ann Glob Health. 2021;87(1):124. [FREE Full text] [CrossRef] [Medline]
- Hakim MS. SARS-CoV-2, COVID-19, and the debunking of conspiracy theories. Rev Med Virol. 2021;31(6):e2222. [FREE Full text] [CrossRef] [Medline]
- Cascini F, Pantovic A, Al-Ajlouni Y, Failla G, Puleo V, Melnyk A, et al. Social media and attitudes towards a COVID-19 vaccination: a systematic review of the literature. eClinicalMedicine. 2022;48:101454. [FREE Full text] [CrossRef] [Medline]
- Ries M. The COVID-19 infodemic: mechanism, impact, and counter-measures—a review of reviews. Sustainability. 2022;14(5):2605. [FREE Full text] [CrossRef]
- Dow BJ, Johnson AL, Wang CS, Whitson J, Menon T. The COVID-19 pandemic and the search for structure: social media and conspiracy theories. Soc Personal Psychol Compass. 2021;15(9):e12636. [FREE Full text] [CrossRef] [Medline]
- Giordani RCF, Donasolo JPG, Ames VDB, Giordani R. The science between the infodemic and other post-truth narratives: challenges during the pandemic. Cien Saude Colet. 2021;26(7):2863-2872. [FREE Full text] [CrossRef] [Medline]
- Perveen S, Akram M, Nasar A, Arshad-Ayaz A, Naseem A. Vaccination-hesitancy and vaccination-inequality as challenges in Pakistan's COVID-19 response. J Community Psychol. 2022;50(2):666-683. [FREE Full text] [CrossRef] [Medline]
- Shobowale O. A systematic review of the spread of information during pandemics: a case of the 2020 COVID-19 virus. J African Media Stud. 2021;13:221-234. [FREE Full text] [CrossRef]
- Tharwani ZH, Kumar P, Marfani WB, Shaeen S, Adnan A, Mohanan P, et al. What has been learned about COVID-19 vaccine hesitancy in Pakistan: insights from a narrative review. Health Sci Rep. 2022;5(6):e940. [FREE Full text] [CrossRef] [Medline]
- van Mulukom V, Pummerer L, Alper S, Bai H, Čavojová V, Farias J, et al. Antecedents and consequences of COVID-19 conspiracy beliefs: a systematic review. Soc Sci Med. 2022;301:114912. [FREE Full text] [CrossRef] [Medline]
- La Bella E, Allen C, Lirussi F. Communication vs evidence: what hinders the outreach of science during an infodemic? A narrative review. Integr Med Res. 2021;10(4):100731. [FREE Full text] [CrossRef] [Medline]
- Hansson S, Orru K, Torpan S, Bäck A, Kazemekaityte A, Meyer S, et al. COVID-19 information disorder: six types of harmful information during the pandemic in Europe. Journal of Risk Research. 2021;24(3-4):380-393. [FREE Full text] [CrossRef]
- Shan Y, Ji M, Xie W, Zhang X, Ng Chok H, Li R, et al. COVID-19-related health inequalities induced by the use of social media: systematic review. JMIR Infodemiology. 2022;2(2):e38453. [FREE Full text] [CrossRef] [Medline]
- Gabarron E, Oyeyemi S, Wynn R. COVID-19-related misinformation on social media: a systematic review. Bull World Health Organ. 2021;99(6):455-463A. [FREE Full text] [CrossRef] [Medline]
- Riaz W, Lee YS. Examining the association between social media misinformation and COVID-19 vaccine hesitancy: a systematic review. KPAQ. 2021;33(2):437-457. [FREE Full text] [CrossRef]
- Clemente-Suárez VJ, Navarro-Jiménez E, Simón-Sanjurjo JA, Beltran-Velasco A, Laborde-Cárdenas CC, Benitez-Agudelo J, et al. Mis-dis information in COVID-19 health crisis: a narrative review. Int J Environ Res Public Health. 2022;19(9):5321. [FREE Full text] [CrossRef] [Medline]
- Ullah I, Khan KS, Tahir MJ, Ahmed A, Harapan H. Myths and conspiracy theories on vaccines and COVID-19: potential effect on global vaccine refusals. Vacunas. 2021;22(2):93-97. [FREE Full text] [CrossRef] [Medline]
- Kużelewska E, Tomaszuk M. Rise of conspiracy theories in the pandemic times. Int J Semiot Law. 2022;35(6):2373-2389. [FREE Full text] [CrossRef] [Medline]
- Kemei J, Alaazi D, Tulli M, Kennedy M, Tunde-Byass M, Bailey P, et al. A scoping review of COVID-19 online mis/disinformation in Black communities. J Glob Health. 2022;12:05026. [CrossRef] [Medline]
- Biswas MR, Alzubaidi MS, Shah U, Abd-Alrazaq A, Shah Z. A scoping review to find out worldwide COVID-19 vaccine hesitancy and its underlying determinants. Vaccines (Basel). 2021;9(11):1243. [FREE Full text] [CrossRef] [Medline]
- Lewandowsky S, Holford D, Schmid P. Public policy and conspiracies: the case of mandates. Curr Opin Psychol. 2022;47:101427. [FREE Full text] [CrossRef] [Medline]
- Nouri SN, Cohen Y, Madhavan MV, Slomka PJ, Iskandrian AE, Einstein AJ. Preprint manuscripts and servers in the era of coronavirus disease 2019. J Eval Clin Pract. 2021;27(1):16-21. [CrossRef] [Medline]
- Vlasschaert C, Topf JM, Hiremath S. Proliferation of papers and preprints during the coronavirus disease 2019 pandemic: progress or problems with peer review? Adv Chronic Kidney Dis. 2020;27(5):418-426. [FREE Full text] [CrossRef] [Medline]
- Pian W, Chi J, Ma F. The causes, impacts and countermeasures of COVID-19 "Infodemic": a systematic review using narrative synthesis. Inf Process Manag. 2021;58(6):102713. [FREE Full text] [CrossRef] [Medline]
- de Rosa AS, Mannarini T. The "Invisible Other": social representations of COVID-19 pandemic in media and institutional discourse. Pap Soc Represent. 2020;29(2):5.1-5.35. [FREE Full text]
- Meese J, Frith J, Wilken R. COVID-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020;177(1):30-46. [CrossRef]
- Abidin C, Lee J, Barbetta T, Miao W. Influencers and COVID-19: reviewing key issues in press coverage across Australia, China, Japan, and South Korea. Media Int Aust. 2020;178(1):114-135. [FREE Full text] [CrossRef]
- Romate J, Rajkumar E, Gopi A, Abraham J, Rages J, Lakshmi R, et al. What contributes to COVID-19 vaccine hesitancy? A systematic review of the psychological factors associated with COVID-19 vaccine hesitancy. Vaccines (Basel). 2022;10(11):1777. [FREE Full text] [CrossRef] [Medline]
- Ali S. Hum Arenas. 2022;5(2):337-352. [CrossRef]
- Aiyer I, Shaik L, Kashyap R, Surani S. COVID-19 misinformation: a potent co-factor in the COVID-19 pandemic. Cureus. 2022;14(10):e30026. [FREE Full text] [CrossRef] [Medline]
- Arigbede OM, Aladeniyi OB, Buxbaum SG, Arigbede O. The use of five public health themes in understanding the roles of misinformation and education toward disparities in racial and ethnic distribution of COVID-19. Cureus. 2022;14(10):e30008. [FREE Full text] [CrossRef] [Medline]
- Tsao SF, Chen H, Tisseverasinghe T, Yang Y, Li L, Butt Z. What social media told us in the time of COVID-19: a scoping review. Lancet Digit Health. 2021;3(3):e175-e194. [FREE Full text] [CrossRef] [Medline]
- Balakrishnan V, Rahman LHA, Tan JK, Lee Y. COVID-19 fake news among the general population: motives, sociodemographic, attitude/behavior and impacts – a systematic review. Online Inf Rev. 2023;47:944-973. [FREE Full text] [CrossRef]
- Balakrishnan V, Ng WZ, Soo MC, Han G, Lee C. Infodemic and fake news - a comprehensive overview of its global magnitude during the COVID-19 pandemic in 2021: a scoping review. Int J Disaster Risk Reduct. 2022;78:103144. [FREE Full text] [CrossRef] [Medline]
- Sharma L, Joshi K, Acharya T, Dwivedi M, Sethy G. Infodemics during era of COVID-19 pandemic: a review of literature. J Fam Med Prim Care. 2022;11(8):4236-4239. [FREE Full text] [CrossRef] [Medline]
- Czudy Z, Matuszczak M, Donderska M, Haczyński J. The role of (dis)information in society during the COVID-19 pandemic. Probl Zarządzania. 2021;19(92):49-63. [FREE Full text] [CrossRef]
- Ali SM, Hashmi A, Hussain T. Causes and treatment of COVID-19: myths vs facts. Pak J Pharm Sci. 2020;33(4):1731-1734. [Medline]
- Joseph AM, Fernandez V, Kritzman S, Eaddy I, Cook O, Lambros S, et al. COVID-19 misinformation on social media: a scoping review. Cureus. 2022;14(4):e24601. [FREE Full text] [CrossRef] [Medline]
- Goldsmith LP, Rowland-Pomp M, Hanson K, Deal A, Crawshaw A, Hayward S, et al. Use of social media platforms by migrant and ethnic minority populations during the COVID-19 pandemic: a systematic review. BMJ Open. 2022;12(11):e061896. [FREE Full text] [CrossRef] [Medline]
- Hussain B, Latif A, Timmons S, Nkhoma K, Nellums L. Overcoming COVID-19 vaccine hesitancy among ethnic minorities: a systematic review of UK studies. Vaccine. 2022;40(25):3413-3432. [FREE Full text] [CrossRef] [Medline]
- Roy D, Biswas M, Islam E, Azam M. Potential factors influencing COVID-19 vaccine acceptance and hesitancy: a systematic review. PLOS One. 2022;17(3):e0265496. [FREE Full text] [CrossRef] [Medline]
- Tsamakis K, Tsiptsios D, Stubbs B, Ma R, Romano E, Mueller C, et al. Summarising data and factors associated with COVID-19 related conspiracy theories in the first year of the pandemic: a systematic review and narrative synthesis. BMC Psychol. 2022;10(1):244. [FREE Full text] [CrossRef] [Medline]
- Ripp T, Röer JP. Systematic review on the association of COVID-19-related conspiracy belief with infection-preventive behavior and vaccination willingness. BMC Psychol. 2022;10(1):66. [FREE Full text] [CrossRef] [Medline]
- Rocha YM, de Moura GA, Desidério GA, de Oliveira CH, Lourenço FD, de Figueiredo Nicolete LD. The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. J Public Health. 2023;31:1007-1016. [FREE Full text] [CrossRef] [Medline]
- Caceres MMF, Sosa JP, Lawrence JA, Sestacovschi C, Tidd-Johnson A, Rasool MHU, et al. The impact of misinformation on the COVID-19 pandemic. AIMS Public Health. 2022;9(2):262-277. [FREE Full text] [CrossRef] [Medline]
- do Nascimento IJB, Pizarro AB, Almeida JM, Azzopardi-Muscat N, Gonçalves MA, Björklund M, et al. Infodemics and health misinformation: a systematic review of reviews. Bull World Health Organ. 2022;100(9):544-561. [FREE Full text] [CrossRef] [Medline]
- de Albuquerque Veloso Machado M, Roberts B, Wong BLH, van Kessel R, Mossialos E. The relationship between the COVID-19 pandemic and vaccine hesitancy: a scoping review of literature until August 2021. Front Public Health. 2021;9:747787. [FREE Full text] [CrossRef] [Medline]
- Jungkunz S. Political polarization during the COVID-19 pandemic. Front Political Sci. 2021;3:622512. [CrossRef]
- Paakkari L, Okan O. COVID-19: health literacy is an underestimated problem. Lancet Public Health. 2020;5(5):e249-e250. [FREE Full text] [CrossRef] [Medline]
- Himelein-Wachowiak M, Giorgi S, Devoto A, Rahman M, Ungar L, Schwartz H, et al. Bots and misinformation spread on social media: implications for COVID-19. J Med Internet Res. 2021;23(5):e26933. [FREE Full text] [CrossRef] [Medline]
- Sule S, DaCosta MC, DeCou E, Gilson C, Wallace K, Goff S. Communication of COVID-19 misinformation on social media by physicians in the US. JAMA Netw Open. 2023;6(8):e2328928. [FREE Full text] [CrossRef] [Medline]
- Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. Soc Netw Anal Min. 2023;13(1):30. [FREE Full text] [CrossRef] [Medline]
- Kaper MS, de Winter AF, Bevilacqua R, Giammarchi C, McCusker A, Sixsmith J, et al. Positive outcomes of a comprehensive health literacy communication training for health professionals in three European countries: a multi-centre pre-post intervention study. Int J Environ Res Public Health. 2019;16(20):3923. [FREE Full text] [CrossRef] [Medline]
- Enders AM, Uscinski JE, Klofstad C, Stoler J. The different forms of COVID-19 misinformation and their consequences. Harvard Kennedy Sch Misinformation Rev. 2022. URL: https://misinforeview.hks.harvard.edu/article/the-different-forms-of-covid-19-misinformation-and-their-consequences/ [accessed 2025-10-18]
- Caled D, Silva MJ. Digital media and misinformation: an outlook on multidisciplinary strategies against manipulation. J Comput Soc Sci. 2022;5(1):123-159. [CrossRef] [Medline]
- Chou WYS, Oh A, Klein WMP. Addressing health-related misinformation on social media. JAMA. 2018;320(23):2417-2418. [CrossRef] [Medline]
- Sohn D. Spiral of silence in the social media era: a simulation approach to the interplay between social networks and mass media. Commun Res. 2019;49(1):139-166. [FREE Full text] [CrossRef]
- Southwell BG, Brennen JSB, Paquin R, Boudewyns V, Zeng J. Defining and measuring scientific misinformation. Ann Am Acad Political Soc Sci. 2022;700(1):98-111. [FREE Full text] [CrossRef]
- National Academies of Sciences, Engineering, and Medicine. Viswanath K, Taylor TE, Rhodes HG, editors. Understanding and Addressing Misinformation About Science. Washington, D.C.: National Academies Press; 2024.
Abbreviations
| AI: artificial intelligence |
| PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| QUEST: Quality Evaluation of Scientific Texts |
Edited by AS Bhagavathula; submitted 29.May.2024; peer-reviewed by Y Zhang, N Ismail, B Southwell; comments to author 04.Jan.2025; revised version received 26.Feb.2025; accepted 14.Jul.2025; published 21.Nov.2025.
Copyright©Javier Alvarez-Galvez, Jesus Carretero-Bravo, Carolina Lagares-Franco, Begoña Ramos-Fiol, Esther Ortega-Martin. Originally published in JMIR Public Health and Surveillance (https://publichealth.jmir.org), 21.Nov.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Public Health and Surveillance, is properly cited. The complete bibliographic information, a link to the original publication on https://publichealth.jmir.org, as well as this copyright and license information must be included.