JMIR Publications



JMIR Public Health and Surveillance


Published on 29.05.15 in Vol 1, No 1 (2015): Jan-Jun


    Short Paper

    Using Social Media to Perform Local Influenza Surveillance in an Inner-City Hospital: A Retrospective Observational Study

    1Department of Engineering Management and Systems Engineering, The George Washington University, Washington, DC, United States

    2Human Language Technology Center of Excellence, Johns Hopkins University, Baltimore, MD, United States

    3Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States

    4Department of Emergency Medicine, Johns Hopkins University, Baltimore, MD, United States

    Corresponding Author:

    David Andre Broniatowski, PhD

    Department of Engineering Management and Systems Engineering

    The George Washington University

    Science and Engineering Hall

    800 22nd Street NW, #2700

    Washington, DC 20052

    United States

    Phone: 1 2029943751

    Fax: 1 2029943751

    Email:


    ABSTRACT

    Background: Public health officials and policy makers in the United States expend significant resources at the national, state, county, and city levels to measure the rate of influenza infection. These individuals rely on influenza infection rate information to make important decisions during the course of an influenza season, driving vaccination campaigns, clinical guidelines, and medical staffing. Web and social media data sources have emerged as attractive alternatives to supplement existing practices. While traditional surveillance methods take 1-2 weeks, and significant labor, to produce an infection estimate in each locale, web and social media data are available in near real-time for a broad range of locations.

    Objective: The objective of this study was to analyze the efficacy of flu surveillance from combining data from the websites Google Flu Trends and HealthTweets at the local level. We considered both emergency department influenza-like illness cases and laboratory-confirmed influenza cases for a single hospital in the City of Baltimore.

    Methods: This was a retrospective observational study comparing estimates of influenza activity from Google Flu Trends and Twitter to actual counts of individuals with laboratory-confirmed influenza, and counts of individuals presenting to the emergency department with influenza-like illness cases. Data were collected from November 20, 2011 through March 16, 2014. Each parameter was evaluated on the municipal, regional, and national scale. We examined the utility of social media data for tracking actual influenza infection at the municipal, state, and national levels. Specifically, we compared the efficacy of Twitter and Google Flu Trends data.

    Results: We found that municipal-level Twitter data were more effective than regional and national data when tracking actual influenza infection rates in a Baltimore inner-city hospital. When combined, national-level Twitter and Google Flu Trends data outperformed each data source individually. In addition, influenza-like illness data at all levels of geographic granularity were best predicted by national Google Flu Trends data.

    Conclusions: In order to overcome sensitivity to transient events, such as the news cycle, the best-fitting Google Flu Trends model relies on a 4-week moving average, suggesting that it may also be sacrificing sensitivity to transient fluctuations in influenza infection to achieve predictive power. Implications for influenza forecasting are discussed in this report.

    JMIR Public Health Surveill 2015;1(1):e5

    doi:10.2196/publichealth.4472

    KEYWORDS



    Introduction

    Public health officials and policy makers rely on influenza infection rate information to make important decisions during the course of an influenza season. Whereas influenza surveillance has traditionally been conducted using laboratory data, hospitalizations, and physician visits for influenza-like illness (ILI), web and social media data sources have emerged as attractive alternatives to supplement existing practices. While traditional surveillance methods take 1-2 weeks, and significant labor, to produce an infection estimate in each locale, web and social media data are available in near real-time for a broad range of locations. Studies have demonstrated that web queries [1-3], Twitter messages [4-12], and other sources (eg, Wikipedia [13], mobile app reporting [14]) may be productively mined for influenza surveillance data. New resources like Google Flu Trends [1], HealthTweets [15,16](Figure 1), and Flu Near You [14] deliver near-real time estimates of infection rates.

    However, few have examined the efficacy of local surveillance [12,17,18]. In this study, we analyzed the efficacy of local flu surveillance from Google Flu Trends and HealthTweets. Whereas previous studies considered either Google or Twitter in isolation, we evaluated multiple trends available from both. Furthermore, instead of restricting our study to hospitals designated as ILI sentinels, or emergency department ILI rates, we considered both emergency department ILI and laboratory-confirmed influenza cases for a single hospital in the city of Baltimore. This enabled us to evaluate the impact on specific care centers when making influenza response decisions, such as staffing and resource allocation.

    Figure 1. Screenshot of HealthTweets.

    Methods

    Study Population and Setting

    This was a retrospective observational study comparing estimates of influenza activity from Google Flu Trends and Twitter to actual counts of individuals with laboratory-confirmed influenza, and counts of individuals presenting to the emergency department with ILI. Each parameter was evaluated on the municipal, regional, and national scale.

    Data Collection and Methods of Measurement

    Data were collected from November 20, 2011 through March 16, 2014. All measurements were recorded weekly to allow for direct comparison between data sources. Following the Centers for Disease Control and Prevention (CDC) convention, each week summed the data points from Sunday through the following Saturday. The number of municipal- (city) level subjects was estimated by evaluating the number of patients presenting to an urban academic emergency department in Baltimore, Maryland with an annual volume of over 60,000 adult and 24,000 pediatric visits. The number of confirmed influenza cases was determined by summing the number of emergency department visits with laboratory-confirmed influenza that occurred during each week. Similarly, the number of patients with ILI was determined by summing the number of emergency department patients who reported fever with cough or sore throat each week. Regional data were collected via the CDC surveillance reports for Health and Human Services (HHS) Region 3, including both the percentage of patients reporting ILI and the percentage of tests positive for influenza. National data were collected from the CDC surveillance report of the nationwide percentage of patients reporting ILI and the total percentage of patients testing positive for influenza.
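    The Sunday-through-Saturday weekly binning described above can be sketched as follows. This is a minimal illustration in Python; the function names and example dates are ours, not from the study:

```python
from datetime import date, timedelta

def week_start(d: date) -> date:
    """Return the Sunday that begins the surveillance week containing d.
    Python's weekday() gives Monday=0 .. Sunday=6, so a Sunday maps
    to an offset of 0 and every other day steps back to the prior Sunday."""
    return d - timedelta(days=(d.weekday() + 1) % 7)

def weekly_counts(case_dates):
    """Aggregate individual case dates into Sunday-through-Saturday weekly bins."""
    counts = {}
    for d in case_dates:
        wk = week_start(d)
        counts[wk] = counts.get(wk, 0) + 1
    return counts
```

For example, November 20, 2011 (the first day of the study period) is itself a Sunday, so it anchors the first weekly bin.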

    Google Flu Trends data for the United States, the state of Maryland, and the city of Baltimore were downloaded directly from the Google Flu Trends website [19]. Twitter data for the same three locations were obtained from the HealthTweets website [15], an online platform for public health surveillance aimed at sharing the latest research results on Twitter data with the scientific community and public officials. The underlying data were generated using a sequence of supervised machine-learning algorithms [10,12], namely logistic regression classifiers, the first of which identified tweets that were relevant to health. Next, tweets that were about influenza were isolated. The final classifier separated tweets that were about reported influenza infection from those that only reported awareness of the flu. The tweets indicating influenza infection constituted our dataset. Message locations were identified using Carmen [20], a software package that infers tweet locations using Global Positioning System (GPS) coordinates and self-reported locations from the free text of the user biographic profiles.
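    The three-stage filtering pipeline described above can be sketched schematically. The classifiers below are trivial keyword stand-ins for the study's trained logistic regression models; only the cascade structure (health relevance, then influenza topic, then infection vs mere awareness) mirrors the text:

```python
def is_health_related(tweet: str) -> bool:
    # Stand-in for stage 1: the health-relevance classifier.
    return any(w in tweet.lower() for w in ("sick", "flu", "fever", "doctor"))

def is_about_flu(tweet: str) -> bool:
    # Stand-in for stage 2: the influenza-topic classifier.
    return "flu" in tweet.lower()

def reports_infection(tweet: str) -> bool:
    # Stand-in for stage 3: infection report vs mere flu awareness.
    return not any(w in tweet.lower() for w in ("news", "season", "shot"))

def flu_infection_tweets(tweets):
    """Apply the three classifiers in sequence; keep only tweets that
    pass every stage, mirroring the cascade described in the text."""
    return [t for t in tweets
            if is_health_related(t) and is_about_flu(t) and reports_infection(t)]
```

A cascade like this lets each stage be trained on its own labeled data, and cheap early stages discard most irrelevant messages before the harder infection-vs-awareness distinction is made.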

    Statistical Analysis

    Data were analyzed by evaluating weekly trends over time using the Box-Jenkins procedure [21] applied to each data source (influenza tests at our medical center, ILI at our medical center, % reported flu cases in HHS Region 3 and the USA, and % reported ILI in HHS Region 3 and the USA) in order to control for autocorrelation in the corresponding time series. We next fit an autoregressive integrated moving average model with exogenous covariates (ARIMAX) to each data time series, X_t, where p, d, and q are the respective autoregressive, differencing, and moving average orders of the model (Figure 2, part a). The φ_i and θ_i are the autoregressive and moving average parameters, respectively, ε_t is a normally distributed error term with a mean of 0, L is a lag operator defined as in Figure 2, part b, and m_t is defined as in Figure 2, part c, where y_t is a series of predictors (eg, Twitter and/or Google Flu Trends data), the η_i are a series of predictor weights, and b is the total number of predictor time series.
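    For reference, an ARIMAX(p, d, q) model with these symbol definitions takes the standard form below. This is our reconstruction from the definitions given in the text, not a reproduction of Figure 2, and the exact arrangement of the exogenous term in the figure may differ:

```latex
\begin{align}
\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right)(1 - L)^d X_t
  &= \left(1 + \sum_{i=1}^{q} \theta_i L^i\right)\varepsilon_t + m_t
  && \text{(part a)}\\[4pt]
L^k X_t &= X_{t-k} && \text{(part b)}\\[4pt]
m_t &= \sum_{i=1}^{b} \eta_i\, y_{i,t} && \text{(part c)}
\end{align}
```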

    We chose the autoregressive, differencing, and moving average terms of each model that minimized its Akaike Information Criterion (AIC), subject to the constraint that each model used the same degree of differencing for each data source. This constraint was imposed to enable comparison across social media predictors (ie, Twitter, Google Flu Trends, or both). All statistical analyses were conducted using the R Project for Statistical Computing, version 3.0.2 (The R Foundation for Statistical Computing). Specifically, we used the "arima()" function in the forecast package [22]. Parameter selection was informed by the "auto.arima()" function, using the Hyndman and Khandakar algorithm [23]. Deviations from the algorithm's output were then examined by hand, and deviating parameters were chosen only if they minimized AIC.
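    Model selection by AIC, as described above, trades goodness of fit against parameter count: AIC = 2k − 2 ln L̂, where k is the number of fitted parameters and L̂ is the maximized likelihood. A minimal sketch of the criterion (the candidate names and values below are illustrative, not the study's results):

```python
def aic(num_params: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: 2k - 2*ln(L-hat). Lower is better."""
    return 2 * num_params - 2 * log_likelihood

def best_model(candidates):
    """candidates: list of (name, num_params, log_likelihood) tuples.
    Return the name of the model minimizing AIC, mirroring the
    selection procedure described in the text."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```

Note that a model with more parameters can still win under AIC if its likelihood gain outweighs the 2-per-parameter penalty, which is exactly the pattern reported below for the more complex Google Flu Trends models.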

    Figure 2. Equations defining the ARIMAX model.

    Results

    Table 1 summarizes the results of each ARIMA model incorporating Twitter and Google Flu Trends data. Our results show that Baltimore-area Twitter data provided a better estimate of actual influenza cases reported in the Baltimore metropolitan area when compared to state- and national-level Twitter data (see Figure 3). Furthermore, a combination of Twitter and Google Flu Trends data sources outperformed either Twitter or Google Flu Trends individually when predicting actual influenza outbreaks at municipal and regional levels.

    Table 1. Log-likelihood (AIC) for each surveillance method.
    Figure 3. Plot of weekly confirmed influenza cases (right axis) as compared to standardized Baltimore social media data (left axis).

    When directly comparing models that rely only on one data source (ie, Twitter or Google Flu Trends but not both), we found that the best-fitting Twitter models were simple whereas the best-fitting Google Flu Trends models generally required more parameters. For example, at the municipal level, the best-fitting Twitter model did not require any autoregressive or moving average terms, whereas the best-fitting Google Flu Trends model required a 4-week moving average of Google Flu Trends data and an autoregressive term. In general, these more complex Google Flu Trends models outperformed the best-fitting Twitter models. Although these Google Flu Trends models were significantly more complex (ie, one must fit more parameters), they had a lower AIC, indicating that they were also more informative.


    Discussion

    Principal Findings

    Consistent with prior work [18], we found that national-level Google Flu Trends data may be used to track actual influenza cases in the Baltimore area. The fact that a combination of Twitter and Google Flu Trends data at the national (US) level outperformed all other data sources for local and regional confirmed influenza cases indicates that these data sources are not redundant, and that Twitter data contribute information useful to influenza surveillance that is not captured by the corresponding Google Flu Trends data.

    Comparison With Prior Work

    Whereas prior work using Google Flu Trends data has largely focused on US ILI data, we extended this finding to multiple levels of geographic granularity by examining social media surveillance at the regional and city levels as well. We found that US Google Flu Trends data best explained ILI rates at all levels (including the municipal level, see Figure 4). This contrasts with prior research, which found that Google Flu Trends data conflated signals of influenza awareness (eg, media attention) with signals of actual infection, overestimating the flu season's peak prevalence. In addition, this prior work found that there was insufficient control for temporal autocorrelation and a lack of analysis of Google Flu Trends data at local, rather than national, levels [24].

    Figure 4. Plot of weekly influenza-like illness cases (right axis) as compared to standardized US social media data (left axis).

    In this study, we controlled for autocorrelation and exogenous temporal factors using an ARIMAX model. The improved performance of this model might be an indication that the 4-week moving average terms are smoothing out fluctuations due to the news cycle. Nevertheless, because Google Flu Trends data do not explicitly differentiate between signals of influenza awareness and actual infection, this relatively complicated model may buy accuracy at the cost of sensitivity to transient phenomena. Thus, temporary spikes in media coverage are smoothed out, but so would be temporary spikes in influenza infection.
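    The smoothing tradeoff described above is easy to see numerically: a 4-week moving average damps a one-week spike regardless of whether the spike reflects media attention or genuine infection. The data below are illustrative, not from the study:

```python
def moving_average(series, window=4):
    """Trailing moving average; the first window-1 points are dropped
    because a full window is not yet available."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# A flat baseline of 10 cases/week with a single-week spike to 50:
weekly = [10, 10, 10, 50, 10, 10, 10]
smoothed = moving_average(weekly)
# The spike's peak of 50 is flattened to 20 and spread across 4 weeks.
```

Whether the smoothed-out week was a news-driven artifact or a real outbreak, the averaged signal looks the same, which is the sensitivity cost noted in the text.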

    Elsewhere, we have shown that our Twitter data overcome the limitations identified in prior Google Flu Trends studies by filtering out signals of influenza awareness from signals of actual infection and enabling analysis at multiple levels of geographic granularity [12,25]. Furthermore, the fact that the Twitter model is more lightweight means that it is more able to correctly track transient increases in infection when they occur [12]. Finally, municipal-level Twitter data provided a better account of actual influenza cases in Baltimore than did state- or national-level data. This finding is consistent with prior work [12] showing that local Twitter data do contribute information that is useful for municipal surveillance. In contrast, state- and local-level Google Flu Trends data did not improve surveillance when compared to national Google Flu Trends data.

    Limitations

    One limitation of our approach is that it relies upon only one municipality. Furthermore, our analysis only examined three seasons of influenza data, one of which (the 2012-2013 season) is known to have been anomalous. Future work should therefore focus on incorporating data from additional municipalities and influenza seasons.

    Conclusions

    Overall, our results motivate the need for future work examining how social media may be used to track measures relevant to influenza surveillance in multiple different locations and seasons.

    Acknowledgments

    DA Broniatowski and M Dredze were supported in part by the National Institutes of Health under award number 1R01GM114771-01. MJ Paul was supported by a PhD fellowship from Microsoft Research.

    Conflicts of Interest

    M Dredze and MJ Paul serve on the advisory board of SickWeather. There are no other conflicts of interest.

    References

    1. Ginsberg J, Mohebbi MH, Patel RS, Brammer L, Smolinski MS, Brilliant L. Detecting influenza epidemics using search engine query data. Nature 2009 Feb 19;457(7232):1012-1014. [CrossRef] [Medline]
    2. Polgreen PM, Chen Y, Pennock DM, Nelson FD. Using internet searches for influenza surveillance. Clin Infect Dis 2008 Dec 1;47(11):1443-1448 [FREE Full text] [CrossRef] [Medline]
    3. Yuan Q, Nsoesie EO, Lv B, Peng G, Chunara R, Brownstein JS. Monitoring influenza epidemics in China with search query from Baidu. PLoS One 2013;8(5):e64323 [FREE Full text] [CrossRef] [Medline]
    4. Culotta A. Towards detecting influenza epidemics by analyzing Twitter messages. 2010 Presented at: First Workshop on Social Media Analytics; 2010; New York, NY, USA p. 115-122. [CrossRef]
    5. Paul MJ, Dredze M. You are what you Tweet: Analyzing Twitter for public health. 2011 Presented at: ICWSM; 2011; Barcelona, Spain p. 265-272.
    6. Lampos V, Cristianini N. Nowcasting events from the social web with statistical learning. ACM Transactions on Intelligent Systems and Technology (TIST) 2012;3(4):72. [CrossRef]
    7. Dredze M. How Social Media Will Change Public Health. IEEE Intell. Syst 2012 Jul;27(4):81-84. [CrossRef]
    8. Chew C, Eysenbach G. Pandemics in the age of Twitter: content analysis of Tweets during the 2009 H1N1 outbreak. PLoS One 2010;5(11):e14118 [FREE Full text] [CrossRef] [Medline]
    9. Salathé M, Khandelwal S. Assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control. PLoS computational biology 2011;7(10). [CrossRef]
    10. Lamb A, Paul MJ, Dredze M. Separating Fact from Fear: Tracking Flu Infections on Twitter. In: HLT-NAACL. 2013 Presented at: HLT-NAACL; 2013; Atlanta, Georgia, USA p. 789-795.
    11. Gesualdo F, Stilo G, Agricola E, Gonfiantini MV, Pandolfi E, Velardi P, et al. Influenza-like illness surveillance on Twitter through automated learning of naïve language. PLoS One 2013;8(12):e82489 [FREE Full text] [CrossRef] [Medline]
    12. Broniatowski DA, Paul MJ, Dredze M. National and local influenza surveillance through Twitter: an analysis of the 2012-2013 influenza epidemic. PLoS One 2013;8(12):e83672 [FREE Full text] [CrossRef] [Medline]
    13. McIver DJ, Brownstein JS. Wikipedia usage estimates prevalence of influenza-like illness in the United States in near real-time. PLoS computational biology 2014;10(4). [CrossRef]
    14. Chunara R, Aman S, Smolinski M, Brownstein JS. Flu near you: an online self-reported influenza surveillance system in the USA. Online Journal of Public Health Informatics 2013;5(1). [Medline]
    15. Dredze M, Cheng R, Paul MJ, Broniatowski DA. HealthTweets.org: A Platform for Public Health Surveillance using Twitter. In: Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence. 2014 Presented at: AAAI Conference on Artificial Intelligence; 2014; Quebec City, Quebec, Canada.
    16. HealthTweets.org.   URL: http://www.healthtweets.org/accounts/login/?next=/ [accessed 2015-05-22] [WebCite Cache]
    17. Nagel AC, Tsou MH, Spitzberg BH, An L, Gawron JM, Gupta DL, et al. The complex relationship of realspace events and messages in cyberspace: Case study of influenza and pertussis using tweets. J Med Internet Res 2013;15(10). [CrossRef] [Medline]
    18. Dugas AF, Jalalpour M, Gel Y, Levin S, Torcaso F, Igusa T, et al. Influenza forecasting with Google Flu Trends. PLoS One 2013;8(2):e56176 [FREE Full text] [CrossRef] [Medline]
    19. Google Flu Trends.   URL: https://www.google.org/flutrends/us/#US [WebCite Cache]
    20. Dredze M, Paul MJ, Bergsma S, Tran H. Carmen: A twitter geolocation system with applications to public health. 2013 Jun Presented at: AAAI Workshop on Expanding the Boundaries of Health Informatics Using AI (HIAI); 2013; Bellevue, WA p. 20-24.
    21. Box GEP, Jenkins GM, Reinsel GC. Time series analysis: forecasting and control. Hoboken, NJ: John Wiley; 2008.
    22. Hyndman RJ, Khandakar Y. Automatic Time Series Forecasting: The forecast Package for R. Journal of Statistical Software 2008;27(3) [FREE Full text]
    23. Hyndman RJ, Khandakar Y. No 6/07 2007. Monash University, Department of Econometrics and Business Statistics. 2007. Automatic time series for forecasting: The forecast package for R   URL: http://webdoc.sub.gwdg.de/ebook/serien/e/monash_univ/wp6-07.pdf [accessed 2015-05-19] [WebCite Cache]
    24. Lazer D, Kennedy R, King G, Vespignani A. The Parable of Google Flu: Traps in Big Data Analysis. Science 2014 Mar. [CrossRef]
    25. Broniatowski DA, Paul MJ, Dredze M. Twitter: big data opportunities. Science 2014 Jul 11;345(6193):148. [CrossRef] [Medline]


    Abbreviations

    AIC: Akaike information criterion
    ARIMA: Autoregressive integrated moving average
    CDC: Centers for Disease Control and Prevention
    HHS: Health and Human Services
    ILI: Influenza-like illness


    Edited by G Eysenbach; submitted 25.03.15; peer-reviewed by D Mciver; comments to author 29.04.15; revised version received 04.05.15; accepted 05.05.15; published 29.05.15

    ©David Andre Broniatowski, Mark Dredze, Michael J Paul, Andrea Dugas. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 29.05.2015.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Public Health and Surveillance, is properly cited. The complete bibliographic information, a link to the original publication on http://publichealth.jmir.org, as well as this copyright and license information must be included.