
Baseline hospital performance and the impact of medical emergency teams: Modelling vs. conventional subgroup analysis

Abstract

Background

To compare two approaches to the statistical analysis of the relationship between the baseline incidence of adverse events and the effect of medical emergency teams (METs).

Methods

Using data from a cluster randomized controlled trial (the MERIT study), we analysed the relationship between the baseline incidence of adverse events and its change from baseline to the MET activation phase using quadratic modelling techniques. We compared the findings with those obtained with conventional subgroup analysis.

Results

Using linear and quadratic modelling techniques, we found that each unit increase in the baseline incidence of adverse events in MET hospitals was associated with a 0.59 unit subsequent reduction in adverse events (95%CI: 0.33 to 0.86) after MET implementation and activation. This applied to cardiac arrests (0.74; 95%CI: 0.52 to 0.95), unplanned ICU admissions (0.56; 95%CI: 0.26 to 0.85) and unexpected deaths (0.68; 95%CI: 0.45 to 0.90). Control hospitals showed a similar reduction only for cardiac arrests (0.95; 95%CI: 0.56 to 1.32). Comparison using conventional subgroup analysis, on the other hand, detected no significant difference between MET and control hospitals.

Conclusions

Our study showed that, in the MERIT study, when the treatment effect depended on baseline performance, an approach based on regression modelling helped illustrate the nature and magnitude of that dependence while subgroup analysis did not. The ability to assess the nature and magnitude of such dependence may have policy implications. Regression techniques may thus prove useful in analysing data when there is a conditional treatment effect.


Introduction

Following the landmark reports from the Institute of Medicine (IOM), numerous programs designed to improve patient safety have been introduced [1–6]. Rigorous evaluation of such programs provides a considerable challenge. Organisational theory recognizes that the effectiveness of health care interventions is likely to be system dependent. Therefore, understanding system specific organizational characteristics might be important in evaluating the effectiveness of interventions [7–10].

The medical emergency team (MET) was first introduced in Australia in the early 1990s. Its main aim is to reduce unexpected deaths, cardiac arrests and unanticipated Intensive Care Unit (ICU) admissions [11–14]. The MET and similar systems have now been widely adopted [15–17]. Single centre studies using historical controls have supported the effectiveness of the MET [18–26].

However, statistical analysis of data from the Medical Early Response & Intervention Therapy (MERIT) study, a 23-hospital cluster randomised controlled trial of the MET system, failed to show a difference in the aggregate incidence of unexpected cardiac arrests, unexpected deaths and unanticipated ICU admissions between MET and control hospitals [27]. The primary MERIT statistical analysis protocol was based on main effect analysis. Although it used the baseline incidence of adverse events as a covariate, it did not test for an interaction effect between treatment allocation and the baseline incidence of the study outcomes.

The baseline incidence of a specified outcome is an important hospital performance characteristic that may predict the magnitude of improvement in response to an intervention [28]. It varied greatly in MERIT. This raises questions regarding what might be the correct statistical approach to the analysis of the MERIT study results. Unfortunately, there is no established and widely accepted statistical approach that can be applied under these circumstances. Yet the choice of statistical method might well affect the interpretation of the study findings. In this setting, a comparison of statistical approaches might illustrate the impact of the choice of statistical technique on data interpretation and have useful implications for the analysis of similar studies in the future.

The aim of this study was to compare a regression-based approach with subgroup analysis, in terms of their results and empirical interpretation, when the baseline outcome is a continuous variable and a treatment effect modifier. Accordingly, we developed a methodology that incorporated regression-modelling techniques and applied it to the MERIT study data. With this methodology, we studied the relationship between the baseline incidence of adverse events and the change that occurred during preparation for and activation of METs. We then compared the findings identified with this approach with those obtained from conventional subgroup analysis.

Methods

The sample recruitment, sample size calculation, ethical approval, and randomisation scheme for the MERIT study have been described previously [27]. The primary outcome for the MERIT study was the aggregate incidence (adverse events divided by the number of eligible patients admitted to the hospital during the study period) of the three adverse events: 1) cardiac arrests without a pre-existing not-for-resuscitation (NFR) order, 2) unplanned ICU admissions, and 3) unexpected deaths (deaths without an NFR order) occurring in general wards. Secondary outcomes were the incidences of each individual adverse event. Data collection was conducted during a two-month baseline period. This was followed by a four-month standardised implementation period, during which education was delivered on the concepts and practice changes required for the introduction of the MET, and then by a six-month study activation period during which the MET system was operational [27]. The MERIT study was approved by the Ethics Committee of the University of New South Wales.
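For clarity, and using the per-1000-admissions scale in which the results below are reported, the incidence measure just described can be written as

\[
\text{incidence} = \frac{\text{number of adverse events during the period}}{\text{number of eligible admissions during the period}} \times 1000 .
\]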

Data collection was conducted in control hospitals during the same time periods. The conduct of the study was not publicised in the control hospitals, and the management and resuscitation committees of the control hospitals agreed that the operation of their cardiac arrest teams would continue unchanged during the study.

Statistical Methods

To test our hypothesis, we used the previously published MERIT data. We set the change in the incidence of adverse events as the dependent variable. We then tested for interaction effects (both linear and quadratic) between the baseline incidence of the primary and secondary outcomes and treatment allocation (MET versus control). We used an analytically weighted regression model, weighted by the number of admissions during the study period. This weighting is an extension of the weighted t-test often used in cluster randomised controlled trials [29–31]. This initial approach established the existence of significant linear and quadratic interaction effects for the primary outcome, unplanned ICU admissions and unexpected deaths. We therefore conducted the statistical analyses for control hospitals and MET hospitals separately, as described below.
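As a sketch of the model form described above (our notation, not the trial protocol's), the analytically weighted interaction model for hospital i can be written as

\[
\Delta_i = \beta_0 + \beta_1 B_i + \beta_2 B_i^2 + \beta_3 T_i + \beta_4 (B_i \times T_i) + \beta_5 (B_i^2 \times T_i) + \varepsilon_i ,
\]

where \(\Delta_i\) is the change in incidence from the baseline to the study period, \(B_i\) is the baseline incidence, \(T_i\) is an indicator for treatment allocation (1 = MET, 0 = control), and each hospital is weighted by its number of admissions; \(\beta_4\) and \(\beta_5\) are the linear and quadratic interaction terms that were tested.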

Statistical modelling given significant interaction effects

We analysed the primary and secondary outcomes in the same way. First, we fitted a quadratic model. If the model detected a significant quadratic effect, these results were presented together with those of the linear effect model for comparison. If no significant quadratic effect was detected, only a linear effect model was fitted and presented; this was done to minimize the problem of multicollinearity. This modelling showed that only unexpected cardiac arrests had negative linear slopes for both groups. We examined the statistical difference between these two slopes by testing the interaction effect between treatment group and the baseline incidence of unexpected cardiac arrests. We then addressed the issues of a) small sample size, b) lack of normality, and c) possible heteroscedasticity by comparing the analytically weighted regression modelling results obtained with two different methods: the ordinary least squares method and the heteroscedasticity-consistent covariance matrix estimation method (HC3) [31, 32]. We also assessed the potential confounding effects of teaching status and hospital location using a hybrid of forced entry and blocked backward elimination in a multivariate regression model: the baseline incidence (linear or quadratic) was forced into the model, the block comprised teaching status and hospital location, and the probability for exclusion of the block from the final model was set at 0.15. We analytically weighted the regression model by admission volume. We present the results as the predicted regression curves with 95%CI bands, together with the original data points for each hospital.
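As an illustration only (the analyses reported here were performed in Stata; the column names baseline, change and admissions below are hypothetical), a minimal sketch of the analytically weighted quadratic and linear regressions for one arm, comparing conventional and HC3 robust standard errors, might look like this in Python with statsmodels:

```python
# Illustrative sketch only: not the authors' Stata code; column names are hypothetical.
import pandas as pd
import statsmodels.api as sm


def fit_weighted_models(df: pd.DataFrame) -> dict:
    """Fit linear and quadratic analytically weighted models for one arm."""
    X_lin = sm.add_constant(df[["baseline"]])
    X_quad = sm.add_constant(pd.DataFrame({
        "baseline": df["baseline"],
        "baseline_sq": df["baseline"] ** 2,
    }))
    weights = df["admissions"]  # analytic weights: admissions during the period

    fits = {}
    for label, X in [("linear", X_lin), ("quadratic", X_quad)]:
        wls = sm.WLS(df["change"], X, weights=weights)
        fits[f"{label}_conventional"] = wls.fit()        # ordinary (non-robust) covariance
        fits[f"{label}_hc3"] = wls.fit(cov_type="HC3")   # heteroscedasticity-consistent (HC3)
    return fits


# Example with made-up hospital-level data (events per 1000 admissions):
df_met = pd.DataFrame({
    "baseline":   [3.2, 5.1, 7.8, 4.4, 6.0, 9.3],
    "change":     [-1.0, -2.4, -4.9, -1.8, -3.1, -6.2],
    "admissions": [8000, 12000, 6500, 9000, 11000, 7000],
})
print(fit_weighted_models(df_met)["quadratic_hc3"].summary())
```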

Conventional subgroup analysis given significant interaction effects

For the purpose of this analysis, we followed convention and split the sample into two groups: hospitals at or above the median baseline value and hospitals below the median baseline value for a given outcome. We then analysed each subgroup separately using the weighted t-test.
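For illustration (again with hypothetical column names, not the authors' Stata code), the median split and the weighted comparison might be sketched as follows; here the weighted t-test is expressed as a weighted regression of the change in incidence on a MET indicator, one common way of carrying out a cluster-level weighted comparison, although the authors' exact implementation may differ:

```python
# Illustrative sketch only: hypothetical column names, not the authors' Stata code.
import pandas as pd
import statsmodels.api as sm


def weighted_t_test(df: pd.DataFrame):
    """Compare MET vs control hospitals, weighting each hospital by admissions."""
    X = sm.add_constant(df[["met"]])  # met: 1 = MET hospital, 0 = control
    fit = sm.WLS(df["change"], X, weights=df["admissions"]).fit()
    return fit.params["met"], fit.pvalues["met"]


def subgroup_analysis(df: pd.DataFrame) -> dict:
    """Split hospitals at the median baseline incidence and test each subgroup."""
    cutoff = df["baseline"].median()
    subgroups = {
        "below_median": df[df["baseline"] < cutoff],
        "at_or_above_median": df[df["baseline"] >= cutoff],
    }
    return {name: weighted_t_test(sub) for name, sub in subgroups.items()}
```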

All the analyses were performed using Stata™ 9.2 [33].

Results

The baseline data for hospital and patient characteristics and the numbers for each event have been presented previously [27]. The baseline characteristics of the MET and control hospitals were similar.

Table 1 shows the results of the interaction effects between treatment (MET versus control) and the baseline incidence of the primary and secondary outcomes. There were significant quadratic interaction effects between treatment allocation and the baseline incidence of the primary outcome, unplanned ICU admissions and unexpected deaths. There was no significant interaction effect for unexpected cardiac arrests.

Table 1 Analytically weighted regression results testing the quadratic interaction effects between treatment and baseline incidence for primary outcome

Baseline incidence, study-period incidence, and the change in incidence from the baseline to the study period for the primary and secondary outcomes showed large variability (Table 2). The relationship between baseline incidence and its change for both primary and secondary outcomes is presented in Table 3. During the study period, hospital teaching status and location had no significant impact on this relationship (Table 4). Accordingly, we present the results from the models with baseline incidence only, and the predicted curves for these relationships are shown in Figure 1. In MET hospitals, 5 of the 8 Pearson correlation coefficients between the baseline, implementation and study periods across the outcomes were greater than 0.83, with the lowest value being 0.65. In MET hospitals, the greater the baseline incidence of the primary outcome, the greater its reduction during the study period. For every 10 additional baseline events per 1000 admissions, there was an additional reduction of 5.92 events (59.2%). Furthermore, the baseline incidence of the primary outcome accounted for 71% of the variance of this change; in comparison, in control hospitals, it accounted for 53% of the variance. Sensitivity analysis showed that even after removing the two hospitals with the highest baseline incidence for the primary outcome, the findings were not qualitatively different.
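To make the arithmetic explicit, the quoted figure corresponds to a regression slope of approximately 0.592 (consistent with the 0.59 reported in the abstract), so that

\[
\text{additional reduction} \approx 0.592 \times 10 = 5.92 \ \text{events per 1000 admissions (59.2\%)}.
\]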

Table 2 Incidence of primary and secondary outcomes in individual hospitals
Table 3 Weighted quadratic or linear regression models to predict changes in incidence of primary and secondary outcomes during the study period
Table 4 Weighted quadratic or linear regression models to predict changes in incidence of primary and secondary outcomes during the study period after adjusting for teaching status and location of the hospitals
Figure 1. Relationship between baseline incidence and change in incidence (y-axis) for all outcomes during the study period. The 95%CI band of the predicted curve and the original data points are shown; the size of each scatter point is proportional to the volume of admissions during the study period; baseline incidence and change in incidence are presented as events per 1000 admissions; only for the primary outcome and unexpected deaths in control hospitals do the figures suggest a quadratic relationship.

For both MET and control hospitals, the greater the baseline incidence of unexpected cardiac arrests, the greater its reduction during the study period. For every 10 additional baseline cardiac arrests per 1000 admissions, there was an additional reduction of 9.45 and 7.36 arrests for control and MET hospitals, respectively. Furthermore, the baseline incidence of unexpected cardiac arrests accounted for 78.2% and 85.1% of the variance of this change, respectively. The interaction effect test showed no statistically significant difference between the two slopes. The baseline incidence had no impact on the change in unplanned ICU admissions in control hospitals. In contrast, in MET hospitals, there was a significant linear relationship between the baseline incidence and its change during the study period: for every 10 additional baseline events per 1000 admissions, there was an additional reduction of 5.56 events (55.6%). Furthermore, the baseline incidence of unplanned ICU admissions accounted for 64.3% of the variance of this change; in comparison, in control hospitals, it accounted for only 11.2% of this variance.

The baseline incidence showed a significant quadratic relationship with the change in unexpected deaths in control hospitals and a linear relationship in MET hospitals. In MET hospitals, for every 10 additional baseline unexpected deaths per 1000 admissions, there was an additional reduction of 6.76 events (67.6%). The baseline incidence accounted for 81.9% of the variance of this change. In comparison, in control hospitals, it accounted for 92.3% of this variance.

The relationship between the baseline incidence and its change between the baseline and implementation periods for each outcome is shown in Table 5. In MET hospitals, for all outcomes, the greater the baseline incidence, the greater the reduction. In contrast, control hospitals showed a quadratic trend for the primary outcome and unexpected deaths. These relationships are shown in Figure 2.

Table 5 Weighted quadratic or linear regression models to predict the changes in incidence of primary and secondary outcomes during the implementation period
Figure 2. Relationship between baseline incidence and change in incidence (y-axis) for all outcomes during the implementation period. The 95%CI band of the predicted curve and the original data points are shown; the size of each scatter point is proportional to the volume of admissions during the implementation period; baseline incidence and change in incidence are presented as events per 1000 admissions; only for the primary outcome and unexpected deaths in control hospitals do the figures suggest a quadratic relationship.

Conventional subgroup analyses results

There was no significant interaction effect (p = 0.081) for a dichotomized modifier using the median baseline value as the cut-off. Logically, therefore, there was no need to conduct further subgroup analysis. Nevertheless, the following subgroup analysis is presented for demonstration purposes. Table 6 shows the results of the statistical analysis using the weighted t-test for the subgroups formed by splitting at the median baseline values. None of the outcomes showed a statistically significant difference between MET and control hospitals in either subgroup, except for a borderline significant effect for unexpected cardiac arrests in the subgroup with an incidence below the median baseline incidence.

Table 6 Subgroup analysis results using medians of the baseline incidences as the cut-off values

Discussion

We applied a statistical approach that incorporates regression-modelling techniques to the analysis of data from the MERIT study, a cluster randomized controlled trial of the implementation of medical emergency teams (METs). We compared this approach with one based on the conventional method of using a median cut-off value to separate groups and then performing subgroup analysis. We found that the approach incorporating regression modelling detected a significant effect of the baseline incidence of adverse events on the subsequent effect of introducing METs, whereas conventional subgroup analysis did not. The findings based on regression modelling suggest that the baseline incidence of cardiac arrests, unplanned ICU admissions, and unexpected deaths has a significant association with their subsequent reduction after the introduction of a MET. They also indicate that the magnitude of this reduction was proportional to the baseline incidence. Finally, they demonstrate that the choice of statistical analysis plan can significantly affect the interpretation of the outcome data obtained during MERIT [34].

The MERIT study was designed to compare the incidence of the primary and secondary outcomes during the study period and was powered according to the expected size of the therapeutic effect and the expected incidence and variance of the primary outcome [35]. Accordingly, all the analyses were designed and conducted on the basis of a main effect model with adjustment for baseline incidences, similar to an ANCOVA approach [36]. Thus, the baseline incidences of both primary and secondary outcomes were adjusted for in the model, but only as covariates; no interaction effects were tested. Such a strategy did not examine whether the treatment effect was influenced by the baseline incidence of the study outcomes, that is, whether the outcomes would be, to some extent, affected by the baseline incidence of adverse events (baseline hospital performance). Such dependence is plausible for many interventions, where the relative risk reduction may vary with the baseline incidence of the primary outcome. In the MERIT trial, we could address this issue by empirically testing for an interaction effect between treatment allocation and the baseline incidence of the study outcomes. We tested for both linear and quadratic interaction effects in order to avoid potential misspecification of true interactions. Misspecification refers to situations in which significant interaction effects and/or higher order effects are omitted from a model; in this case, a simplified model with only a main effect and/or linear effect is a simple but possibly inaccurate reflection of the relationship under investigation. The rationale for introducing a higher order interaction effect was that there was an insufficient theoretical basis to believe that any interaction effect should be linear. This is analogous to introducing a quadratic effect for continuous variables in a main effect model.
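To make the distinction concrete (our notation, consistent with the model sketched in the Methods), the original analysis corresponds approximately to a main effect model with baseline adjustment of the form

\[
Y_i = \beta_0 + \beta_1 B_i + \beta_2 T_i + \varepsilon_i ,
\]

where \(Y_i\) is the study-period incidence, \(B_i\) the baseline incidence and \(T_i\) the treatment indicator. The misspecification concern arises when terms such as \(\beta_3 (B_i \times T_i)\) or \(\beta_4 (B_i^2 \times T_i)\) are truly non-zero but omitted; the interaction tests described above check precisely whether such terms are needed.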

Given the above considerations, we set out to test whether the choice of statistical technique affected our ability to detect the impact of the baseline incidence of adverse events on the effect of introducing MET systems. We used regression modelling techniques to explore the relationship between the baseline incidence of adverse events and its change. The main advantage of using the change in the outcome as the endpoint is its intuitive interpretation: many system and policy initiatives aim to effect change, and the changes produced by an intervention are often the outcomes we want to understand. Also, given that the distribution of baseline incidences was balanced and the before-after correlation was high for the primary endpoint, the efficiency gain of using an ANCOVA approach may have been negligible [36]. We assessed MET and control hospitals separately after we found significant interaction effects, and found that, for the primary outcome, unplanned ICU admissions and unexpected deaths, the baseline incidence and its subsequent change were related in a different way in MET hospitals compared with control hospitals. In MET hospitals, the relationship between the baseline incidence and its change was linear. In control hospitals, this relationship was quadratic for the primary outcome and for unexpected deaths, and absent for unplanned ICU admissions. We also found that, for MET hospitals, these observations started to apply during the education and training period, well before the MET system had been activated. A similar education effect on outcome after the introduction of a MET system has been suggested previously [19].

The MERIT main-effect analysis, an unconditionally valid analysis, showed that the benefit of the MET may be small on average. Our conditional analysis showed that the benefit may still be large among those hospitals with high baseline incidences of the study outcomes. The observations derived from statistical modelling suggest a 'proportionality effect': the relative change in the incidence of an outcome was similar across all levels of baseline incidence, but the absolute change became greater as the baseline incidence increased. This phenomenon has been described for other interventions [28]. For example, a recent study of the impact of quality of care interventions in 3000 U.S. hospitals found that, for 16 of the 17 process-of-care measures, hospitals with a low level of performance at baseline had the greatest improvement [28].
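A minimal way to express this 'proportionality effect' (our notation): if each hospital reduces its incidence by a roughly constant relative fraction \(r\), then a hospital with baseline incidence \(B\) shows

\[
\text{absolute reduction} = rB, \qquad \text{relative reduction} = \frac{rB}{B} = r,
\]

so the absolute change grows with \(B\) while the relative change stays constant.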

Our results are consistent with the notion that the effectiveness of interventions in complex organizations may be dependent on specific local characteristics. They also suggest that baseline characteristics of individual hospitals should be explored when assessing the possible effectiveness of interventions directed at system change. The strong predictive effect and the magnitude of the variance of improvement explained by the baseline incidence of outcome variables also suggest that such outcomes may be useful indicators of quality of care. They are relatively easy to measure, define, and report upon in a timely fashion.

Regression towards the mean might, in principle, explain the baseline effect seen in MET hospitals with our statistical modelling approach. Regression towards the mean, sometimes called the regression effect, is a statistical principle concerning the relationship between two linked measurements, x and y: if x is above its mean, then the associated y is likely to be closer to its mean than x was. The conventional way to assess the MERIT data for the presence of this effect would be to compare the pattern of results in MET hospitals with those in control hospitals [37]. The difference in the patterns shown in Figures 1 and 2 between MET and control hospitals suggests that the distribution of outcomes is unlikely to be explained by such an effect. In addition, the range of regression coefficients for all outcomes increased in magnitude during the study period. Furthermore, in MET hospitals, the correlations between the baseline, implementation and study periods were high; such high correlations make it less likely that the results are due to regression towards the mean [38]. Moreover, because we adopted a stratified (by teaching status and geographic location: urban versus metropolitan) and blocked randomisation scheme, the distributions of baseline incidences were balanced between MET and control hospitals [27]. Stratified randomisation and baseline balance also make it less likely that a regression towards the mean effect explains our findings.
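For reference, under a standard bivariate model (for example, bivariate normality; notation ours), the regression towards the mean effect can be quantified as

\[
E[y \mid x] = \mu_y + \rho \, \frac{\sigma_y}{\sigma_x} (x - \mu_x),
\]

so that when the correlation \(\rho\) between the two linked measurements is less than 1, the expected value of \(y\) lies closer to its mean (in standard deviation units) than \(x\) does; the high before-after correlations noted above therefore leave relatively little room for this effect.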

First, our statistical approach avoided the possibility of misspecification by incorporating a higher order interaction effect. Second, by not splitting the sample at a median cut-off value, our approach reduced the problems associated with loss of statistical power. It may be possible to carry out a sufficiently powered conventional subgroup analysis in a very large randomized controlled trial. However, the MERIT study had only 25% power to detect a 30% difference in the primary outcome, and more than 100 hospitals would have been needed to give it sufficient power. These features of the MERIT study make it unlikely that one could detect a statistical difference when comparing the 5 to 6 hospitals within each subgroup of the two arms. A further aspect of our modelling approach is that, although we analysed MET and control hospitals separately, our conclusions are based on comparing the same results between the two groups: this approach compared the relationship between baseline performance and study outcome across the control and treatment groups and used all the data to draw conclusions, whereas subgroup analysis assessed treatment effects within the lower and higher baseline performance groups separately. This aspect is a further critical factor making it less likely that the different patterns seen were due to the regression-towards-the-mean phenomenon. The advantage of this regression approach over subgroup analysis may be that it allows for smooth linear or nonlinear relationships with baseline variables; such smooth relationships may be plausible in some settings, such as the MERIT study. However, one potential disadvantage of our approach is that, owing to the existence of a significant quadratic interaction effect, it may not be feasible to provide a single treatment effect estimate (such as an absolute risk difference) as would be the case with a conventional approach. Furthermore, the existence of a non-linear interaction effect means that the incremental value of the treatment effect depends on the value of the baseline incidence. The baseline period was short compared with the implementation and study periods, and this may have influenced our findings. However, we found a strong association between the baseline incidence and its change, and the strength of this association provides evidence that the baseline incidence had sufficient predictive validity.

Our analysis was performed post hoc. Furthermore, the number of hospitals included in the study was relatively small. However, sensitivity analysis showed that even after removing the two hospitals with the highest baseline incidence for the primary outcome, the findings were not qualitatively different. Nonetheless, our results should be considered preliminary and they should be confirmed in future studies with similar design.

Conclusions

In summary, we used regression modelling to test the hypothesis that the baseline incidence of cardiac arrests, unplanned ICU admissions, and unexpected deaths has a significant influence on the change in their incidence that could be achieved through the introduction of a MET. Our findings support this hypothesis. We also found that the magnitude of this reduction was proportional to the baseline incidence of adverse events such that the higher the baseline incidence, the greater the absolute reduction. These differences were consistent with the initial findings of the significant interaction effects. We found that these observations were in contrast to those obtained using conventional sub-group analysis. They suggest that the choice of statistical analysis can significantly affect the interpretation of the findings of the MERIT study. They also raise concerns about the robustness of conventional sub-group analysis in similar settings.

Abbreviations

MET: Medical Emergency Team

ICU: Intensive Care Unit

MERIT: Medical Early Response Intervention & Therapy

ANCOVA: Analysis of Covariance

References

1. Donaldson MS, Kohn LT, Corrigan J: To err is human: building a safer health system. 2000, Washington, D.C.: National Academy Press

2. Institute of Medicine: Crossing the quality chasm: A new health system for the 21st century. 2001, Washington, D.C.: National Academies Press

3. Aspden P, Institute of Medicine (Committee on the Work Environment for Nurses and Patient Safety): Patient safety: achieving a new standard for care. 2004, Washington, D.C.: National Academies Press

4. Byers JF, White SV: Patient safety: principles and practice. 2004, New York, NY: Springer

5. Child AP, Institute of Medicine (Committee on the Work Environment for Nurses and Patient Safety): Keeping patients safe: transforming the work environment of nurses. 2004, Washington, D.C.: National Academies Press

6. United States, Congress, House, Committee on Energy and Commerce: Patient Safety and Quality Improvement Act report (to accompany H.R. 663) (including cost estimate of the Congressional Budget Office). 2003, Washington, D.C.: U.S. G.P.O.

7. Argyris C: On organizational learning. 1999, Malden, Mass: Blackwell Business, 2

8. Jackson MC: Systems thinking: creative holism for managers. 2002, Chichester: Wiley

9. Green LW, Kreuter MW: Health program planning: an educational and ecological approach. 2005, New York: McGraw-Hill, 4

10. Windsor RA: Evaluation of health promotion, health education, and disease prevention programs. 2004, Boston: McGraw-Hill, 3

11. Hillman K, Chen J, Brown D: A Clinical Model for Health Services Research - The Medical Emergency Team. J Crit Care. 2003, 18 (3): 195-199. 10.1016/j.jcrc.2003.08.011.

12. Hillman K, Parr M, Flabouris A, Bishop G, Stewart A: Redefining in-hospital resuscitation: the concept of the medical emergency team. Resuscitation. 2001, 48 (2): 105-110. 10.1016/S0300-9572(00)00334-8.

13. Braithwaite RS, DeVita MA, Mahidhara R, Simmons RL, Stuart S, Foraida M, Medical Emergency Response Improvement Team (MERIT): Use of medical emergency team (MET) responses to detect medical errors. Quality and Safety in Health Care. 2004, 13: 255-259. 10.1136/qshc.2003.009324.

14. Lee A, Bishop G, Hillman K, Daffurn K: The Medical Emergency Team. Anaesth Intensive Care. 1995, 23: 183-186.

15. Audit Commission: Critical to success: the place of efficient and effective critical care services within the acute hospitals. 1999, London

16. Department of Health (UK): Critical Care Outreach. 2003, Department of Health (UK)

17. Institute of Health Improvement USA: Getting Started Kit: Rapid Response Teams. http://www.ihi.org/ihi

18. Ball C, Kirkby M, Williams S: Effect of the critical care outreach team on patient survival to discharge from hospital and readmission to critical care: non-randomised population based study. BMJ. 2003, 327 (7422): 1014. 10.1136/bmj.327.7422.1014.

19. Bellomo R, Goldsmith D, Uchino S, Buckmaster J, Hart GK, Opdam H, Silvester W, Doolan L, Gutteridge G: A prospective before-and-after trial of a medical emergency team. Med J Aust. 2003, 179 (6): 283-287.

20. Braithwaite RS, DeVita M, Stuart S, Foraida M, Simmons RL: Can cardiac arrests be prevented in hospitalised patients? Results of a medical crisis response team (Condition C). Journal of General Internal Medicine. 2003, 18 (Supplement 1): 222-223.

21. Buist MD, Moore GE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV: Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: preliminary study. BMJ. 2002, 324 (7334): 387-390. 10.1136/bmj.324.7334.387.

22. DeVita MA, Schaefer J, Lutz J, Dongilli T, Wang H: Improving medical crisis team performance. Crit Care Med. 2004, 32 (2 Suppl): S61-S65. 10.1097/01.CCM.0000110872.86812.1C.

23. DeVita MA, Braithwaite RS, Mahidhara R, Stuart S, Foraida M, Simmons RL, Medical Emergency Response Improvement Team (MERIT): Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care. 2004, 13 (4): 251-254. 10.1136/qshc.2003.006585.

24. Kenward G, Castle N, Hodgetts TJ, Shaikh L: Evaluation of a Medical Emergency Team one year after implementation. Resuscitation. 2004, 61: 257-263. 10.1016/j.resuscitation.2004.01.021.

25. Leary T, Ridley S: Impact of an outreach team on re-admissions to a critical care unit. Anaesthesia. 2003, 58 (4): 328-332. 10.1046/j.1365-2044.2003.03077.x.

26. Pittard AJ: Out of our reach? Assessing the impact of introducing a critical care outreach service. Anaesthesia. 2003, 58 (9): 882-885. 10.1046/j.1365-2044.2003.03331.x.

27. The MERIT investigators: Introduction of medical emergency team (MET) system - a cluster-randomised controlled trial. Lancet. 2005, 365: 2091-2097. 10.1016/S0140-6736(05)66733-5.

28. Williams S, Schmaltz SP, Morton DJ, Koss RG, Loeb JM: Quality of Care in U.S. Hospitals as Reflected by Standardized Measures, 2002-2004. The New England Journal of Medicine. 2005, 353: 255-264. 10.1056/NEJMsa043778.

29. Campbell MK, Mollison J, Steen N, Grimshaw JM, Eccles M: Analysis of cluster randomized trials in primary care: a practical approach. Family Practice. 2000, 17 (2): 192-196. 10.1093/fampra/17.2.192.

30. Kerry M, Bland JM: Analysis of a trial randomized in clusters. BMJ. 1998, 316: 54.

31. Long J, Ervin L: Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model. The American Statistician. 2000, 54: 217-224. 10.2307/2685594.

32. MacKinnon JG, White H: Some heteroskedasticity consistent covariance matrix estimators with improved finite sample properties. Journal of Econometrics. 1985, 29: 305-325. 10.1016/0304-4076(85)90158-7.

33. StataCorp: Stata statistical software: Release 8.2. 2004, College Station, Texas: Stata Corporation

34. Winters BD, Pham J, Pronovost PJ: Rapid Response Teams - Walk, Don't Run. JAMA. 2006, 296 (13): 1646-1647. 10.1001/jama.296.13.1645.

35. Kerry M, Bland JM: Statistical notes: Sample size in cluster randomization. BMJ. 1998, 316: 549.

36. Vickers A, Altman D: Statistical notes: Analysing controlled trials with baseline and follow up measurements. BMJ. 2001, 323 (7321): 1123-1124. 10.1136/bmj.323.7321.1123.

37. Morton V, Torgerson DJ: Effect of regression to the mean on decision making in health care. BMJ. 2003, 326 (7398): 1083-1084. 10.1136/bmj.326.7398.1083.

38. Bland JM, Altman DG: Statistical Notes: Regression towards the mean. BMJ. 1994, 308: 1499.


Acknowledgements

The MERIT study is a collaboration of the Simpson Centre for Health Services Research and the Australian and New Zealand Intensive Care Society Clinical Trials Group. The study was funded by grants from the National Health and Medical Research Council of Australia, the Australian Council for Safety and Quality in Health Care and the Australian and New Zealand Intensive Care Foundation as part of the MERIT study.

Management Committee: Ken Hillman (Study Chair), Simon Finfer (Study Vice-chair), Rinaldo Bellomo, Daniel Brown, Michelle Cretikos, Jack Chen, Gordon Doig, Arthas Flabouris and David Sanchez.

Steering Committee: Ken Hillman (Chair), Jennifer Bartlett, Rinaldo Bellomo, Daniel Brown, Michael Buist, Jack Chen, Michelle Cretikos, Michael Corkeron, Gordon Doig, Simon Finfer, Arthas Flabouris, Michael Parr, Sandra Peake and John Santamaria.

Site Investigators and Research Co-ordinators (in alphabetical order):

Australian Capital Territory:

Calvary Hospital - Marielle Ruigrok, Margaret Willshire

Canberra Hospital -David Elliott, John Gowardman, Imogen Mitchell, Carolyn Paini, Gillian Turner

New South Wales:

Broken Hill Hospital - Coral Bennett, Linda Lynott, Mathew Oliver, Linda Peel, Sittampalam Ragavan, Russell Schedlich

Gosford Hospital - John Albury, Sean Kelly

John Hunter Hospital - Ken Havill, Jane O'Brien

Prince of Wales Hospital - Harriet Adamson, Yahya Shehabi

Royal North Shore Hospital - Simeon Dale, Simon Finfer

Wollongong Hospital - Sundaram Rachakonda, Kathryn Rhodes, E. Grant Simmons

Wyong Hospital - John Albury, Sean Kelly

Queensland:

Mackay Hospital - Kathryn Crane, Judy Struik

Redcliffe Hospital - Matthys Campher, Raymond Johnson, Sharon Ragau, Neil Widdicombe

Redland Hospital - Susan Carney, David Miller

Townsville Hospital - Michelle Barrett, Michael Corkeron, Sue Walters

South Australia:

Flinders Hospital - Tamara Hunt, Gerard O'Callaghan

Queen Elizabeth Hospital - Jonathan Foote, Sandra Peake

Repatriation General Hospital - Gerard O'Callaghan, Vicki Robb

Royal Adelaide Hospital - Marianne Chapman, Arthas Flabouris, Deborah Herewane, Sandy Jansen

Victoria:

Bendigo Hospital - John Edington, Kathleen Payne

Box Hill Hospital - David Ernest, Angela Hamilton

Geelong Hospital - David Green, Jill Mann, Gary Prisco

Monash Hospital - Laura Lister, Ramesh Nagappan

St. Vincent's Hospital - Jenny Holmes, John Santamaria

Wangaratta Hospital - Chris Giles, Debbie Hobijn

Author information


Corresponding author

Correspondence to Jack Chen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JC contributed to the conceptualisation, all data analyses and prepared the first draft of the paper. AF, RB, KH and SF contributed to the conceptualisation and critical writing of the paper. All authors approved the final draft of the paper and have the access to the data used in generating the paper. The draft was approved by the ANZICS Clinical Trials Group Executive Committee.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chen, J., Flabouris, A., Bellomo, R. et al. Baseline hospital performance and the impact of medical emergency teams: Modelling vs. conventional subgroup analysis. Trials 10, 117 (2009). https://doi.org/10.1186/1745-6215-10-117
