Alcohol email assessment and feedback study dismantling effectiveness for university students (AMADEUS-1): study protocol for a randomized controlled trial

Abstract

Background

Alcohol causes huge problems for population health and for society, which require interventions with individuals as well as populations to prevent and reduce harms. Brief interventions can be effective and increasingly take advantage of the internet to reach high-risk groups such as students. The research literature on the effectiveness of online interventions is developing rapidly and is confronted by methodological challenges common to other areas of e-health, including attrition, assessment reactivity and the design of control conditions.

Methods/design

The study aim is to evaluate the effectiveness of a brief online intervention, employing a randomized controlled trial (RCT) design that takes account of baseline assessment reactivity and other possible effects of the research process. Outcomes will be evaluated after 3 months, both among the student population as a whole (including a randomized no-contact control group) and among risky drinkers randomized to brief assessment and feedback (routine practice) or to brief assessment only. A three-arm parallel groups trial will also allow exploration of the magnitude of the feedback and assessment component effects. The trial will be undertaken simultaneously in two universities, randomizing approximately 15,300 students, all of whom will be blinded to trial participation. All participants will be offered the routine practice intervention at the end of the study.

Discussion

This trial informs the development of routine service delivery in Swedish universities and, more broadly, contributes a new approach to the study of the effectiveness of online interventions in student populations, with relevance to behaviors other than alcohol consumption. The use of blinding and deception in this study raises ethical issues that warrant further attention.

Trial registration

ISRCTN28328154

Background

Alcohol causes huge problems, both for population health and for society more broadly. It is responsible for approximately 4% of the global burden of disease, similar to tobacco, with a greater impact in high-income countries and among men, for example accounting for 11% of all male deaths in the WHO European Region in 2004 [1]. Population-level interventions that seek to influence the price, availability and cultural acceptability of hazardous and harmful drinking may be complemented by individual-level brief interventions delivered in health systems and elsewhere [2]. Brief interventions are typically offered opportunistically by non-specialists in routine contacts with patients attending healthcare services for other reasons and take only a few minutes to deliver [2, 3].

Evidence for the effectiveness of brief interventions is based on randomized controlled trials and systematic reviews, which have consistently identified small effects on drinking behavior and related problems [4–6]. Although large-scale implementation programs are relatively recent, there have been longstanding difficulties in persuading generic health and welfare practitioners to embrace this work in routine practice [7]. The widespread use of computers and the internet offers other ways to reach large numbers of hazardous and harmful drinkers, overcoming implementation problems due to practitioner reluctance to discuss drinking [7]. Online interventions may also be cheaper to implement and more acceptable to those targeted, though these features will not be important to public health unless interventions can also be demonstrated to be effective [8, 9].

The research literature in this area is at an early stage of development but is evolving quickly. A number of recent systematic reviews provide preliminary evidence of effectiveness for a range of computerized interventions [10, 11]. However, there are also examples of apparently well designed interventions (for example, [12]) not being found to be effective [13]. That highly naturalistic study was undertaken in a web-browsing population, and its interpretation is complicated by the many unresolved methodological problems impeding progress in the evaluation of the effectiveness of online interventions [14]. Careful selection of study populations has been used successfully to demonstrate effectiveness in general population samples [15, 16].

University student populations are very prominent in the existing literature, having been relatively extensively studied compared to non-students [11]. Heavy drinking among university students is a seemingly unremarkable and age-old phenomenon that is now globalized [17]. Given the well established role of heavy drinking in student cultures and the extent of internet use among students it is perhaps unsurprising that trials of internet interventions to promote safer drinking have been undertaken [18–20]. There are now effectiveness reviews of normative feedback interventions delivered in various ways among students [21] and of computerized interventions for this population [22].

Most previous student studies have required participants to attend laboratory or other controlled settings, rather than allowing access to interventions using their own computers [10, 11, 23]. Few of the published studies to date have described projects that have made comprehensive use of electronic media, by recruiting large numbers of participants via email, or allowed participants to engage with interventions naturalistically, when, where and how preferred by the participants themselves. Thus, most of the existing research has been undertaken in relatively artificial efficacy conditions, not closely resembling how interventions found to be effective would be routinely delivered [23].

The key exception to this is the research program on electronic screening and brief intervention (e-SBI) by Kypri and colleagues in Australia and New Zealand. Following earlier studies which recruited participants in student healthcare services [24–26], THRIVE was a large-scale effectiveness trial which invited 13,000 students to participate and subsequently randomized 2,435 risky drinkers to intervention or control conditions, with 2,050 providing follow-up data after 1 month, 6 months or both [27]. After 1 month there was a 17% reduction in alcohol consumption in the assessment and feedback group compared to a non-intervention assessment-only group; this had attenuated to an 11% difference after 6 months [27]. Currently under way in New Zealand are the first multisite large-scale effectiveness trials [8]. There remain many important knowledge gaps concerning the effectiveness of online interventions, including basic questions such as whether effectiveness is established and what effect sizes may be expected, under what conditions, for which populations and with which specific content (for example, is feedback required, and if so, should it be normative?), as well as for different delivery models and across cultures.

Like their peers in other countries, Swedish students also drink heavily. One recent study at Linköping University found that heavy episodic (or binge) drinking was normative, being reported by majorities of both sexes and almost three-quarters of all males [28]. The e-SBI model developed by the Lifestyle Intervention Research group at Linköping University was originally conceived within an effectiveness framework [29]. It is based upon an initial email to students from the student healthcare service, providing a link to a website for assessment and feedback. The core e-SBI content involves feedback on recommended limits of alcohol consumption and normative comparisons of drinking with Swedish students of the same age and sex. Previous research by this group has shown that this e-SBI model is a feasible way of reaching large numbers of students in ways permitting the conduct of effectiveness trials to evaluate detailed intervention content [29]. The first trial of this intervention found no differences between brief and more extensive normative feedback content, though attrition problematically reduced the available sample size [30]. Building upon that initial trial, we undertook an unusually large pilot study, with outcome data provided by 2,400 students, to prepare for the trial described here. A key aim was to improve study retention, and the implications of this pilot work for the preparation and design of the present study are described in detail in the Methods section (a report is also being prepared for publication).

Among the issues that have presented difficulties in developing this area of research are uncertainties about the most appropriate control groups, due largely to overlap and similarities between feedback and assessment content [31]. Alcohol researchers have long been interested in the possibility that assessment or screening of alcohol consumption per se can reduce drinking [4, 32, 33]. It has not been unusual to observe reductions in drinking of the order of 20% in non-intervention control groups at later follow-ups [34, 35]. These uncertainties have been made explicit by randomized controlled trials demonstrating apparent effects of assessment procedures, including when undertaken online [36] and when limited to screening alone [37]. In the latter case, pen-and-paper completion of the Alcohol Use Disorders Identification Test (AUDIT), a ten-item alcohol screening questionnaire [38], by itself led to reduced self-reported drinking. While this could have been due to the specific effects of answering questions, it could also have been a Hawthorne effect in response to having one’s drinking studied [39], which this group may have inferred while the control group could not have done [37]. This entire literature is necessarily based upon self-reported drinking outcomes. Self-reports have been found to be reliable in alcohol treatment contexts [40] but have been little studied in brief intervention contexts [41]. Assessment effects have also been conceptualized and studied in different ways in a range of disciplines and fields of research, and these have identified effects upon objectively ascertained outcomes in randomized trials [42–44].

In a recent systematic review of randomized evaluations of assessment reactivity in brief intervention trials, effects were found to be of somewhat smaller magnitude than typical brief intervention effects, hovering around the threshold for statistical significance and lying within the lower end of the confidence intervals for meta-analytic estimates of effects [45]. When attention was restricted to university student populations, however, stronger effects were apparent, similar in magnitude to those of brief interventions themselves [45]. If simply answering questions about one’s drinking does subsequently lead to reduced drinking, large-scale implementation of simple screening surveys as interventions among university students might have considerable public health potential [29]. This also suggests a need to demonstrate that more elaborate online interventions provide additional effects, are acceptable to those targeted, and are more cost-effective than less elaborate interventions. There are as yet sparse data available on costs and cost-effectiveness [46].

Making unbiased comparisons to guide decision making about alternative courses of action is the fundamental business of trials, and assessment effects may introduce bias in studies of behavior change in ways that are not widely appreciated [47]. A better understanding of the effects of assessment per se on drinking behavior should inform thinking about both intervention and control content. Other aspects of taking part in trials, apart from assessment reactivity, may also influence both research participation dynamics and the behaviors being studied, and this possibility warrants dedicated studies [48]. For example, precisely what we expect participants to do in trials may well have implications for retention [49].

The overall aim here is to evaluate the effectiveness of e-SBI, employing an RCT design that takes account of baseline assessment reactivity and other possible effects of the research process [48]. Alcohol trials without any form of baseline assessment are rare and this situation hampers evaluation of the true effects of brief alcohol interventions whose content includes assessment [33, 45]. Even more rare are studies that eschew typical trial recruitment processes due to concerns about interference with study aims, though they do exist (for example, [50]) and such Zelen designs [51] have been widely used in other areas [52]. The present study design will allow exploration of the magnitude of the feedback and assessment component effects, and is specifically designed to constrain possible effects of research participation on the control group, in testing e-SBI effectiveness.

Methods/design

Design

This is a three-arm parallel groups trial in which routine provision of e-SBI (group 1) is compared with assessment-only (group 2) and no contact control (group 3) study conditions. Groups 1 and 2 will complete identical assessments, the sole difference between them being that group 1 will receive normative feedback as usual whereas group 2 will not. Group 3 will only be contacted after 3 months, at which time both groups 1 and 2 also complete outcome data collection.

Hypotheses

There are four main hypotheses, as follows:

1. Drinking in groups 1 and 3 will differ, with group 1 drinking less, providing a test of the effects of universal e-SBI provision in an unselected population of university students.

2. Drinking in groups 2 and 3 will differ, with group 2 drinking less, providing a test of the effects of assessment-only in an unselected population of university students.

3. Drinking in groups 1 and 2 will differ, with group 1 drinking less, providing a test of the effects of adding feedback to assessment-only among those who were risky drinkers participating at study entry.

4. Drinking in groups 1 and 2 will differ, with group 1 drinking less, providing a test of the effects of adding feedback to assessment-only in an unselected population of university students.

Three of these hypotheses (1, 2 and 4) concern possible effects in unselected populations, that is, without reference either to drinking behavior or to earlier study participation. It is further hypothesized that these effects will be present among those whose drinking is determined to be potentially hazardous, that is, excluding non-drinkers and very infrequent drinkers (see Outcomes evaluation).

Participants and setting

The study will be performed simultaneously at two different universities in Sweden, one in the north of the country, Luleå, and one in the south, Linköping. These institutions have been selected on the basis of previously conducted research involving the local student healthcare services, who are responsible for alcohol interventions [29, 30]. All students at both universities during the autumn 2011 term (that is, in terms 1, 3 and 5) will be included, and all will be offered routine e-SBI provision at the end of the study within this term, in addition to the brief lifestyle feedback provided by the study (see below). Students in a single year group at one of the universities participated in a pilot study a year earlier (see below).

Randomization and other study procedures

Email addresses will be collected from the official registers of both universities in three separate data files, one for each year group, approximately 15,300 addresses in total. Sequence generation involved each participant being given a random number between 0.0 and 1.0, with two decimals, in OpenOffice Calc 3.1. All participants have a 1/3 probability of allocation to any particular study condition. Randomization is fully computerized, does not employ any strata or blocks within each year at each university, and is not possible to subvert, as all subsequent study processes are fully automated. The initial email to groups 1 and 2 is sent from the student healthcare services as usual, with the restriction that feedback is not offered to group 2. No contact is made at this point with group 3.
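
As a concrete illustration of this allocation procedure, the following is a minimal Python sketch of simple randomization with equal 1/3 allocation and no strata or blocks; it is not the study's actual OpenOffice Calc procedure, and the email addresses and seed shown are illustrative assumptions.

```python
import random

def allocate(email_addresses, seed=None):
    """Give each address a uniform random number (rounded to 2 decimals, as in
    the protocol) and map it to one of three approximately equally likely
    study arms, with no strata or blocks."""
    rng = random.Random(seed)
    allocation = []
    for email in email_addresses:
        u = round(rng.random(), 2)      # random number between 0.00 and 1.00
        if u < 1 / 3:
            group = 1                   # e-SBI: assessment plus feedback
        elif u < 2 / 3:
            group = 2                   # assessment only
        else:
            group = 3                   # no contact until the 3-month follow-up
        allocation.append((email, u, group))
    return allocation

# Illustrative use with hypothetical addresses (one data file per year group
# would be processed in the same way).
emails = ["student1@student.example.se", "student2@student.example.se"]
for email, u, group in allocate(emails, seed=2011):
    print(email, u, group)
```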

Then, 3 months later, all three groups are sent an identical email by the Swedish Principal Investigator (PB). This email makes no reference to alcohol or to the previous email from the student healthcare services, and comprises an invitation to participate in an online lifestyle survey with a 15-item questionnaire. Study drinking outcomes are derived from three questions in this survey. There are then two reminders containing a link to the questionnaire, followed by a third reminder with three questions (one on alcohol) embedded in the body of the email. Brief lifestyle feedback is provided to those completing the lifestyle survey. See Figure 1 for an overview of the process.

Figure 1. Flowchart for the AMADEUS-1 trial.

Intervention content

Intervention delivery begins with receipt of an email. Every student has a personal university email address that they are obliged to use; all official mail is delivered to this address. The third author (MB) sends the initial emails to individual students on behalf of the student healthcare centers. Groups 1 and 2 both complete an alcohol assessment instrument comprising ten items. Group 1 then receives feedback, whereas group 2 is simply thanked for participating and offered a link to a commonly used alcohol website without content understood to be effective in assisting behavior change. The hyperlink contained within the body of the email is no longer valid after completion of the questionnaire, when the responses are stored in the study database. This prevents multiple responses, while allowing the questionnaire to be completed in more than one session if required.
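
The single-use behavior of that personal hyperlink could work roughly as sketched below: partial answers can be stored across sessions, but the link is refused once the completed questionnaire has been written to the database. This is an assumption-laden illustration only; the token scheme, function names and placeholder URL are not taken from the study's actual system.

```python
import secrets
from datetime import datetime, timezone

# In-memory stand-in for the study database; purely illustrative.
tokens = {}  # token -> {"email": ..., "completed": bool, "partial": dict}

def issue_link(email):
    """Create the personal hyperlink embedded in the initial email."""
    token = secrets.token_urlsafe(16)
    tokens[token] = {"email": email, "completed": False, "partial": {}}
    return f"https://questionnaire.example.invalid/?t={token}"  # placeholder URL

def save_responses(token, answers, final=False):
    """Store answers; allow completion over several sessions, but refuse any
    further use of the link once the questionnaire has been submitted."""
    record = tokens.get(token)
    if record is None or record["completed"]:
        return "link no longer valid"
    record["partial"].update(answers)
    if final:
        record["completed"] = True
        record["submitted_at"] = datetime.now(timezone.utc).isoformat()
    return "saved"
```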

Group 1 receives feedback immediately upon completion of the assessment, consisting of three statements summarizing their weekly consumption, their frequency of heavy episodic drinking and their highest blood alcohol concentration over the last 4 weeks, comparing these drinking patterns against the safe drinking limits established by the Swedish National Institute of Public Health [30]. This is followed by comprehensive normative feedback describing participants’ alcohol use compared to their peers in Swedish universities and, if applicable, personalized advice concerning the importance of reducing any unhealthy level or pattern of consumption. The feedback can be printed out by the student. A demonstration version of the assessment and feedback intervention can be viewed at http://demo.livsstilstest.nu. All participants will be offered routine practice alcohol e-SBI at the end of the study within the same term.

Blinding

Groups 1 and 2 are unaware that they are participating in a research study when they respond to the initial emails. Both groups are given to understand that these emails are provided as routine practice by the student healthcare centers to encourage students to consider their drinking. Thus, all three groups are unaware they are participating in an intervention study and that they have been randomized. Subsequently at follow-up, no explanation of the true nature of the study is given to students. Instead they are invited to participate in a seemingly unrelated cross-sectional lifestyle survey without any particular focus on drinking behavior (see Attrition). As all study procedures are automated, the research team has no direct contact with study participants. The use of blinding and deception in this trial raises ethical issues (see Discussion). The study was approved by the Regional Ethical Committee in Linköping, Sweden (number: 2010/291-31 on 12 October 2010).

Sample size

The marginal costs involved in increasing the numbers to whom e-SBIs are delivered are negligible. Therefore even very small effects are likely to be cost-effective above the basic threshold cost involved in providing the service. These observations also apply to undertaking research to evaluate effectiveness and suggest that the sample size should be as large as possible in order to detect very small effects. The pilot study indicated that any between-group differences are likely to be very small, and it is a moot point whether this study should be even larger. Our power calculation assumed no difference between the two universities and a follow-up rate of 50% (see Attrition below; 7,650 of a study population of approximately 15,300). For three groups of 2,550, this yields approximately 90% statistical power to detect an effect size of 0.09 standard deviations between any two contiguous pairs (group 1 versus group 2, or group 2 versus group 3) or 0.18 between group 1 and group 3. Alternatively, we have in excess of 80% power to detect effects of 0.08 and 0.16 standard deviations, respectively.
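
The quoted detectable differences can be checked with a standard two-group calculation, as in the short sketch below; a two-sided alpha of 0.05 is assumed here, since the protocol does not state the significance level.

```python
from math import sqrt
from scipy.stats import norm

def detectable_effect(n_per_group, power, alpha=0.05):
    """Smallest standardized mean difference detectable when comparing two
    arms of n_per_group participants each (two-sided test assumed)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * sqrt(2 / n_per_group)

for power in (0.90, 0.80):
    print(power, round(detectable_effect(2550, power), 3))
# Prints roughly 0.09 at 90% power and 0.08 at 80% power, consistent with
# the standard deviation figures quoted above for contiguous-pair comparisons.
```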

Attrition

Attrition has been a major source of difficulty in previous work developing e-SBI in Linköping and other Swedish universities [30]. It is also a significant problem in the conduct of online trials in other populations [14]. The initial take-up of routine e-SBI provision has varied between 10% and 60% in different universities, owing to varying patterns of email use and rates of hazardous drinking, as well as the salience of alcohol and interest in intervention. In the selected universities, take-up rates have been consistently around 40%. In previous follow-up studies, fewer than half of those who participated at baseline did so at first follow-up, and approximately one-quarter participated in second follow-ups [29, 30].

A different approach was taken in the pilot study to address the issue of attrition within the same three-group design structure as outlined here (our forthcoming report will contain further details). Rather than follow-up emails being sent by the student healthcare service as had been done previously, participants were blinded to the conduct of the trial. This involved an explicit attempt to separate the experience of follow-up from the earlier e-SBI delivery. An email was sent by the second author (PB) requesting participation in a survey of student alcohol consumption, partially following the approach of Kypri and colleagues, who invited participation in a series of surveys at the outset and obtained high follow-up rates [27]. Incentives in the form of cinema tickets were also offered in the pilot study [53].

This was only partially successful. Participation rates in ‘follow-up’ were slightly higher than at baseline (approximately 41% compared to 37%) in the two groups randomized to earlier contact; this comprised both the involvement of some who had not previously participated and attrition among some of those who had. While this measure restricted the reduction in follow-up seen previously, it introduced a new problem: differential participation by group 3 (approximately 52%) compromised the equivalence of the three groups. By virtue of randomization, there was a strong basis for inferring that this could only have been caused by the earlier involvement with the study; the earlier invitation to participate in the alcohol e-SBI was not sufficiently different from the later alcohol survey. We reasoned that we had simply not gone far enough with our earlier attempt. To rectify this, we decided for the main trial to abbreviate the alcohol outcome measures and conceal them within a lifestyle questionnaire at follow-up. Note that this extends blinding to the specific focus on alcohol at the point of outcome data collection. We will also add a third reminder at follow-up with the option of completing three items in the body of the email (as well as via the hyperlink).

Outcomes evaluation

Intention-to-treat analyses are primarily used in clinical trials to address problems with lack of compliance with allocated interventions [54, 55]. In the present context the intervention comprises an automated email providing a means of accessing a website, sent to an unselected population in which the prevalence of hazardous and harmful drinking is elevated. Lack of take-up of the intervention is here arguably more fundamentally a matter of reach than of effectiveness, as there are no noteworthy costs associated with lack of take-up, though this situation does complicate the evaluation of effectiveness, and it is recommended that an attempt is made to account for all randomized participants [55]. The intervention could be defined more narrowly as delivered to those who access the website, with the email merely being the means of recruitment. Even if this definition is applied, the intervention will still be accessed by students whose drinking is not risky and who would thus not be deemed to merit individual targeting for intervention. More narrowly still, outcome evaluation could be restricted to those whose drinking is found to be risky. The overarching problem is that a greater number of people are randomized than would be targeted for intervention. Consideration of outcomes evaluation needs to take account of these issues.

By necessity, inferences involving group 3 can only be drawn in entirely unselected populations, as there are no baseline data with which one might construct subgroups, randomized or otherwise, for comparative purposes. Hence the form of hypotheses 1 and 2, which should be noted as highly conservative approaches to outcome evaluation, as they will unavoidably include data that bias the findings towards the null (both from non-participants at baseline and from those who are not risky drinkers). To address this problem, analyses will also be undertaken which exclude those determined from follow-up data to be unlikely to have been hazardous drinkers at study entry. Specifically, non-drinkers and very infrequent drinkers (reporting never drinking, or drinking monthly or less frequently, on AUDIT-C item 1; see below) will be excluded. This is also somewhat conservative to the extent that intervention effects may lead participants to drink rarely or not at all, but it does help with the problem previously described.

Hypotheses 3 and 4 concern only groups 1 and 2 and provide alternative ways of evaluating the specific effects of feedback. Hypothesis 3 is preferred as it addresses this question among the most relevant subpopulation, risky drinkers exposed to intervention who also later participated in follow-up, and this will be the primary analysis for this question. It is acknowledged that the departure from intention-to-treat (ITT) implies a risk of bias. It is also judged that methods such as complier-average causal effect analysis are inappropriate given the unusual study context. This reasoning for preferring a per-protocol analysis over an intention-to-treat analysis has previously been applied in online alcohol trials [9].

Outcome measures

Three items within the 15-item survey instrument are dedicated to the assessment of drinking, alongside questions on smoking, diet, physical activity and sociodemographic characteristics. These items are the three questions of the AUDIT-C [56]. The two primary outcomes are AUDIT-C scores and the proportion of risky drinkers (according to the Swedish definition [57]). The three secondary outcomes are the component items of the AUDIT-C: number of heavy episodic drinking episodes per month, frequency of drinking and typical quantity consumed. These items have been found to have reliable psychometric properties when administered online in similar student populations [56, 58].
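
The sketch below shows how these outcomes, and the subgroup exclusion rule described under Outcomes evaluation, could be derived from the three AUDIT-C items. The 0 to 4 scoring per item (total 0 to 12) is the standard AUDIT-C convention; the risky-drinking cutoff shown is only a placeholder assumption, as the Swedish definition [57] is not reproduced in this protocol.

```python
from dataclasses import dataclass

@dataclass
class AuditC:
    frequency: int         # item 1: how often do you drink? (scored 0-4)
    typical_quantity: int  # item 2: drinks on a typical drinking day (scored 0-4)
    heavy_episodes: int    # item 3: frequency of heavy episodic drinking (scored 0-4)

    def score(self) -> int:
        """AUDIT-C total score: the sum of the three items, range 0-12."""
        return self.frequency + self.typical_quantity + self.heavy_episodes

def is_risky(audit_c: AuditC, cutoff: int = 5) -> bool:
    """Placeholder classification of risky drinking. The study applies the
    Swedish definition [57]; the cutoff here is an illustrative assumption,
    not the study's actual criterion."""
    return audit_c.score() >= cutoff

def excluded_from_subgroup(audit_c: AuditC) -> bool:
    """Subgroup exclusion rule: non-drinkers and very infrequent drinkers,
    i.e. AUDIT-C item 1 answered 'never' (0) or 'monthly or less' (1)."""
    return audit_c.frequency <= 1
```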

Data analyses

Intention-to-treat analyses in relation to hypotheses 1, 2 and 4 will be undertaken among all those providing follow-up data, without any imputation for missing data. The only additional analyses here will consider how informative patterns of response to reminders are in relation to those not participating at follow-up; no attempt will be made to account for the randomized populations as a whole [55]. The per-protocol analysis for hypothesis 3 will be undertaken among all those in groups 1 and 2 who are risky drinkers at baseline and who participate in follow-up. We will also include baseline measures of risk as covariates and examine patterns of attrition in these two groups and their possible impact on findings. We will undertake exploratory analyses of possible effect modification by university, term, faculty, age and gender. Effect sizes will be calculated as standardized mean differences or as ratio measures of effect. Student’s t tests and χ2 tests will be used, supplemented by regression-based analyses as needed. All analyses will conform to a prespecified plan, with data transformations appropriate for skewed data to be determined. There will be no interim analyses or stopping rules.
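
As a rough illustration of these planned comparisons, the sketch below computes a Student's t test with a standardized mean difference (Cohen's d, pooled standard deviation) for AUDIT-C scores, and a χ2 test with a risk ratio for the proportion of risky drinkers; it is a simplified stand-in for the prespecified analysis plan, not the plan itself.

```python
import numpy as np
from scipy import stats

def compare_scores(scores_a, scores_b):
    """Two-arm comparison of AUDIT-C scores: Student's t test plus a
    standardized mean difference using a pooled standard deviation."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    return {"t": t, "p": p, "cohens_d": d}

def compare_proportions(risky_a, n_a, risky_b, n_b):
    """Two-arm comparison of the proportion of risky drinkers: chi-square
    test on the 2x2 table plus a risk ratio."""
    table = np.array([[risky_a, n_a - risky_a], [risky_b, n_b - risky_b]])
    chi2, p, _, _ = stats.chi2_contingency(table)
    risk_ratio = (risky_a / n_a) / (risky_b / n_b)
    return {"chi2": chi2, "p": p, "risk_ratio": risk_ratio}
```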

Focus group substudy

Guidance on the use of deception in research indicates that debriefing of participants should occur as soon as practically possible (for example, [59]). This is usually simple and practical with the relatively small numbers in psychology laboratories and other settings in which deception is typically used. While we agree that it is important that debriefing should be done, it is not clear how it should be done in the context of an online study such as this, with large numbers of participants. For example, if we did this by email and received a small number of extremely unhappy responses, it is not obvious how these should be interpreted or handled. For these reasons we plan to defer debriefing all participants until after we have explored in depth participants’ views on the acceptability of the deception used and on appropriate debriefing methods. We will convene focus groups for this purpose. At the end of follow-up, we will ask all participants for a phone number for recruitment to a focus group interview on participation in research. Assuming numbers permit, we will randomly select participants to run two focus group sessions, one at each university, aiming for five participants from each of groups 1 to 3 in both focus groups.

Discussion

Universities have an obligation to create an environment in which student drinking does as little harm as possible, both for students’ short-term well-being and educational attainment and to mitigate longer-term societal consequences. There are approximately 100,000 new students entering universities in Sweden each year. The Swedish National Institute of Public Health has decided to implement e-SBI in all universities in Sweden, but the effectiveness of such interventions offered in this way is still unclear, as is the extent to which any effectiveness is specifically the product of the feedback rather than the assessment components. These Swedish developments have been informed by, and themselves inform, the rapidly evolving research and practice contexts in other countries.

The main methodological innovations presented here concern the randomization and complete non-contact of the control group at baseline, and the nature of the blinding practiced at follow-up in order to constrain differential attrition. Randomization of participants in excess of those usually targeted for intervention presents inferential difficulties. The present study takes an unusual opportunity to dismantle existing routine practice in intervention delivery and also to omit the usual trial entry procedures for individually randomized trials, both for substantive evaluation of intervention effectiveness and for methodological purposes. Consideration will later need to be given to the value of this unusual research design, as well as to possible alternatives such as cluster randomization.

This study addresses a number of significant methodological challenges. We aim to email approximately 15,300 individuals; to our knowledge, recruitment and retention of participants equivalently between groups on this scale has not previously been attempted in a brief alcohol intervention trial. Outcomes evaluation is complex, and successfully and reliably capturing any small effects that do exist is highly relevant to large-scale public health endeavors to influence health-compromising behaviors.

The methodological basis of decisions to implement blinding and deception has been outlined above, with the former characterizing baseline contacts and the latter involved in follow-up. Similar decisions have been made by Kypri and colleagues for similar reasons, the small effects under study being judged likely to be adversely impacted by research participation artifacts which will introduce bias [8]. This methodological imperative runs counter to the ethical imperative of informed consent. We believe these issues are sufficiently important to warrant extended consideration and are writing a paper that outlines our thinking about the ethical issues involved. We have also incorporated a focus group substudy here for in-depth debriefing on the nature of the deception used, exploration of participant responses, and to aid decision making about the methods and content of debriefing. Methodological progress in this area of work needs to proceed in tandem with the development of ethical considerations.

Trial status

At the time of submission participants had been randomized and groups 1 and 2 had received the initial emails. No participants had yet received the invitations to participate in the seemingly unrelated cross-sectional lifestyle survey.

References

  1. Rehm J, Mathers C, Popova S, Thavorncharoensap M, Teerawattananon Y, Patra J: Global burden of disease and injury and economic cost attributable to alcohol use and alcohol-use disorders. Lancet. 2009, 373: 2223-2233. 10.1016/S0140-6736(09)60746-7.

  2. Room R, Babor T, Rehm J: Alcohol and public health. Lancet. 2005, 365: 519-530.

  3. Nilsen P, McCambridge J, Karlsson N, Bendtsen P: Brief interventions in routine health care: a population-based study of conversations about alcohol in Sweden. Addiction. 2011, 106: 1748-1756. 10.1111/j.1360-0443.2011.03476.x.

  4. Bien TH, Miller WR, Tonigan SJ: Brief interventions for alcohol problems: a review. Addiction. 1993, 88: 315-336. 10.1111/j.1360-0443.1993.tb00820.x.

  5. Bertholet N, Daeppen JB, Wietlisbach V, Fleming M, Burnand B: Reduction of alcohol consumption by brief alcohol intervention in primary care: systematic review and meta-analysis. Arch Intern Med. 2005, 165: 986-995. 10.1001/archinte.165.9.986.

  6. Kaner EF, Dickinson HO, Beyer F, Pienaar E, Schlesinger C, Campbell F, Saunders JB, Burnand B, Heather N: The effectiveness of brief alcohol interventions in primary care settings: a systematic review. Drug Alcohol Rev. 2009, 28: 301-323. 10.1111/j.1465-3362.2009.00071.x.

  7. Nilsen P: Brief alcohol intervention-where to from here? Challenges remain for research and practice. Addiction. 2010, 105: 954-959. 10.1111/j.1360-0443.2009.02779.x.

  8. Kypri K, McCambridge J, Cunningham JA, Vater T, Bowe S, De Graaf B, Saunders JB, Dean J: Web-based alcohol screening and brief intervention for Maori and non-Maori: the New Zealand e-SBINZ trials. BMC Publ Health. 2010, 10: 781-10.1186/1471-2458-10-781.

  9. Murray E, McCambridge J, Khadjesari Z, White IR, Thompson SG, Godfrey C, Linke S, Wallace P: The DYD-RCT protocol: an on-line randomised controlled trial of an interactive computer-based intervention compared with a standard information website to reduce alcohol consumption among hazardous drinkers. BMC Publ Health. 2007, 7: 306-10.1186/1471-2458-7-306.

  10. Rooke S, Thorsteinsson E, Karpin A, Copeland J, Allsop D: Computer-delivered interventions for alcohol and tobacco use: a meta-analysis. Addiction. 2010, 105: 1381-1390. 10.1111/j.1360-0443.2010.02975.x.

  11. Khadjesari Z, Murray E, Hewitt C, Hartley S, Godfrey C: Can stand-alone computer-based interventions reduce alcohol consumption? A systematic review. Addiction. 2011, 106: 267-282. 10.1111/j.1360-0443.2010.03214.x.

  12. Linke S, McCambridge J, Khadjesari Z, Wallace P, Murray E: Development of a psychologically enhanced interactive online intervention for hazardous drinking. Alcohol Alcohol. 2008, 43: 669-674.

  13. Wallace P, Murray E, McCambridge J, Khadjesari Z, White IR, Thompson SG, Kalaitzaki E, Godfrey C, Linke S: On-line randomized controlled trial of an internet based psychologically enhanced intervention for people with hazardous alcohol consumption. PLoS One. 2011, 6: e14740-10.1371/journal.pone.0014740.

  14. Murray E, Khadjesari Z, White IR, Kalaitzaki E, Godfrey C, McCambridge J, Thompson SG, Wallace P: Methodological challenges in online trials. J Med Internet Res. 2009, 11: e9-10.2196/jmir.1052.

  15. Cunningham JA, Khadjesari Z, Bewick BM, Riper H: Internet-based interventions for problem drinkers: from efficacy trials to implementation. Drug Alcohol Rev. 2010, 29: 617-622. 10.1111/j.1465-3362.2010.00201.x.

  16. Cunningham JA, Wild TC, Cordingley J, Van Mierlo T, Humphreys K: Twelve-month follow-up results from a randomized controlled trial of a brief personalized feedback intervention for problem drinkers. Alcohol Alcohol. 2010, 45: 258-262.

  17. Dantzer C, Wardle J, Fuller R, Pampalone SZ, Steptoe A: International study of heavy drinking: attitudes and sociodemographic factors in university students. J Am Coll Health. 2006, 55: 83-89. 10.3200/JACH.55.2.83-90.

  18. Lewis MA, Neighbors C, Oster-Aaland L, Kirkeby BS, Larimer ME: Indicated prevention for incoming freshmen: personalized normative feedback and high-risk drinking. Addict Behav. 2007, 32: 2495-2508. 10.1016/j.addbeh.2007.06.019.

  19. Saitz R, Palfai TP, Freedner N, Winter MR, Macdonald A, Lu J, Ozonoff A, Rosenbloom DL, Dejong W: Screening and brief intervention online for college students: the ihealth study. Alcohol Alcohol. 2007, 42: 28-36.

  20. Walters ST, Vader AM, Harris TR: A controlled trial of web-based feedback for heavy drinking college students. Prev Sci. 2007, 8: 83-88. 10.1007/s11121-006-0059-9.

  21. Moreira MT, Smith LA, Foxcroft D: Social norms interventions to reduce alcohol misuse in university or college students. Cochrane Database Syst Rev. 2009, 3: CD006748-

  22. Carey KB, Scott-Sheldon LA, Elliott JC, Bolles JR, Carey MP: Computer-delivered interventions to reduce college student drinking: a meta-analysis. Addiction. 2009, 104: 1807-1819. 10.1111/j.1360-0443.2009.02691.x.

  23. Cunningham JA, Kypri K, McCambridge J: The use of emerging technologies in alcohol treatment. Alcohol Res Health. 2011, 33: 320-326.

  24. Kypri K, Saunders JB, Williams SM, McGee RO, Langley JD, Cashell-Smith ML, Gallagher SJ: Web-based screening and brief intervention for hazardous drinking: a double-blind randomized controlled trial. Addiction. 2004, 99: 1410-1417. 10.1111/j.1360-0443.2004.00847.x.

  25. Kypri K, Langley JD, Saunders JB, Cashell-Smith ML, Herbison P: Randomized controlled trial of web-based alcohol screening and brief intervention in primary care. Arch Intern Med. 2008, 168: 530-536. 10.1001/archinternmed.2007.109.

  26. Kypri K, McAnally HM: Randomized controlled trial of a web-based primary care intervention for multiple health risk behaviors. Prev Med. 2005, 41: 761-766. 10.1016/j.ypmed.2005.07.010.

  27. Kypri K, Hallett J, Howat P, McManus A, Maycock B, Bowe S, Horton NJ: Randomized controlled trial of proactive web-based alcohol screening and brief intervention for university students. Arch Intern Med. 2009, 169: 1508-1514. 10.1001/archinternmed.2009.249.

  28. Andersson A, Wirehn AB, Olvander C, Ekman DS, Bendtsen P: Alcohol use among university students in Sweden measured by an electronic screening instrument. BMC Publ Health. 2009, 9: 229-10.1186/1471-2458-9-229.

  29. Bendtsen P, Johansson K, Akerlind I: Feasibility of an email-based electronic screening and brief intervention (e-SBI) to college students in Sweden. Addict Behav. 2006, 31: 777-787. 10.1016/j.addbeh.2005.06.002.

  30. Ekman DS, Andersson A, Nilsen P, Stahlbrandt H, Johansson AL, Bendtsen P: Electronic screening and brief intervention for risky drinking in Swedish university students-a randomized controlled trial. Addict Behav. 2011, 36: 654-659. 10.1016/j.addbeh.2011.01.015.

  31. Bernstein JA, Bernstein E, Heeren TC: Mechanisms of change in control group drinking in clinical trials of brief alcohol intervention: implications for bias toward the null. Drug Alcohol Rev. 2010, 29: 498-507. 10.1111/j.1465-3362.2010.00174.x.

  32. Clifford PR, Maisto SA: Subject reactivity effects and alcohol treatment outcome research. J Stud Alcohol. 2000, 61: 787-793.

  33. McCambridge J: Research assessments: instruments of bias and brief interventions of the future?. Addiction. 2009, 104: 1311-1312.

  34. Jenkins RJ, McAlaney J, McCambridge J: Change over time in alcohol consumption in control groups in brief intervention studies: systematic review and meta-regression study. Drug Alcohol Depend. 2009, 100: 107-114. 10.1016/j.drugalcdep.2008.09.016.

  35. Jenkins RJ, McAlaney J, McCambridge J: Change over time in alcohol consumption in control groups in brief intervention studies: systematic review and meta-regression study (vol 100, pg 107, 2009). Drug Alcohol Depen. 2010, 108: 151-10.1016/j.drugalcdep.2009.11.007.

  36. Kypri K, Langley JD, Saunders JB, Cashell-Smith ML: Assessment may conceal therapeutic benefit: findings from a randomized controlled trial for hazardous drinking. Addiction. 2007, 102: 62-70. 10.1111/j.1360-0443.2006.01632.x.

  37. McCambridge J, Day M: Randomized controlled trial of the effects of completing the Alcohol Use Disorders Identification Test questionnaire on self-reported hazardous drinking. Addiction. 2008, 103: 241-248. 10.1111/j.1360-0443.2007.02080.x.

  38. Saunders JB, Aasland OG, Babor TF, de la Fuente JR, Grant M: Development of the alcohol use disorders identification test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption - II. Addiction. 1993, 88: 791-804. 10.1111/j.1360-0443.1993.tb02093.x.

  39. McCambridge J, Kypri K, Elbourne DR: A surgical safety checklist. N Engl J Med. 2009, 360: 2373-2374.

  40. Babor TF, Steinberg K, Anton R, Del Boca F: Talk is cheap: measuring drinking outcomes in clinical trials. J Stud Alcohol. 2000, 61: 55-63.

  41. Noknoy S, Rangsin R, Saengcharnchai P, Tantibhaedhyangkul U, McCambridge J: RCT of effectiveness of motivational enhancement therapy delivered by nurses for hazardous drinkers in primary care units in Thailand. Alcohol Alcohol. 2010, 45: 263-270.

  42. Godin G, Sheeran P, Conner M, Germain M: Asking questions changes behavior: mere measurement effects on frequency of blood donation. Health Psychol. 2008, 27: 179-184.

  43. Sandberg T, Conner M: A mere measurement effect for anticipated regret: impacts on cervical screening attendance. Br J Soc Psychol. 2009, 48: 221-236. 10.1348/014466608X347001.

  44. Feil PH, Grauer JS, Gadbury-Amyot CC, Kula K, McCunniff DDS: Intentional use of the Hawthorne effect to improve oral hygiene compliance in orthodontic patients. J Dent Educ. 2002, 66: 1129-1135.

  45. McCambridge J, Kypri K: Can simply answering research questions change behaviour? Systematic review and meta analyses of brief alcohol intervention trials. PLoS One. 2011, 6: e23748-10.1371/journal.pone.0023748.

  46. McCambridge J, O'Donnell O, Godfrey C, Khadjesari Z, Linke S, Murray E, Wallace P: How big is the elephant in the room? Estimated and actual IT costs in an online behaviour change trial. BMC Res Notes. 2010, 3: 172-10.1186/1756-0500-3-172.

  47. McCambridge J, Butor-Bhavsar K, Witton J, Elbourne D: Can research assessments themselves cause bias in behaviour change trials? A systematic review of evidence from Solomon 4-group studies. PLoS One. 2011, 6: e25223-10.1371/journal.pone.0025223.

  48. Kypri K, McCambridge J, Wilson A, Attia J, Sheeran P, Bowe S, Vater T: Effects of study design and allocation on participant behaviour - ESDA: study protocol for a randomized controlled trial. Trials. 2011, 12: 42-10.1186/1745-6215-12-42.

  49. McCambridge J, Kalaitzaki E, White IR, Khadjesari Z, Murray E, Linke S, Thompson SG, Godfrey C, Wallace P: Can differences in the length or relevance of questionnaires impact upon attrition in online trials? A randomised controlled trial. J Med Internet Res. in press

  50. Tomkins S, Allen E, Savenko O, McCambridge J, Saburova L, Kiryanov N, Oralov A, Gil A, Leon DA, McKee M, Elbourne D: The HIM (Health for Izhevsk men) trial protocol. BMC Health Serv Res. 2008, 8: 69-10.1186/1472-6963-8-69.

  51. Zelen M: A new design for randomized clinical trials. N Engl J Med. 1979, 300: 1242-1245. 10.1056/NEJM197905313002203.

  52. Adamson J, Cockayne S, Puffer S, Torgerson DJ: Review of randomised trials using the post-randomised consent (Zelen’s) design. Contemp Clin Trials. 2006, 27: 305-319. 10.1016/j.cct.2005.11.003.

  53. Khadjesari Z, Murray E, Kalaitzaki E, White IR, McCambridge J, Thompson SG, Wallace P, Godfrey C: Impact and costs of incentives to reduce attrition in online trials: two randomized controlled trials. J Med Internet Res. 2011, 13: e26-10.2196/jmir.1523.

  54. Pocock SJ: Clinical Trials: A Practical Approach. 1983, Wiley, Chicester, UK

  55. White IR, Horton NJ, Carpenter J, Pocock SJ: Strategy for intention to treat analysis in randomised trials with missing outcome data. BMJ. 2011, 342: d40-10.1136/bmj.d40.

  56. McCambridge J, Thomas BA: Short forms of the AUDIT in a Web-based study of young drinkers. Drug Alcohol Rev. 2009, 28: 18-24. 10.1111/j.1465-3362.2008.00010.x.

  57. Andréasson S, Allebeck P: Alkohol och Hälsa: en Kunskapsöversikt om Alkoholens Positiva och Negativa Effekter på vår Hälsa. Report 2005:11. 2005, The Swedish National Institute of Public Health, Stockholm, Sweden

  58. Thomas BA, McCambridge J: Comparative psychometric study of a range of hazardous drinking measures administered online in a youth population. Drug Alcohol Depend. 2008, 96: 121-127. 10.1016/j.drugalcdep.2008.02.010.

  59. British Psychological Society: Ethical Principles for Conducting Research with Human Participants. 2004, British Psychological Society, London, UK

Acknowledgements

The study was funded by the Swedish Council for Working Life and Social Research (FAS, in Swedish; grant number 2010-0024) and by a Wellcome Trust Research Career Development fellowship in Basic Biomedical Science (WT086516MA) to JM.

Author information

Corresponding author

Correspondence to Jim McCambridge.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JM and PB had the original idea for the study, obtained funding and led on its design, supported by MB and PN. PB has overall responsibility for study implementation. MB does all computer programming associated with both intervention delivery and study data collection. JM wrote the first draft of the study protocol, to which all authors contributed. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

McCambridge, J., Bendtsen, P., Bendtsen, M. et al. Alcohol email assessment and feedback study dismantling effectiveness for university students (AMADEUS-1): study protocol for a randomized controlled trial. Trials 13, 49 (2012). https://doi.org/10.1186/1745-6215-13-49
