- Review
- Open access
A bespoke rapid evidence review process engaging stakeholders for supporting evolving and time-sensitive policy and clinical decision-making: reflection and lessons learned from the Wales COVID-19 Evidence Centre 2021–2023
Health Research Policy and Systems volume 23, Article number: 36 (2025)
Abstract
Background
The COVID-19 pandemic presented policymakers with time-sensitive decision problems and a rapidly increasing volume of research, not all of which was robust, or relevant to local contexts. A bespoke evidence review process supporting stakeholder engagement was developed as part of the Wales COVID-19 Evidence Centre (WCEC), which could flexibly react to the needs of decision-makers, to address urgent requests within days or months as required.
Aims
To describe and appraise the WCEC review process and methods and identify key learning points.
Methods
Three types of rapid review products were used, which could accommodate the breadth of decision problems and topics covered. Stakeholder (including public) engagement was integrated from the outset and supported throughout. The methods used were tailored depending on the needs of the decision-maker, type of research question, timeframe, and volume and type of evidence. We appraised the overall process and compared the methods used with the most recent and relevant best practice guidance.
Results
The remote collaboration between research teams, establishing a clear pathway to impact upfront, and the strong stakeholder involvement embedded in the review process were considered particular strengths. Several key learning points were identified, which focused on: enhancing stakeholders' abilities to identify focused policy-relevant research questions; collecting and storing review protocols in a central location; tightening quality assurance processes for study selection, data extraction and quality assessment; adequate reporting of methodological shortcuts and their understanding by stakeholders; piloting an algorithm for assigning study design descriptors and a single quality assessment tool covering multiple study designs; and incorporating, where appropriate, an assessment of confidence in the overall body of evidence using GRADE or a similar framework.
Conclusions
The review process enabled a high volume of questions that were directly relevant to policy and clinical decision-making to be addressed in a timely manner using a transparent and tailored approach.
Background
Health- and care-related policy and practice decisions should be based on relevant and trustworthy research evidence, but this relies on providing policymakers and their advisors with timely and accessible evidence [1]. Effective communication and collaboration between researchers, topic experts and decision-makers are key elements in achieving impact from research. The coronavirus disease 2019 (COVID-19) pandemic demanded new ways of working between academics, policymakers and others making health and social care practice decisions to address time-sensitive decision problems within an ever-changing environment and evidence base. Identifying and synthesising the rapidly increasing volume of available research evidence, not all of which was robust or relevant to specific local contexts, was an important challenge.
Systematic reviews represent the gold standard for informing policy and practice, as they provide a comprehensive, rigorous and transparent synthesis of the evidence. They use standardised and empirically tested methods to minimise bias and error. However, they can take years to complete. One alternative approach is a rapid review – an abbreviated systematic review, where processes are streamlined or omitted, to produce evidence for policy and decision-makers in a timely (and resource-efficient) manner [2]. However, even rapid reviews can take 6 months or more to complete [3, 4], whilst policy and practice decisions were needed within days or weeks during the pandemic. Further rapid evidence review products, which either modify or use alternative methods, have been developed. Hartling et al. [5] developed a taxonomy of these products, based on the extent of synthesis conducted (Box 1), which includes four categories: evidence inventories, rapid response briefs, rapid reviews and automated products.
Rapid evidence review products have demonstrated great utility for decision-makers, especially during the COVID-19 pandemic [6]. However, there are several key considerations in their development. Firstly, they are demand-driven and produced to support a specific decision by a particular end user [4, 5, 7]. This, and the timeframe of the decision problem, drives the choice of methods used [5]. Secondly, they require a continuous and close relationship with the end user, involving iterative feedback throughout the work [5], which is essential when restricting the scope of the review, to ensure the findings are directly relevant to decision-making [5, 7]. Thirdly, having a team that includes research staff experienced in systematic reviewing is critical for developing an expedited product [5]. Lastly, the COVID-19 pandemic, with its characteristic need for evidence to address rapidly evolving challenges, highlighted the need to avoid duplication across review groups.
The Wales COVID-19 Evidence Centre (WCEC) was established by the Welsh Government in March 2021 to enhance the use of research and evidence in managing the pandemic. It aimed to provide health and social care policy and practice decision-makers with timely access to the latest relevant COVID-19 research evidence.
The purpose of this paper is to: (1) describe the bespoke evidence review process developed by the WCEC that takes account of the important considerations above, with the aim of supporting the agile and timely production of robust evidence reviews, whilst maintaining strong stakeholder engagement to ensure direct relevance to decision-making, and (2) appraise the overall review process and evidence review methods, their strengths and weaknesses, and identify further improvements that could be made.
Methods
The Wales COVID-19 Evidence Centre (WCEC)
The WCEC brought together a unique collaboration of established research groups within Wales with expertise in conducting rapid reviews, systematic reviews, health technology assessments, economic evaluations and the analysis of linked population-level routinely collected data. The WCEC operated through a core management team working closely (using videoconferencing) with the collaborating partner research teams (Box 2).
The WCEC undertook evidence reviews to address knowledge gaps and the specific needs of government, healthcare, public health and social care stakeholders in Wales. The evidence produced was designed to be of immediate use to decision-makers and to have a direct impact on decision-making, patient and client care, reducing inequalities and identifying future research needs. The work of the WCEC was delivered through four main processes: question prioritisation process, evidence review process, knowledge mobilisation process, and stakeholder engagement (including public involvement). This paper focuses on the evidence review process, and the stakeholder engagement that supports this. The processes for prioritising and setting research questions, and knowledge mobilisation, are described in more detail elsewhere [8, 9].
Development of the WCEC evidence review process
The WCEC sought to develop an evidence review process that could deliver robust reviews within 4–8 weeks, but with flexibility to provide decision-makers with a credible summary of the available evidence within days or weeks when needed. We considered the range of rapid evidence review products identified by Hartling et al. [5] (Box 1), but we were also mindful to avoid having too many types of outputs, as this could be confusing to stakeholders [11]. We developed a phased reviewing approach [12, 13] which utilises three types of rapid review products: a rapid response product (which is called a rapid evidence summary), an evidence inventory product (called a rapid evidence map), and a rapid review. These are described in more detail in Table 1.
Best practice framework
Our overall process and methods development were informed by guidance for conducting and reporting rapid evidence review products [7, 11,12,13,14,15,16,17,18]. The methods selected for our rapid reviews were adapted according to the topic area, type of review question, the extent of the evidence base, urgency of the questions, and the needs of the decision-makers. To support the collaborating partner review teams, a best practice framework (Table 2) was developed with recommendations from key sources for methodological shortcuts that could be applied at each stage of the rapid review.
Three key guidance documents were prioritised for developing the framework summarising the recommendations for best practice of conducting a rapid review [7, 13, 18]. We also referred to two existing guidance documents, developed and already used by two collaborating partners for conducting rapid reviews [11] or rapid health technology assessments [19].
The review process
The phased review process is outlined in Fig. 1 and described in more detail in the next section. Each review was conducted by a dedicated collaborating partner review team supported by the core management team. A continuous and close relationship with the decision-makers and relevant stakeholders (including public partnership group representation) was facilitated by three or more online stakeholder meetings.
Question prioritisation process
The review question(s) were submitted by stakeholders (e.g. policymakers/advisors, health and social care leads, public, academic/research groups) and prioritised during a formal consultation process, which is reported in detail elsewhere [9]. Urgent questions could also be submitted directly by policymakers or TAC/TAG (Technical Advisory Cell/Technical Advisory Group) members and fast-tracked onto the WCEC work programme. Key stakeholders, including those submitting the question and members of the public partnership group (PPG), provided expert (topic and methodological) input throughout the evidence review process. The overall review process and the commitment required (including attendance at online meetings) were explained to the stakeholders submitting the question at the outset, and it was made clear that we were unable to take on questions where this stakeholder commitment was not feasible.
Review process phase I: rapid evidence summary (RES)
In phase I, the review question was allocated to an appropriate WCEC collaborating partner (review) team, and an introductory stakeholder meeting organised. This early phase comprised preliminary work to inform the rapid review work. However, it was adaptable to produce a final rapid response product (Table 1) within weeks if no rapid review was planned.
Introductory stakeholder meeting
The stakeholder meetings included members of the core management team and WCEC public partners, the review team and relevant stakeholders. The introductory meeting was used to confirm the decision problem or review question including key outcomes, clarify how the evidence would be used and confirm required timelines. It was also an opportunity for stakeholders to notify the review team of potentially seminal research or useful grey literature sources. Where an ill-defined decision problem/question had been submitted in the prioritisation process, this meeting also served to develop a structured review question.
Preliminary search of the literature
The review team then conducted a scoping search and a scan of key COVID-19 resources. This was supported by a tailor-made resources list, including both COVID-19-specific and generic registries and databases of secondary research (Supplementary Information, Additional file 1). This preliminary review of the literature enabled the reviewers to familiarise themselves with the topic area, check that the research question had not already been addressed by other groups or evidence centres, identify the extent and type of available evidence, and inform the methods and design of the rapid review in phase II (and develop the protocol). The searches focused on identifying robust secondary or tertiary research. Primary studies were considered if no relevant reviews were identified. The extent of the search was adapted according to whether this stage represented the final output or not.
Output from phase I
The output from this first phase was presented as an annotated bibliography with key findings, using a template to support the efficient and transparent reporting of what was done and found. When there was a high-priority urgent decision to address, or insufficient evidence for a rapid review, the rapid evidence summary was published as the final output for the stakeholder. For example, our review of ozone machines and other disinfectants in schools (RES_23) [20].
If an up-to-date, robust and directly relevant evidence review or clinical guideline was identified during the preliminary searches, then a critical appraisal and summary of the review was conducted. For example, our review of vaccination in pregnant women (RES_24) [20]. If multiple systematic reviews were identified, then a review of existing reviews was considered for the subsequent phase rapid review. For example, in our review of innovations to support patients on elective surgical waiting lists (RR_30) [21] and our review of interventions to recruit and retain clinical staff (RR_28) [22].
Intermediate stakeholder meeting
The findings of the initial phase (if progressing to a rapid review) were presented at a second, intermediate, stakeholder meeting. Collaborative discussions refined the review question, drafted eligibility criteria and decided on the overall reviewing approach to be used. Stakeholders identified important contextual issues, and known equality or economic impacts, for consideration in the proposed review.
Review process phase II: rapid review
Phase II comprised a rapid review (RR) of the evidence, usually completed within 1–2 months. This could be supplemented or substituted by a rapid evidence map (REM). The rapid review delivered a synthesis or meta-synthesis of the evidence, whilst the rapid evidence map provided a description of the available literature (Table 1). Both were based on a comprehensive search strategy and pre-defined protocol.
Rapid evidence map
For broad or complex review questions a rapid evidence map could be conducted, providing an inventory of the nature, characteristics and volume of available evidence for the particular policy domain or research question. The rapid evidence map was based on abbreviated systematic mapping [23] or scoping review [24] methodology, depending on the type of review question. For example, our review of recruitment and retention of NHS workers [20]. Stakeholders could also request a rapid evidence map as the intended final rapid product. For example, in our review of inequity experienced by the LGBTQ+ community [20].
Rapid review
Our rapid reviews used an adapted systematic review approach, with some review components abbreviated or omitted to generate the evidence to inform stakeholders within a short time frame, whilst maintaining attention to bias. We followed methodological recommendations and minimum standards for conducting rapid reviews [7, 13, 18]. The approach and decisions made on tailoring the rapid reviews were the responsibility of the individual review teams, according to the type of question, research volume and time frame, in discussion with core management team members and expert stakeholders.
Output from phase II
The templates for our final rapid review and rapid evidence map reports are based on recommendations for reporting evidence reviews for decision-makers [11, 16]. These incorporate a two-page "top line summary", with the results and recommendations for practice presented up front and the details of the methods used at the end of the report. The report also included a section of "additional information", where the input from the stakeholders was acknowledged and any conflicts of interest that the authors had were noted.
Our review reports were made available via a library on the WCEC website [20]. From May 2022, reports were published on a pre-print server and allocated a DOI. Thus, reports could be identified readily in database searches, and other review teams could identify potential duplicate review questions early on. A short lay summary and the links to the pre-print server were included in the WCEC library. The ongoing WCEC work programme, which included questions in progress, scheduled and completed, was also published on the website.
Knowledge mobilisation process – planning pathway to impact
Final stakeholder meeting
A final stakeholder meeting was used to present the findings of the review to the stakeholders, address any queries, identify the policy and practice implications, and support the development of a knowledge mobilisation plan.
Appraisal of the overall review process and rapid review methods
We appraised our overall approach and rapid review methods to reflect on our experience of implementing the WCEC review process and to identify key learning points.
We compared our methods and practice with the recommendations of Garritty et al. [7], Tricco et al. [13], Plüddemann et al. [18], Mann et al. [11], and Health Technology Wales [19], as the principal resources for our own best practice framework (Table 2). We also compared our rapid review methods with the array of methodological shortcuts recommended in published guidance developed or used across rapid review centres and organisations, as reviewed by Speckemeier et al. [25] (Table 3). That scoping review included guidance for any type of rapid evidence product with a completion time ranging from a day to over 6 months. The output included a table summarising the range of recommendations, or methodological shortcuts, provided in the guidance, and the frequency with which they were reported. However, the authors did not provide an indication of which recommendations were optimal.
The approach used for appraising our rapid review methods
We assessed whether our reviews, mainly completed within 2 months, aligned with our best practice framework, and whether methods aligned across our different collaborating partner groups. Findings were presented at a methods subgroup meeting and discussed to reflect on what worked well or could be improved (and how).
As part of this appraisal, key data from all rapid reviews and rapid evidence maps completed up until March 2023 were extracted. These included data on the search date, overall reviewing approach, limits applied, sources searched, volume of research identified, study selection process, data extraction process and approach used for quality assessment. An important consideration here is that the approach used depended on the research question being addressed, the volume and type of research available, and the timeframe within which the review was conducted.
Where the methods of individual reviews met or exceeded the recommendation in the best practice framework, the text was highlighted green; for recommendations that were either partially or not always met, the text was highlighted amber; and where our methods consistently did not meet the recommendation, the text was highlighted red. We did not seek to identify individual failures or the frequency with which our methods did not meet the recommendations, but to reflect on our overall process and methodological approach and identify what changes could be made. The colour-coded framework table was presented at a methods group meeting, and participants were given a copy of the data extraction table summarising individual reviews.
Results
Results of the appraisal of our methods
The comparison of the methods used in our reviews with the recommendations in the best practice framework is presented in Table 2 as an additional column to the best practice framework. The full details of the methods used within our rapid reviews and rapid evidence maps are available in the Supplementary Information, Additional file 2. The comparison of our methods with the range of recommendations identified in the scoping review of guidance conducted by Speckemeier et al. [25] is presented in Table 3.
We identified that our basic methods align with or exceed most recommendations for rapid reviews, notably for developing and refining the review question, the use of preliminary work to inform the scope, the searches, synthesis and report production (Table 2). A potential gap was that, although our reviews are based on pre-defined protocols, which are developed in collaboration with the stakeholders, these are not registered. However, our protocols are made available on request, which is noted in the reports.
Study selection and data extraction were conducted by two independent reviewers in some reviews, but were more usually conducted by a single reviewer with or without verification of a sample or of excluded citations/manuscripts. Quality assessment was based on critical appraisal or risk of bias tools specific to the study design(s), which agreed with most recommendations, but the assessment was often conducted by a single reviewer with or without verification of a sample. The selection of literature, data extraction and critical appraisal by a single reviewer meets the minimum requirements only [18], and verification of a sample, or the use of two independent reviewers, is generally recommended to reduce bias [7, 13, 18]. The assessment of confidence in the evidence base was generally subjective. The limited number of studies and diversity of outcomes reported in some reviews meant that the GRADE (Grading of Recommendations Assessment, Development and Evaluation) [26] assessment was applied to single studies. This was also the reason why some reviews did not include a GRADE assessment.
An important limitation identified in a minority of our earlier reviews is that the methodological shortcuts were not stated or clearly described. This is an important consideration for transparency and validity.
Reflection on our methods and reviewing approach and identification of key learning points
The output of the methods appraisal was shared with the review teams at a methods subgroup meeting. Members were also asked to reflect on their experience of the overall review process.
Aspects of the overall process that were thought to be working well included the stakeholder process for formulating relevant questions and the facilitation of the stakeholder meetings. The methodological discussions that ensued between the WCEC core team and the review team, on planning and conducting the proposed reviews, were also valued. These were felt to be beneficial for problem solving and learning from each other. The remote working and cross-Wales collaboration were also considered a strength, as were the published reports and impact strategy. Establishing a clear pathway to impact was also key for refining the review question. Both stages could be supported by a network of policy decision-makers with enhanced skills in question formulation and impact work.
Each review was completed by a dedicated collaborating partner team with a resource allocation equivalent to two full-time researchers plus some senior input time. Each collaborating partner had a slightly different set-up, and the resource allocation was subdivided among multiple reviewers in some teams. However, there was limited capacity to add further reviewer capacity where the review needed to be completed over a shorter interval, or when the extent of the literature was larger than anticipated. Rather, the overall process was designed to support restricting the scope of the review in close collaboration with the stakeholders, developing an initial evidence map and tailoring the review methods. The duration of the review could, however, be extended by about a month where the stakeholder timeframe allowed this. The collaborating partners included established research groups with expertise in systematic reviews, scoping or mapping reviews, rapid reviews and economic evaluation. The researchers conducting or leading the reviews were experienced reviewers, but inexperienced researchers were also given the opportunity to get involved and develop new skills. The review teams were also supported by a structured overall process, the use of reporting templates and regular methods group meetings.
Administrative support, and people's enthusiasm and commitment to the overall process, were paramount. For example, the timing between the preliminary and intermediate meetings was tight, and was achieved using various approaches depending on the review team and stakeholder requirements. These included, for example: checking at the start with stakeholders that they could still commit to the overall process; setting up a Doodle poll that covered sufficient dates to allow both meetings to be scheduled from the outset; asking for people's availability for the second meeting as part of the first online meeting; or circulating a separate short Doodle poll for individual meetings based on the availability of key people. The optimum approach was generally selected after the initial conversations with the stakeholder(s) and once the review team had been confirmed. However, the timing had to be extended in some reviews to account for additional requirements of the preliminary review or people's limited availability (e.g. due to sickness).
In terms of our methods, members acknowledged potential discrepancies between reviewers in allocating study descriptors, in particular for poorly reported or less robust study designs. The algorithm developed by Leatherdale [27] for assessing natural experiments and informing selection criteria was noted as a potential solution, requiring evaluation. The use of a single checklist for assessing risk of bias covering multiple study designs (addressing the same type of question) was considered potentially beneficial. However, using ROBINS-I [28], the validated checklist developed for any non-randomised comparative study of interventions, was considered challenging within the context of a rapid review and mainly applicable to identifying bias in studies assessing causal effects of interventions. Likewise, GRADE works best for assessing confidence in the overall body of evidence for interventions that have been evaluated by randomised trials and where there is at least one meta-analysis to provide a single estimate of the outcome effect [7]. Our reviews cover various forms of evidence, including intervention effects, prevalence, prognostic, diagnostic, economic, meaningfulness and the consequences of public health measures. The use of GRADE in very rapid reviews, in particular non-intervention reviews, was considered challenging, even though it is recommended for use in emergency settings, such as the COVID-19 pandemic [29]. Members acknowledged that it should be included where possible. It was acknowledged that adhering to the minimum standards, such as single-reviewer screening of the literature or data extraction, could lead to bias or inaccuracies. The need to adequately report the methodological shortcuts used and the limitations of the review was also reiterated.
The potential value of more in-depth reviews, closer to systematic reviews in methodology (and including for example, network meta-analysis, meta-ethnography or economic modelling), and taking longer to complete when required, was identified. The learning points are summarised in Box 3.
Discussion
Summary of the practice and its appraisal
The Wales COVID-19 Evidence Centre developed a review process that could flexibly react to the needs of decision-makers, to address urgent requests within days, weeks or months as required. For each review, the approach used, and methodological shortcuts applied, were tailored depending on the needs of the decision-maker, timeframe, and volume and type of evidence. A best practice framework, which integrates recommendations in key published guidance, was developed to support reviewers at each stage of the reviews.
We appraised our overall process and methods used in 27 rapid reviews and five rapid evidence maps. Our methods aligned with or exceeded most recommendations for conducting rapid reviews, particularly those for developing and refining the review question, undertaking preliminary work to inform the scope, conducting the searches, quality assessment, narrative synthesis and report production. However, our review protocols were not registered; study selection, data extraction and quality appraisal were generally conducted by a single reviewer; and the assessment of confidence in the evidence base was generally subjective.
The wider context of the literature
Several publications describe the rapid evidence review methods and overall processes used in other centres [16, 31, 32]. The guidance and methods described in these publications were also considered as part of a recent scoping review by Speckemeier et al. [25]. Our methods aligned with or exceeded the recommendations for methodological shortcuts most frequently reported in published guidance.
The trade-off in achieving speed and efficiency in conducting a rapid review is a reduction in the validity of the results and certainty in the evidence [25, 33]. However, empirical evidence of the impact of using specific methodological shortcuts is limited, and few shortcuts are used consistently in rapid reviews [4, 25, 33,34,35]. There is little consensus over which shortcuts could apply across different topic areas [4, 25, 33,34,35]. There is evidence showing that limiting the search strategy can increase the risk of selection, retrieval and publication bias [25]. The selection of literature and data extraction by a single reviewer can lead to relevant studies being missed and inaccuracies in data extraction [25, 33]. However, the extent of this impact varies depending on reviewer experience and research topic [25, 33, 36,37,38]. A crowd-based randomised trial [39] found that single-reviewer abstract screening missed on average 13% of relevant studies, and dual-reviewer screening missed 3% of relevant studies. It is important that the type and extent of the methodological shortcuts used are clearly reported, so that the extent of the potential bias and limitations of a review can be assessed.
The Cochrane Rapid Reviews Methods Group advocates that the essential element of success is early and ongoing engagement with the research requester to focus the rapid review and ensure that it is appropriate to the needs of stakeholders [7, 30, 33]. The stakeholder involvement process in our reviews was considered an important strength, facilitated by remote working and close collaboration between different research groups and organisations across Wales. A potential limitation of the appraisal of our methods is that we did not evaluate the views of the stakeholders and policy-makers involved in our reviews. Stakeholder satisfaction with our outputs, however, has been evaluated as part of our knowledge mobilisation process and impact assessment, which is reported separately [8].
Implications for future practice and research
Key learning points are summarised in Box 3. Our rapid review process was developed to meet urgent or rapid evidence needs during the COVID-19 pandemic. The same process could support rapid reviews with longer time frames (3–6 months), or more systematic reviews, to support policy decision-making. The longer the available timeframe, the more systematic review approaches can be used and the fewer methodological shortcuts are required.
Identifying a specific decision problem is an integral part of the review process. One of the key learning points identified was the need to enhance stakeholders' abilities to identify focused policy-relevant research questions. The importance of stakeholders in developing and refining the review question, eligibility criteria and outcomes of interest was highlighted by all the key sources included in the best practice framework. Further research is needed to identify the most appropriate methods of engaging stakeholders early in the process to identify evidence needs and how these translate into focused research questions.
A key limitation in our review process, and an important area for further research, is identifying, recording and managing financial conflicts of interest that stakeholders may have. We are not aware of any of our stakeholders having had a financial conflict of interest to date, but we did not routinely collect this information. Going forward, we will add an action at the start of each review, for example as part of the first stakeholder meeting, to request that stakeholders disclose any conflicts of interest they may have. Our reporting template includes a section on conflicts of interest, but this relates to the authors, not the stakeholders, whose input is generally listed under the acknowledgements. We will look to update our reporting template to comply with the new Reporting Conflicts of Interest and Funding in Health Care Guidelines: The RIGHT-COI&F Checklist, when it becomes available [40]. An ongoing systematic review of existing literature on conflict of interest issues when engaging with stakeholders (including public involvement) in healthcare guideline development, which is part of a wider research project undertaken by the Multistakeholder Engagement (MuSE) working group, will also help address the need for new guidance in this area [41, 42].
Further research is needed to assess the impact of using various methodological shortcuts on the validity of rapid review findings. Such research could also provide the basis for minimum standards to minimise inaccuracies and bias, in particular for non-intervention reviews.
The quality (or risk of bias) assessment provides important information on the trustworthiness of the results of included studies. Recent methodological advances in the field of risk of bias assessment (which focuses on internal validity) advocate a move away from the use of critical appraisal tools that cover additional concepts such as imprecision, external validity and reporting [28, 43]. They also recommend that the assessment occurs at domain level, supported by signalling questions, rather than using a checklist approach; an example is the ROBINS-I tool for non-randomised studies [28]. Existing reviews of quality assessment tools have identified numerous tools that can be used in systematic or rapid reviews, but few are designed to cover multiple study designs [44,45,46,47], and there is no consensus on the most appropriate tools for rapid reviews [33]. Further work is needed to explore the use of a single tool that covers multiple study designs in rapid reviews of intervention effects [44]. Further work is also needed to develop the optimal approach for selecting appropriate study design descriptors, in the context of a rapid review, for real-world natural experiments or quasi-randomised controlled trials. This is likely to be particularly pertinent when conducting a rapid review of service delivery or public health interventions.
Guidance is required on how to assess the certainty or confidence in the overall body of evidence where a GRADE (or GRADE-CERQual [48]) assessment is difficult. Although GRADE-based assessment of the certainty of evidence is recommended for Cochrane rapid reviews of interventions [49], it is acknowledged that this may not always be easy to implement within either the rapid review [7] or emergency preparedness [50] context.
Conclusions
Our bespoke review process enabled us to successfully address a high volume of review questions in a timely manner using a transparent and adaptable approach. The collaboration between established research teams in Wales and the strong stakeholder involvement embedded in the review process were considered particular strengths of the overall review process. A number of key learning points were identified, which focused on: enhancing stakeholders’ abilities to identify focused policy-relevant research questions; collecting and storing our review protocols at a central location; tightening our quality assurance process regarding study selection, data extraction and risk of bias assessment; piloting an algorithm for assigning study design descriptors; and incorporating, where appropriate, an assessment of the confidence in the overall body of evidence using GRADE or GRADE-CERQual in our reviews.
Availability of data and materials
All data relevant to the study are included in the article or uploaded as supplementary materials.
Abbreviations
- COVID-19: Coronavirus disease
- GRADE: Grading of Recommendations Assessment, Development and Evaluation
- GRADE-CERQual: GRADE-Confidence in the Evidence from Reviews of Qualitative research
- PPG: Public Partnership Group
- REM: Rapid evidence map
- RES: Rapid evidence summary
- RR: Rapid review
- SAGE: Strategic Advisory Group of Experts on Immunization
- TAC: Technical Advisory Cell
- TAG: Technical Advisory Group
- WCEC: Wales COVID-19 Evidence Centre
References
Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2.
Hamel C, Michaud A, Thuku M, Skidmore B, Stevens A, Nussbaumer-Streit B, Garritty C. Defining Rapid Reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol. 2021;129:74–85.
Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.
Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE. A scoping review of rapid review methods. BMC Med. 2015;13:224.
Hartling L, Guise JM, Kato E, Anderson J, Belinson S, Berliner E, Dryden DM, Featherstone R, Mitchell MD, Motu’apuaka M, et al. A taxonomy of rapid reviews links report types and methods to specific decision-making contexts. J Clin Epidemiol. 2015;68(12):1451–62.e3.
Tricco AC, Khalil H, Holly C, Feyissa G, Godfrey C, Evans C, Sawchuck D, Sudhakar M, Asahngwa C, Stannard D, et al. Rapid reviews and the methodological rigor of evidence synthesis: a JBI position statement. JBI Evid Synth. 2022;20(4):944–9.
Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, Affengruber L, Stevens A. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22.
Gal M, Cooper A, Doe E, Joseph-Williams N, Lewis R, Jane Law R-J, Anstey S, Davies N, Walters A, Greenwell J, et al. Knowledge mobilisation of rapid evidence reviews to inform health and social care policy and practice in a public health emergency: appraisal of the Wales COVID-19 Evidence Centre processes and impact, 2021–23. PLoS ONE. 2024;19(11): e0314461.
Joseph-Williams N, Cooper A, Lewis R, Greenwell G, Doe E, Gal M, Pearson N, Kumar R, Law R-J, Edwards A. Working with stakeholders to rapidly identify and prioritise COVID-19 health and social care evidence needs during the pandemic period: processes, results, and lessons from the Wales COVID-19 Evidence Centre. PREPRINT (Version 1) available at Research Square; 2023. https://doi.org/10.21203/rs.3.rs-3286253/v1.
Cooper A, Lewis R, Gal M, Doe E, Williams D, Strong A, Greenwell J, Watkins A, Law R-J, Joseph-Williams N, et al. Informing evidence-based policy during the COVID-19 pandemic and recovery period: Wales COVID-19 Evidence Centre. Global Health Res and Policy. 2024;9(1):18.
Mann M, Woodward A, Nelson A, Byrne A. Palliative Care Evidence Review Service (PaCERS): a knowledge transfer partnership. Health Res Policy Syst. 2019;17(1):100.
Lewis R, Hendry M, Din N, Stanciu MA, Nafees S, Hendry A, Teoh ZH, Lloyd T, Parsonage R, Neal RD, et al. Pragmatic methods for reviewing exceptionally large bodies of evidence: systematic mapping review and overview of systematic reviews using lung cancer survival as an exemplar. Syst Rev. 2019;8(1):171.
Tricco AC, Langlois EV, Straus SE. Rapid reviews to strengthen health policy and systems: a practical guide. World Health Organization; 2017. https://apps.who.int/iris/handle/10665/258698. Licence: CC BY-NC-SA 3.0 IGO. ISBN 978-92-4-151276-3.
Collins AM, Coughlin D, Miller J, Kirk S. The production of quick scoping reviews and rapid evidence assessments: a how to guide; 2015. https://nora.nerc.ac.uk/id/eprint/512448.
Dobbins M. Rapid review guidebook. Hamilton: National Collaborating Centre for Methods and Tools; 2017.
Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.
National Academies of Sciences, Engineering, and Medicine. Evidence-based practice for public health emergency preparedness and response. Washington, DC: National Academies Press (US); 2020.
Plüddemann A, Aronson JK, Onakpoya I, Heneghan C, Mahtani KR. Redefining rapid reviews: a flexible framework for restricted systematic reviews. BMJ Evid Based Med. 2018;23(6):201–3.
Wales COVID-19 Evidence Centre report library. https://healthandcareresearchwales.org/wales-covid-19-evidence-centre-report-library.
Okolie C, Rodriguez R, Wale A, Hookway A, Shaw H, Cooper A, Lewis R, Law R-J, Gal M, Greenwell J, et al. A rapid review of the effectiveness of innovations to support patients on elective surgical waiting lists. medRxiv. 2022. https://doi.org/10.1101/2022.06.10.22276151.
Edwards D, Csontos J, Gillen E, Carrier J, Lewis R, Cooper A, Gal M, Law R-J, Greenwell G, Edwards A. A rapid review of the effectiveness of interventions and innovations relevant to the Welsh NHS context to support recruitment and retention of clinical staff. medRxiv. 2022. https://doi.org/10.1101/2022.05.11.22274903.
White H, Albers B, Gaarder M, Kornør H, Littell J, Marshall Z, Mathew C, Pigott T, Snilstveit B, Waddington H, et al. Guidance for producing a Campbell evidence and gap map. Campbell Syst Rev. 2020;16(4): e1125.
Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, McInerney P, Godfrey CM, Khalil H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119–26.
Speckemeier C, Niemann A, Wasem J, Buchberger B, Neusser S. Methodological guidance for rapid reviews in healthcare: a scoping review. Res Synth Methods. 2022;13(4):394–404.
Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.
Leatherdale ST. Natural experiment methodology for research: a review of how different methods can support real-world research. Int J Soc Res Methodol. 2019;22(1):19–35.
Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355: i4919.
Schünemann HJ, Santesso N, Vist GE, Cuello C, Lotfi T, Flottorp S, Davoli M, Mustafa R, Meerpohl JJ, Alonso-Coello P, et al. Using GRADE in situations of emergencies and urgencies: certainty in evidence and recommendations matters during the COVID-19 pandemic, now more than ever and no matter what. J Clin Epidemiol. 2020;127:202–7.
Garritty C, Hamel C, Trivella M, Gartlehner G, Nussbaumer-Streit B, Devane D, Kamel C, Griebler U, King VJ. Updated recommendations for the Cochrane rapid review methods guidance for rapid reviews of effectiveness. BMJ. 2024;384: e076335.
Chambers D, Booth A, Rodgers M, Preston L, Dalton J, Goyder E, Thomas S, Parker G, Street A, Eastwood A. Evidence to support delivery of effective health services: a responsive programme of rapid evidence synthesis. Evidence & Policy. 2021;17(1):173–87.
Neil-Sztramko SE, Belita E, Traynor RL, Clark E, Hagerman L, Dobbins M. Methods to support evidence-informed decision-making in the midst of COVID-19: creation and evolution of a rapid review service from the National Collaborating Centre for Methods and Tools. BMC Med Res Methodol. 2021;21(1):231.
King VJ, Stevens A, Nussbaumer-Streit B, Kamel C, Garritty C. Paper 2: performing rapid reviews. Syst Rev. 2022;11(1):151.
Abou-Setta AM, Jeyaraman MM, Attia A, Al-Inany HG, Ferri M, Ansari MT, Garritty CM, Bond K, Norris SL. Methods for developing evidence reviews in short periods of time: a scoping review. PLoS ONE. 2016;11(12): e0165903.
Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN. What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review. Health Res Policy Syst. 2016;14(1):83.
Affengruber L, Wagner G, Waffenschmidt S, Lhachimi SK, Nussbaumer-Streit B, Thaler K, Griebler U, Klerings I, Gartlehner G. Combining abbreviated literature searches with single-reviewer screening: three case studies of rapid reviews. Syst Rev. 2020;9(1):162.
Taylor-Phillips S, Geppert J, Stinton C, Freeman K, Johnson S, Fraser H, Sutcliffe P, Clarke A. Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1. Res Synth Methods. 2017;8(4):475–84.
Waffenschmidt S, Knelangen M, Sieben W, Bühn S, Pieper D. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol. 2019;19(1):132.
Gartlehner G, Affengruber L, Titscher V, Noel-Storr A, Dooley G, Ballarini N, König F. Single-reviewer abstract screening missed 13 percent of relevant studies: a crowd-based, randomized controlled trial. J Clin Epidemiol. 2020;121:20–8.
Xun Y, Estill J, Ren M, Wang P, Yang N, Wang Z, Zhu Y, Su R, Chen Y, Akl EA. Developing the RIGHT-COI&F extension for the reporting conflicts of interest and funding in practice guidelines: study protocol. Ann Transl Med. 2022;10(12):717.
Khabsa J, Petkovic J, Riddle A, Lytvyn L, Magwood O, Atwere P, Campbell P, Katikireddi SV, Merner B, Nasser M, et al. PROTOCOL: Conflict of interest issues when engaging stakeholders in health and healthcare guideline development: a systematic review. Campbell Syst Rev. 2022;18(2): e1232.
Petkovic J, Magwood O, Lytvyn L, Khabsa J, Concannon TW, Welch V, Todhunter-Brown A, Palm ME, Akl EA, Mbuagbaw L, et al. Key issues for stakeholder engagement in the development of health and healthcare guidelines. Res Involv Engagem. 2023;9(1):27.
Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng HY, Corbett MS, Eldridge SM, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366: i4898.
Duval D, Pearce-Smith N, Palmer JC, Sarfo-Annin JK, Rudd P, Clark R. Critical appraisal in rapid systematic reviews of COVID-19 studies: implementation of the Quality Criteria Checklist (QCC). Syst Rev. 2023;12(1):55.
Ma L-L, Wang Y-Y, Yang Z-H, Huang D, Weng H, Zeng X-T. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better? Mil Med Res. 2020;7(1):7.
Quigley JM, Thompson JC, Halfpenny NJ, Scott DA. Critical appraisal of nonrandomized studies-a review of recommended and commonly used tools. J Eval Clin Pract. 2019;25(1):44–52.
Zeng X, Zhang Y, Kwong JS, Zhang C, Li S, Sun F, Niu Y, Du L. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8(1):2–10.
Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, Bohren MA, Tunçalp Ö, Colvin CJ, Garside R, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(Suppl 1):2.
Gartlehner G, Nussbaumer-Streit B, Devane D, Kahwati L, Viswanathan M, King VJ, Qaseem A, Akl E, Schuenemann HJ. Rapid reviews methods series: guidance on assessing the certainty of evidence. BMJ Evid Based Med. 2023. https://doi.org/10.1136/bmjebm-2022-112111.
Calonge N, Shekelle PG, Owens DK, Teutsch S, Downey A, Brown L, Noyes J. A framework for synthesizing intervention evidence from multiple sources into a single certainty of evidence rating: methodological developments from a US National Academies of Sciences, Engineering, and Medicine Committee. Res Synth Methods. 2023;14(1):36–51.
Acknowledgements
We are grateful for the engagement and support of the various stakeholders and members of our public partnership group in delivering our rapid evidence review work.
Funding
The Wales COVID-19 Evidence Centre is funded by Health and Care Research Wales through the Welsh Government.
Author information
Authors and Affiliations
Contributions
Conceptualization: R.L.; data curation and analysis: R.L. and D.J.; investigation: R.L., A.C., D.J., M.M., D.E., J.C., H.S., T.W., L.H.S., J.N., H.M., M.G., and R-J.L.; methodology: R.L., A.C., D.J., M.M., D.E., J.C., H.S., J.N., H.M., J.W., E.H., and R-J.L.; project administration: R.L.; supervision: R.L., A.C., N.J-W., and A.E.; writing – original draft: R.L.; writing – review & editing: R.L., A.C., D.J., M.M., D.E., J.C., H.S., T.W., L.H.S., J.N., H.M., J.W., E.H., M.G., E.D., R-J.L., N.J-W., and A.E.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
In line with guidance, ethical approval was not required for this study.
Consent for publication
Our manuscript does not contain personal data and does not require consent for publication.
Competing interests
R-J.L. is employed by the Welsh Government. The authors have no other conflicts of interest to declare.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lewis, R., Cooper, A., Jarrom, D. et al. A bespoke rapid evidence review process engaging stakeholders for supporting evolving and time-sensitive policy and clinical decision-making: reflection and lessons learned from the Wales COVID-19 Evidence Centre 2021–2023. Health Res Policy Sys 23, 36 (2025). https://doi.org/10.1186/s12961-025-01297-w
DOI: https://doi.org/10.1186/s12961-025-01297-w