Prabhakar Veginadu
1 Department of Rural Clinical Sciences, La Trobe Rural Health School, La Trobe University, Bendigo, Victoria, Australia
2 Lincoln International Institute for Rural Health, University of Lincoln, Brayford Pool, Lincoln, UK
3 Department of Orthodontics, Saveetha Dental College, Chennai, Tamil Nadu, India
Associated Data
APPENDIX B: List of excluded studies with detailed reasons for exclusion
APPENDIX C: Quality assessment of included reviews using AMSTAR 2
The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.
The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening was performed in two stages by one reviewer, supported by a second reviewer. Full-text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.
The search retrieved 41,556 unique citations, of which 9 SRs were deemed eligible for inclusion in the final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following: (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non-English, gray, and unpublished literature; and (c) using text-mining approaches during title and abstract screening.
The overview identified limited SR-level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as on some methodological modifications currently used in expedited SRs. Overall, the findings of this overview highlight the dearth of published SRs focused on SR methodologies, warranting future work in this area.
Evidence synthesis is a prerequisite for knowledge translation. 1 A well-conducted systematic review (SR), often in conjunction with meta-analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in the SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7
The methodological rigor involved in an SR can require a significant amount of time and resources, which may not always be available. 9 As a result, there has been a proliferation of modifications made to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limiting the number and type of databases searched, restricting by publication date, language, and types of studies included, and using one reviewer for screening and selection of studies, as opposed to two or more reviewers. 10 , 11 These methodological modifications are made to accommodate the needs and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time and resource efficient, they may introduce bias into the review process, reducing the usefulness of the review. 5
Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. A number of published reviews have examined the approaches or modifications corresponding to single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR-level evidence for all seven fundamental steps of an SR. Such a holistic evidence synthesis would provide an empirical basis to confirm the validity of currently accepted practices in the conduct of SRs. Furthermore, a balance must sometimes be struck between resource availability and the need to synthesize the evidence as well as possible within those constraints. This evidence base would also inform the choice of modifications to be made to SR methods, as well as the potential impact of these modifications on SR results. An overview is considered the approach of choice for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.
An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3
Only published SRs, with or without an associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of the validity of the findings of included studies (e.g., RoB assessment of the included studies); and (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered an SR for this study and were excluded. MA-only articles were included if they stated that the MA was based on an SR.
SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA could also relate to the substeps or elements of these steps; for example, applying limits on date or type of publication is an element of the literature search. Included SRs examined or compared various aspects of a method or methods and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of SR steps or bias introduced; and time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta-analyses were excluded. 14
The search for published SRs was performed in the following scientific databases, initially from inception to the third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. The search was restricted to English-language publications. In line with the objectives of this study, study design filters within databases were used to restrict the search to SRs and MA, where available. The reference lists of included SRs were also searched for potentially relevant publications.
The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on methodological approaches used in study selection, data extraction, and quality appraisal steps of SR process. 14 Individual search strategies were developed for respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings in order to identify SRs and/or MA. 16 , 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.
Title and abstract screening of references was performed in three steps. First, one reviewer (PV) screened all the titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs and non-SR publications (such as randomized controlled trials, observational studies, scoping reviews, etc.). Next, from the remaining citations, a random sample of 200 titles and abstracts was screened against the predefined eligibility criteria by two reviewers (PV and MM), independently and in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single "calibrated" reviewer (PV) to identify potential full-text records. Full-text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussion or by consulting a third reviewer.
Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).
The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
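For readers unfamiliar with how AMSTAR 2 responses roll up into a single rating, the decision rule described in the tool's guidance can be sketched as follows. This is an illustrative summary of the published rule (Shea et al., 2017), not code used in this overview, and the function name and inputs are our own.

```python
# Illustrative sketch of the AMSTAR 2 overall-confidence rule: the rating depends
# on how many critical flaws and non-critical weaknesses a review has.
# Verify against the tool guidance before reuse.
def amstar2_overall_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    if critical_flaws > 1:
        return "critically low"  # more than one critical flaw
    if critical_flaws == 1:
        return "low"             # one critical flaw, with or without non-critical weaknesses
    if noncritical_weaknesses > 1:
        return "moderate"        # no critical flaws, more than one non-critical weakness
    return "high"                # no critical flaws, at most one non-critical weakness

# Example: a review with one critical flaw (e.g., no list of excluded studies)
print(amstar2_overall_rating(critical_flaws=1, noncritical_weaknesses=2))  # -> low
```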
To provide an understandable summary of existing evidence syntheses, characteristics of the methods evaluated in the included SRs were examined and key findings were categorized and presented based on the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion on any overlap in the primary studies. 15 No quantitative analyses of the data were performed.
From the 41,556 unique citations identified through the literature search, 50 full-text records were reviewed, and nine systematic reviews 14 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure 1. A list of excluded studies with reasons can be found in Appendix B.
Study selection flowchart
Table 1 summarizes the characteristics of the included SRs. The majority of the included reviews (six of nine) were published after 2010. 14 , 22 , 23 , 24 , 25 , 26 Four of the nine included SRs were Cochrane reviews. 20 , 21 , 22 , 23 The number of databases searched in the reviews ranged from 2 to 14; two reviews searched gray literature sources, 24 , 25 and seven reviews included a supplementary search strategy to identify relevant literature. 14 , 19 , 20 , 21 , 22 , 23 , 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20 , 21 , 23
Characteristics of included studies
Author, year | Search strategy (year last searched; no. databases; supplementary searches) | SR design (type of review; no. of studies included) | Topic; subject area | SR objectives | SR authors’ comments on study quality |
---|---|---|---|---|---|
Crumley, 2005 | 2004; seven databases; four journals handsearched, reference lists and contacting authors | SR; n = 64 | RCTs and CCTs; not specified | To identify and quantitatively review studies comparing two or more different resources (e.g., databases, Internet, handsearching) used to identify RCTs and CCTs for systematic reviews. | Most of the studies adequately described reproducible search methods and expected search yield. Poor quality in studies was mainly due to lack of rigor in reporting selection methodology. Majority of the studies did not indicate the number of people involved in independently screening the searches or applying eligibility criteria to identify potentially relevant studies. |
Hopewell, 2007 | 2002; eight databases; selected journals and published abstracts handsearched, and contacting authors | SR and MA; n = 34 (34 in quantitative analysis) | RCTs; health care | To review systematically empirical studies, which have compared the results of handsearching with the results of searching one or more electronic databases to identify reports of randomized trials. | The electronic search was designed and carried out appropriately in the majority of the studies, while the appropriateness of handsearching was unclear in half the studies because of limited information. The study screening methods used in both groups were comparable in most of the studies. |
Hopewell, 2007 | 2005; two databases; selected journals and published abstracts handsearched, reference lists, citations and contacting authors | SR and MA; n = 5 (5 in quantitative analysis) | RCTs; health care | To review systematically research studies, which have investigated the impact of gray literature in meta-analyses of randomized trials of health care interventions. | In the majority of the studies, electronic searches were designed and conducted appropriately, and the selection of studies for eligibility was similar for handsearching and database searching. Insufficient data for most studies to assess the appropriateness of handsearching and investigator agreement on the eligibility of the trial reports. |
Horsley, 2011 | 2008; three databases; reference lists, citations and contacting authors | SR; n = 12 | Any topic or study area | To investigate the effectiveness of checking reference lists for the identification of additional, relevant studies for systematic reviews. Effectiveness is defined as the proportion of relevant studies identified by review authors solely by checking reference lists. | Interpretability and generalizability of included studies was difficult. Extensive heterogeneity among the studies in the number and type of databases used. Lack of control in majority of the studies related to the quality and comprehensiveness of searching. |
Morrison, 2012 | 2011; six databases and gray literature | SR; n = 5 | RCTs; conventional medicine | To examine the impact of English language restriction on systematic review-based meta-analyses. | The included studies were assessed to have good reporting quality and validity of results. Methodological issues were mainly noted in the areas of sample power calculation and distribution of confounders. |
Robson, 2019 | 2016; three databases; reference lists and contacting authors | SR; n = 37 | N/R | To identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. | The quality of the included studies was generally low. Only one study was assessed as having low RoB across all four domains. Majority of the studies were assessed as having unclear RoB across one or more domains. |
Schmucker, 2017 | 2016; four databases; reference lists | SR; n = 10 | Study data; medicine | To assess whether the inclusion of data that were not published at all and/or published only in the gray literature influences pooled effect estimates in meta-analyses and leads to different interpretation. | Majority of the included studies could not be judged on the adequacy of matching or adjusting for confounders of the gray/unpublished data in comparison to published data. Also, generalizability of results was low or unclear in four research projects. |
Morissette, 2011 | 2009; five databases; reference lists and contacting authors | SR and MA; n = 6 (5 included in quantitative analysis) | N/R | To determine whether blinded versus unblinded assessments of risk of bias result in similar or systematically different assessments in studies included in a systematic review. | Four studies had unclear risk of bias, while two studies had high risk of bias. |
O'Mara-Eves, 2015 | 2013; 14 databases and gray literature | SR; n = 44 | N/R | To gather and present the available research evidence on existing methods for text mining related to the title and abstract screening stage in a systematic review, including the performance metrics used to evaluate these technologies. | Quality was appraised based on two criteria: sampling of test cases and adequacy of methods description for replication. No study was excluded based on quality (author contact). |
SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported.
The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; eight SRs evaluated six approaches, 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 while one review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21 , 26 and blinding of reviewers for RoB assessment 14 , 23 were evaluated in two reviews each. Included SRs evaluated methods used in five different steps of the SR process: defining the scope of the review ( n = 3), literature search ( n = 3), study selection ( n = 2), data extraction ( n = 1), and RoB assessment ( n = 2) (Table 2 ).
Summary of findings from reviews evaluating systematic review methods
Key elements | Author, year | Method assessed | Evaluations/outcomes (P—primary; S—secondary) | Summary of SR authors’ conclusions | Quality of review |
---|---|---|---|---|---|
Excluding study data based on publication status | Hopewell, 2007 | Gray vs. published literature | Pooled effect estimate | Published trials are usually larger and show an overall greater treatment effect than gray trials. Excluding trials reported in gray literature from SRs and MAs may exaggerate the results. | Moderate |
 | Schmucker, 2017 | Gray and/or unpublished vs. published literature | P: Pooled effect estimate; S: Impact on interpretation of MA | Excluding unpublished trials had no or only a small effect on the pooled estimates of treatment effects. Insufficient evidence to conclude the impact of including unpublished or gray study data on MA conclusions. | Moderate |
Excluding study data based on language of publication | Morrison, 2012 | English language vs. non-English language publications | P: Bias in summary treatment effects; S: Number of included studies and patients, methodological quality, and statistical heterogeneity | No evidence of a systematic bias from the use of English language restrictions in systematic review-based meta-analyses in conventional medicine. Conflicting results on the methodological and reporting quality of English and non-English language RCTs. Further research required. | Low |
Resources searching | Crumley, 2005 | Two or more resources searching vs. resource-specific searching | Recall and precision | Multiple-source comprehensive searches are necessary to identify all RCTs for a systematic review. For electronic databases, using the Cochrane HSS or complex search strategy in consultation with a librarian is recommended. | Critically low |
Supplementary searching | Hopewell, 2007 | Handsearching only vs. one or more electronic database(s) searching | Number of identified randomized trials | Handsearching is important for identifying trial reports for inclusion in systematic reviews of health care interventions published in nonindexed journals. Where time and resources are limited, the majority of the full English-language trial reports can be identified using a complex search or the Cochrane HSS. | Moderate |
 | Horsley, 2011 | Checking reference list (no comparison) | P: Additional yield of checking reference lists; S: Additional yield by publication type, study design or both, and data pertaining to costs | There is some evidence to support the use of checking reference lists to complement literature search in systematic reviews. | Low |
Reviewer characteristics | Robson, 2019 | Single vs. double reviewer screening | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | Using two reviewers for screening is recommended. If resources are limited, one reviewer can screen, and the other reviewer can verify the list of excluded studies. | Low |
 | | Experienced vs. inexperienced reviewers for screening | | Screening must be performed by experienced reviewers | |
 | | Screening by blinded vs. unblinded reviewers | | Authors do not recommend blinding of reviewers during screening as the blinding process was time-consuming and had little impact on the results of MA | |
Use of technology for study selection | Robson, 2019 | Use of dual computer monitors vs. nonuse of dual monitors for screening | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | There are no significant differences in the time spent on abstract or full-text screening with the use and nonuse of dual monitors | Low |
 | | Use of Google Translate to translate non-English citations to facilitate screening | | Use of Google Translate to screen German language citations | |
 | O'Mara-Eves, 2015 | Use of text mining for title and abstract screening | Any evaluation concerning workload reduction | Text mining approaches can be used to reduce the number of studies to be screened, increase the rate of screening, improve the workflow with screening prioritization, and replace the second reviewer. The evaluated approaches reported saving a workload of between 30% and 70% | Critically low |
Order of screening | Robson, 2019 | Title-first screening vs. title-and-abstract simultaneous screening | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | Title-first screening showed no substantial gain in time when compared to simultaneous title and abstract screening. | Low |
Reviewer characteristics | Robson, 2019 | Single vs. double reviewer data extraction | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | Use two reviewers for data extraction. Single reviewer data extraction followed by verification of outcome data by a second reviewer (where statistical analysis is planned), if resources preclude | Low |
 | | Experienced vs. inexperienced reviewers for data extraction | | Experienced reviewers must be used for extracting continuous outcomes data | |
 | | Data extraction by blinded vs. unblinded reviewers | | Authors do not recommend blinding of reviewers during data extraction as it had no impact on the results of MA | |
Use of technology for data extraction | | Use of dual computer monitors vs. nonuse of dual monitors for data extraction | | Using two computer monitors may improve the efficiency of data extraction | |
 | | Data extraction by two English reviewers using Google Translate vs. data extraction by two reviewers fluent in respective languages | | Google Translate provides limited accuracy for data extraction | |
 | | Computer-assisted vs. double reviewer extraction of graphical data | | Use of computer-assisted programs to extract graphical data | |
Obtaining additional data | | Contacting study authors for additional data | | Recommend contacting authors for obtaining additional relevant data | |
Reviewer characteristics | Robson, 2019 | Quality appraisal by blinded vs. unblinded reviewers | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | Inconsistent results on RoB assessments performed by blinded and unblinded reviewers. Blinding reviewers for quality appraisal not recommended | Low |
 | Morissette, 2011 | Risk of bias (RoB) assessment by blinded vs. unblinded reviewers | P: Mean difference and 95% confidence interval between RoB assessment scores; S: Qualitative level of agreement, mean RoB scores and measures of variance for the results of the RoB assessments, and inter-rater reliability between blinded and unblinded reviewers | Findings related to the difference between blinded and unblinded RoB assessments are inconsistent from the studies. Pooled effects show no differences in RoB assessments for assessments completed in a blinded or unblinded manner. | Moderate |
 | Robson, 2019 | Experienced vs. inexperienced reviewers for quality appraisal | P: Accuracy, reliability, or efficiency of a method; S: Factors affecting accuracy or reliability of a method | Reviewers performing quality appraisal must be trained. Quality assessment tool must be pilot tested. | Low |
 | | Use of additional guidance vs. nonuse of additional guidance for quality appraisal | | Providing guidance and decision rules for quality appraisal improved the inter-rater reliability in RoB assessments. | |
Obtaining additional data | | Contacting study authors for obtaining additional information/use of supplementary information available in the published trials vs. no additional information for quality appraisal | | Additional data related to study quality obtained by contacting study authors improved the quality assessment. | |
RoB assessment of qualitative studies | | Structured vs. unstructured appraisal of qualitative research studies | | Use of a structured tool if qualitative and quantitative study designs are included in the review. For qualitative reviews, either a structured or unstructured quality appraisal tool can be used. | |
There was some overlap in the primary studies evaluated in the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 ( n = 4), Hopewell et al. 20 and Crumley et al. 19 ( n = 30), and Robson et al. 14 and Morissette et al. 23 ( n = 4). There were no conflicting results between any of the identified SRs on the same topic.
Overall, the quality of the included reviews was assessed as moderate at best (Table 2 ). The most common critical weakness in the reviews was failure to provide justification for excluding individual studies (four reviews). Detailed quality assessment is provided in Appendix C .
3.3.1. Methods for defining review scope and eligibility
Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21 , 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MA of RCTs in health care interventions. Gray literature was defined as information published in "print or electronic sources not controlled by commercial or academic publishers." Findings showed an overall greater treatment effect for published trials than for trials reported in gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data, defined as "supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public." The review found that in the majority of the MA, excluding gray literature had little or no effect on the pooled effect estimates. The evidence was too limited to conclude whether data from gray and unpublished literature had an impact on the conclusions of MA. 26
Morrison et al. 24 examined five studies measuring the effect of excluding non-English language RCTs on the summary treatment effects of SR-based MA in various fields of conventional medicine. Although none of the included studies reported a major difference in the treatment effect estimates between English-only and non-English-inclusive MA, the review found inconsistent evidence regarding the methodological and reporting quality of English and non-English trials. 24 As such, there might be a risk of introducing "language bias" when excluding non-English language RCTs. The authors also noted that the number of non-English trials varies across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches must include non-English studies, when resources and time are available, to minimize the risk of introducing "language bias."
Crumley et al. 19 analyzed recall (also referred to as "sensitivity" by some researchers; defined as "percentage of relevant studies identified by the search") and precision (defined as "percentage of studies identified by the search that were relevant") when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE-only search with a search involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precisions (median values between 0% and 49%) for most of the electronic databases when searched individually. 19 A between-database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive search strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
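Restating the definitions quoted above in formula form (our notation, not Crumley et al.'s):

\[
\text{recall} = \frac{\text{relevant studies retrieved by the search}}{\text{all relevant studies}},
\qquad
\text{precision} = \frac{\text{relevant studies retrieved by the search}}{\text{all studies retrieved by the search}}
\]

Low recall means relevant studies are missed entirely; low precision means many of the retrieved records are irrelevant and must be screened out.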
In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%−100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, type of electronic search strategy used (e.g., simple, complex or CHSSS), and type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20
The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% more studies depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor quality studies, to support use of checking reference lists to supplement database searching. 22
Three approaches relevant to reviewer characteristics, namely the number, experience, and blinding of reviewers involved in the screening process, were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 A modified approach has also been suggested by the review authors, in which one reviewer screens and the other reviewer verifies the list of excluded studies when resources are limited. It should be noted, however, that this suggestion is likely based on the authors' opinion, as there was no evidence related to this from the studies included in the review.
Robson et al. 14 also reported on two methods describing the use of technology for screening studies: use of Google Translate for translating languages (for example, German-language articles to English) to facilitate screening was considered a viable method, while using two computer monitors for screening did not increase screening efficiency in SRs. Title-first screening was found to be more efficient than simultaneous screening of titles and abstracts, although the gain in time with the title-first approach was small. Therefore, considering that search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on a very limited number of low-quality studies (in most instances, one study per method). 14
Robson et al. 14 examined three approaches for data extraction relevant to reviewer characteristics, namely the number, experience, and blinding of reviewers (similar to the study selection step). Although based on limited evidence from a small number of studies, the authors recommended the use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous outcomes (or quantitative) data. However, when resources are limited, data extraction by one reviewer with verification of the outcome data by a second reviewer was recommended.
As for the methods involving use of technology, Robson et al. 14 identified limited evidence on the use of two monitors to improve the data extraction efficiency and computer‐assisted programs for graphical data extraction. However, use of Google Translate for data extraction in non‐English articles was not considered to be viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors for obtaining additional relevant data.
Two SRs examined the impact of blinding of reviewers for RoB assessments. 14 , 23 Morissette et al. 23 investigated the mean differences between the blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies providing no definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23
Use of experienced reviewers and provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing reviewers with intensive training and guidance on assessing studies that report insufficient data improves RoB assessments. 14 Obtaining additional data related to quality assessment by contacting study authors was also found to help the RoB assessments, although this was based on limited evidence. When assessing qualitative or mixed-methods reviews, Robson et al. 14 recommend the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on the data synthesis or CoE assessment and reporting steps.
4.1. Summary of findings
Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best and in the majority of the included SRs, evidence related to the evaluated methods was obtained from very limited numbers of primary studies. As such, the interpretations from these SRs should be made cautiously.
The evidence gathered from the included SRs corroborates a few current SR approaches. 5 For example, it is important to search multiple resources for identifying relevant trials (RCTs and/or CCTs). The resources must include a combination of electronic database searching, handsearching, and reference lists of retrieved articles. 5 However, no SRs were identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic interventions retrieved by searching databases other than PubMed (including Embase) contributed only a small amount of information to the MA and also had a minimal impact on the MA results. The authors concluded that when resources are limited and a large number of studies is expected to be retrieved for the SR or MA, a PubMed-only search can yield reliable results. 27
Findings from the included SRs also reiterate some methodological modifications currently employed to "expedite" the SR process. 10 , 11 For example, excluding non-English language trials and gray/unpublished trials from MA has been shown to have minimal or no impact on the results of MA. 24 , 26 However, the efficiency of these SR methods, in terms of the time and resources used, has not been evaluated in the included SRs. 24 , 26 Of the SRs included, only two focused on the aspect of efficiency 14 , 25 ; O'Mara-Eves et al. 25 report some evidence to support the use of text-mining approaches for title and abstract screening in order to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated the reliability (inter- or intra-reviewer consistency) and accuracy (validity when compared against a "gold standard" method) of SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes. 14 The lack of outcome measures related to reliability, accuracy, and efficiency precludes making definitive recommendations on the use of these methods/modifications, and future research should focus on these outcomes.
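To make the idea of screening prioritization concrete, the sketch below shows one common pattern: represent titles and abstracts with TF-IDF features, fit a simple classifier on records that have already been screened, and rank the remaining records so likely includes are seen first. This is only an illustration of the general approach under our own assumptions (toy records, scikit-learn components); it does not reproduce any specific tool evaluated by O'Mara-Eves et al. 25

```python
# Illustrative screening-prioritisation sketch (not a tool from the included SRs):
# TF-IDF features plus logistic regression, used to rank unscreened records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

screened_texts = [
    "RCT of fluoride varnish for caries prevention in children",
    "Qualitative study of clinician attitudes to guidelines",
]
screened_labels = [1, 0]  # 1 = include, 0 = exclude (toy labels)

unscreened_texts = [
    "Randomised trial of silver diamine fluoride in primary teeth",
    "Editorial on dental workforce shortages",
]

vectoriser = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train = vectoriser.fit_transform(screened_texts)
X_pool = vectoriser.transform(unscreened_texts)

model = LogisticRegression(max_iter=1000).fit(X_train, screened_labels)

# Rank the unscreened pool so records most likely to be included are screened first.
scores = model.predict_proba(X_pool)[:, 1]
for score, text in sorted(zip(scores, unscreened_texts), reverse=True):
    print(f"{score:.2f}  {text}")
```

In practice such models are usually retrained as screening decisions accumulate (an active learning loop), rather than fit once as in this toy example.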
Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non-English language studies) can be outlined in the a priori eligibility criteria or incorporated as limits in the search strategy. The SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the results of a comprehensive search on the basis of eligibility criteria may yield different results than limiting the search itself. 28 Further studies are required to examine this aspect.
Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion on the quality of the included studies. The methods investigated in these reviews concern supplementary search, 29 data extraction, 12 and screening. 13 However, methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 have also been examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study included in Mathes et al. 12 and Waffenschmidt et al. 13 each), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14
One of the key gaps in knowledge observed in this overview was the dearth of SRs on the methods used in the data synthesis component of SRs. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30 , 31 These studies found that both data synthesis methods produced comparable results and have their own advantages, suggesting that the choice of method should be based on the purpose of the review. 31 With an increasing number of "expedited" SR approaches (so-called "rapid reviews") avoiding MA, 10 , 11 further research is warranted in this area to determine the impact of the type of data synthesis on the results of an SR.
The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on methods used in two important components of the SR process, namely data synthesis and CoE assessment and reporting. As for the included SRs, only a limited number of evaluation studies were identified for several methods. This indicates that further research is required to corroborate many of the methods recommended in current SR guidelines. 4 , 5 , 6 , 7 Second, some SRs evaluated the impact of methods only on the results of quantitative synthesis and MA conclusions. Future research studies should also focus on the interpretation of SR results. 28 , 32 Finally, most of the included SRs were conducted on specific topics related to the field of health care, limiting the generalizability of the findings to other areas. It is important that future research evaluating evidence syntheses broadens its objectives and includes studies on different topics within the field of health care.
To our knowledge, this is the first overview summarizing current evidence from SRs and MA on different methodological approaches used in several fundamental steps in SR conduct. The overview methodology followed well established guidelines and strict criteria defined for the inclusion of SRs.
There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. Also, the majority of the included SRs may be considered outdated, as they were published (or last updated) more than 5 years ago 33 ; only three of the nine SRs were published in the last 5 years. 14 , 25 , 26 Therefore, important and recent evidence related to these topics may not have been included. A substantial number of the included SRs were conducted in the field of health, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused only on quantitative analysis components and MA conclusions. As such, the applicability of these findings to SRs more broadly is still unclear. 28 Considering the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing some relevant evidence from reviews without a quality assessment component. 12 , 13 , 29 Although the included SRs performed some form of quality appraisal of the included studies, most of them did not use a standardized RoB tool, which may affect the confidence in their conclusions. Due to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.
Some limitations in the overview process must be noted. While our literature search was extensive, covering five bibliographic databases and a supplementary search of reference lists, no gray literature sources or other evidence resources were searched. Also, the search was primarily conducted in health databases, which might have resulted in missing SRs published in other fields. Moreover, only English-language SRs were included, for feasibility. As the literature search retrieved a large number of citations (41,556), title and abstract screening was performed by a single reviewer, calibrated for consistency in the screening process by another reviewer, owing to time and resource limitations. This might have resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on key elements of each recommended SR step, as agreed by the authors. This categorization pertains to the identified set of methods and should be considered subjective.
This overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, findings highlight the dearth of SRs on SR methodologies, warranting further work to confirm several current recommendations on conventional and expedited SR processes.
The authors declare no conflicts of interest.
APPENDIX A: Detailed search strategies
The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.
Open Access Funding provided by La Trobe University.
Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. doi:10.1111/jebm.12468
A Systematic Literature Review (SLR) template is a structured framework used for conducting and documenting a systematic review of existing research studies on a specific topic or research question. Systematic literature reviews are commonly used in academic and research settings to provide a comprehensive and unbiased summary of the available literature on a particular subject. Here's a template for conducting a systematic literature review:
Systematic Literature Review (SLR) Template
1. Title:
Provide a clear and descriptive title for your systematic literature review.
2. Objective:
State the main research question or objectives of the systematic literature review.
3. Inclusion and Exclusion Criteria:
Define the criteria for selecting and excluding studies. This may include criteria related to publication date, study design, geographic location, and relevance to the research question.
4. Search Strategy:
Describe the search strategy used to identify relevant studies, including databases searched, search terms, and any filters or limits applied.
5. Study Selection Process:
Outline the process for screening and selecting studies, including how duplicates were handled and the number of reviewers involved (a minimal duplicate-removal sketch is shown after this template).
6. Data Extraction:
Specify the data extraction process, including the data items collected from each selected study (e.g., author, publication year, study design, key findings).
7. Quality Assessment:
Explain how the quality or risk of bias of the included studies was assessed (e.g., using quality assessment tools or scales).
8. Data Synthesis:
Describe how the data from the selected studies were synthesized and analyzed. This may include narrative synthesis, meta-analysis, or thematic analysis.
9. Results:
Present the main findings of the systematic literature review, including key themes, trends, and conclusions drawn from the included studies.
10. Discussion:
Interpret the results in the context of the research question and objectives. Discuss the implications of the findings and any limitations of the review.
11. Conclusion:
Summarize the main contributions of the systematic literature review and provide recommendations for future research or practice.
12. References:
List all the studies included in the systematic literature review following a consistent citation style (e.g., APA, MLA).
13. Appendices:
Include any supplementary materials, such as flowcharts of the study selection process or data extraction forms.
14. Acknowledgments:
If applicable, acknowledge individuals or organizations that provided support or assistance during the review process.
This template can serve as a guide for conducting and documenting a systematic literature review in a structured and transparent manner. Adapt it to your specific research topic and requirements, and ensure that your systematic literature review adheres to established guidelines and best practices in the field.
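As a small companion to item 5 above (handling duplicates), the sketch below shows one simple way records exported from several databases are often deduplicated: match on DOI when present, otherwise on a normalised title. It is an illustrative assumption of how such a step might be scripted, not part of any standard template; the record fields and sample data are invented.

```python
# Minimal sketch of duplicate removal across database exports: match on DOI when
# present, otherwise on a normalised title. Record fields are illustrative.
import re

def normalise_title(title: str) -> str:
    """Lower-case the title and collapse punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list) -> list:
    seen_keys = set()
    unique = []
    for record in records:
        key = record.get("doi") or normalise_title(record.get("title", ""))
        if key and key not in seen_keys:
            seen_keys.add(key)
            unique.append(record)
    return unique

records = [
    {"doi": "10.1111/jebm.12468", "title": "An overview of methodological approaches in systematic reviews"},
    {"doi": "10.1111/jebm.12468", "title": "An Overview of Methodological Approaches in Systematic Reviews."},
    {"doi": None, "title": "A different record without a DOI"},
]
print(len(deduplicate(records)))  # 2: the first two records share a DOI
```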
The UT Southwestern Librarians provide two levels of Evidence Synthesis/Systematic Review (ES/SR) support.
Systematic Review – seeks to systematically search for, appraise and synthesize research evidence on a specific question, often adhering to guidelines on the conduct of a review.
Meta-analysis – a technique that statistically combines the results of quantitative studies to provide a more precise effect of the results. A good systematic review is essential to a meta-analysis of the literature.
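As a standard illustration of what "statistically combines the results" means (a textbook fixed-effect formula, not a method prescribed by any guideline named on this page), individual study estimates $\hat{\theta}_i$ with variances $v_i$ are pooled by inverse-variance weighting:

\[
\hat{\theta} = \frac{\sum_i w_i\,\hat{\theta}_i}{\sum_i w_i},
\qquad w_i = \frac{1}{v_i},
\qquad \operatorname{Var}\big(\hat{\theta}\big) = \frac{1}{\sum_i w_i}
\]

Random-effects models follow the same pattern but add an estimated between-study variance to each $v_i$, giving wider, more conservative confidence intervals when studies disagree.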
Standards (see the Books tab) and guidelines have been developed on how to conduct and report systematic reviews and meta-analyses.
The Cochrane Library includes:
Should I undertake a scoping review or a systematic review? (Ask JBI) on YouTube (12:43).
This section covers preliminary searching, creating a search strategy, identifying synonyms & related terms, keywords vs. index terms, combining search terms using Boolean operators, an SR search strategy, and search limits.
Depending on your topic, you may be able to save time in constructing your search by using specific search filters (also called "hedges") developed & validated by researchers. Validated filters include:
In many literature reviews, you try to balance the sensitivity of the search (how many potentially relevant articles you find) and specificity (how many definitely relevant articles you find), realizing that you will miss some. In an evidence synthesis, you want a very sensitive search: you are trying to find all potentially relevant articles. An evidence synthesis search will:
PICO is a good framework to help clarify your systematic review question.
P - Patient, Population or Problem: What are the important characteristics of the patients &/or problem?
I - Intervention: What do you plan to do for the patient or problem?
C - Comparison: What, if anything, is the alternative to the intervention?
O - Outcome: What is the outcome that you would like to measure?
Beyond PICO: the SPIDER tool for qualitative evidence synthesis.
5-SPICE: the application of an original framework for community health worker program design, quality improvement and research agenda setting.
Conducting preliminary, exploratory searching is an important part of any literature review. While there is often a desire to quickly begin crafting a final search for a review question, spending time on preliminary searches is crucial for your search.
Sentinel articles
A well constructed search strategy is the core of your evidence synthesis and will be reported on in the methods section of your paper. The search strategy retrieves the majority of the studies you will assess for eligibility & inclusion. The quality of the search strategy also affects what items may have been missed. Informationists can be partners in this process.
For an evidence synthesis, it is important to broaden your search to maximize the retrieval of relevant results.
Use keywords: how might other people describe the topic?
Identify the appropriate index terms (subject headings) for your topic.
Include spelling variations (e.g., behavior, behaviour).
Both types of search terms are useful & both should be used in your search.
Keywords help to broaden your results. They will be searched for at least in journal titles, author names, article titles, & article abstracts. They can also be tagged to search all text.
Index/subject terms help to focus your search appropriately, looking for items that have had a specific term applied by an indexer.
Boolean operators let you combine search terms in specific ways to broaden or narrow your results.
An example of a search string for one concept in a systematic review is given below.
In this example of PubMed syntax, [mh] indicates a MeSH (subject heading) term and [tiab] a Title/Abstract term, a more focused version of a keyword search.
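The original example string is not reproduced here; an illustrative string of the same shape for a single concept (using dental caries purely as a hypothetical topic) might be:

("Dental Caries"[mh] OR "dental caries"[tiab] OR "tooth decay"[tiab] OR caries[tiab])

The OR operator gathers the subject heading and its keyword synonyms into one concept block; separate concept blocks are then combined with AND.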
A typical database search limit allows you to narrow results so that you retrieve articles that are most relevant to your research question. Limit types vary by database & include:
In an evidence synthesis search, you should use care when applying limits, as you may lose articles inadvertently. For more information, see, Chapter 4: Searching for and selecting studies of the Cochrane Handbook particularly regarding language & format limits in Section 4.4.5 .
Over the years, the concept of green product development (GPD) has received immense interest from scholars and practitioners around the globe. This study presents a state-of-the-art research profile and an in-depth review of the literature on GPD. Additionally, this review also critically evaluates existing literature to highlight the gaps and proposes an agenda for future development in the body of knowledge. We utilized the systematic literature review (SLR) technique to identify, select, and review existing research on GPD. In doing so, we employed a hybrid analysis technique: (i) quantitative bibliometric analyses, and (ii) qualitative literature synthesis. Our findings are presented in three main sections: research profiling, literature synthesis, and future research agenda. The research profile of existing literature on GPD is presented in terms of annual scientific production, most relevant journals, affiliations, countries, word-cloud analysis, and thematic mapping. Afterward, we discuss various drivers and barriers of GPD and present the findings of prior studies regarding the outcomes of GPD at the micro and macro levels. A key contribution of this review is the development of a new framework, Circular Product Development (CPD), for moving the academic debate forward from GPD towards CPD.
This study utilizes secondary data obtained from the scientific databases Scopus and Web of Science, generated through the specified keywords. The data used in this study are available from the authors upon reasonable request.
This work was supported by the National Natural Science Foundation of China (Grant Number 72172094); the Humanities and Social Sciences Foundation of the Ministry of Education of China (Grant Number 21YJC630160); and the National Social Sciences Foundation Project, Late Funding (Grant Number 21FGLB050).
Authors and affiliations.
Institute of Business Management & Administrative Sciences, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
Kinaan Khalid & Safeer Haider
Department of Marketing, School of Management, Shenzhen University, Shenzhen, China
Jingbo Yuan
Human Capital Research Center, United Arab Emirates University, Al Ain, UAE
Hafiz Muhammad Usman Khizar
Correspondence to Jingbo Yuan .
Conflict of interest.
The authors declare that they have no financial or non-financial conflicts of interest.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Khizar, H.M.U., Khalid, K., Haider, S. et al. Green product development (GPD): a systematic literature review and future research directions. Environ Dev Sustain (2024). https://doi.org/10.1007/s10668-024-05404-9
Received: 20 July 2023
Accepted: 04 September 2024
Published: 21 September 2024
DOI: https://doi.org/10.1007/s10668-024-05404-9
Another way to say Systematic Literature Review? Synonyms for Systematic Literature Review (other words and phrases for Systematic Literature Review).
Systematic Review | Definition, Example & Guide. Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023. A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.
Systematic reviews: Structure, form and content. This article aims to provide an overview of the structure, form and content of systematic reviews. It focuses in particular on the literature searching component, and covers systematic database searching techniques, searching for grey literature and the importance of librarian involvement in the ...
Abstract. Performing a literature review is a critical first step in research to understanding the state-of-the-art and identifying gaps and challenges in the field. A systematic literature review is a method which sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers and in ...
Literature reviews establish the foundation of academic inquires. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...
Identifying Synonyms & Related Terms. It is important not to overlook this stage in the search process. Time spent identifying all possible synonyms and related terms for each of your PICO elements or concepts will ensure that your search retrieves as many relevant records as possible. Think laterally about how others may describe the same concept.
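Building on the point above about gathering synonyms for each PICO concept, the sketch below ORs synonyms within each concept block and ANDs the blocks together. Every term here is a hypothetical placeholder, not a validated search strategy.

```python
# Hypothetical PICO-style concept blocks; every term is a placeholder.
concepts = {
    "population":   ["adolescent*", "teenager*", "youth"],
    "intervention": ["fluoride varnish", "topical fluoride"],
    "outcome":      ["dental caries", "tooth decay"],
}

# OR synonyms within a concept, then AND the concept blocks together.
blocks = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
          for terms in concepts.values()]
query = " AND ".join(blocks)
print(query)
```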
Systematic reviews that summarize the available information on a topic are an important part of evidence-based health care. There are both research and non-research reasons for undertaking a literature review. It is important to systematically review the literature when one would like to justify the need for a study, to update personal ...
Systematic literature reviews (SRs) are a way of synthesising scientific evidence to answer a particular research question in a way that is transparent and reproducible, while seeking to include all published evidence on the topic and appraising the quality of this evidence. SRs have become a major methodology
A Systematic Review (SR) is a synthesis of evidence that is identified and critically appraised to understand a specific topic. SRs are more comprehensive than a Literature Review, which most academics will be familiar with, as they follow a methodical process to identify and analyse existing literature (Cochrane, 2022). This ensures that relevant studies are included within the synthesis and ...
Abstract. A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter.
Rapid review. Assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research. Completeness of searching determined by time constraints. Time-limited formal quality assessment. Typically narrative and tabular.
Systematic Reviews synonyms - 102 Words and Phrases for Systematic Reviews. literature review. meta-analysis. available scientific evidence. comprehensive review. evidence synthesis. exhaustive deliberations. general inspections. integrative review.
A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with current literature about a subject [6]. The goal is to review critical points of current knowledge on a topic about research ...
Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.
A systematic literature review (SLR) is an independent academic method that aims to identify and evaluate all relevant literature on a topic in order to derive conclusions about the question under consideration. "Systematic reviews are undertaken to clarify the state of existing research and the implications that should be drawn from this."
The first stage in conducting a systematic review is to develop a protocol that clearly defines: 1) the aims and objectives of the review; 2) the inclusion and exclusion criteria for studies; 3) the way in which studies will be identified; and 4) the plan of analysis. Cochrane review protocols are peer reviewed and published on the Cochrane ...
methodological paper. meta-analysis design. survey of publications. working paper review. library research. bibliographical research. bibliographic revision. secondary research. research of research.
Need synonyms for systematic review? Here's a list of similar words from our thesaurus that you can use instead. Noun. Review of the literature. "A systematic review must be done prior to a meta-analysis that, where appropriate, combines statistical results from individual studies to give an estimate of the overall effect.". Find more words!
SLR Template. A Systematic Literature Review (SLR) template is a structured framework used for conducting and documenting a systematic review of existing research studies on a specific topic or research question. Systematic literature reviews are commonly used in academic and research settings to provide a comprehensive and unbiased summary of ...
Systematic Review - seeks to systematically search for, appraise and synthesize research evidence on a specific question, often adhering to guidelines on the conduct of a review.. Meta-analysis - a technique that statistically combines the results of quantitative studies to provide a more precise effect of the results. A good systematic review is essential to a meta-analysis of the literature.
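As a generic worked illustration of how a meta-analysis "statistically combines the results of quantitative studies", the standard fixed-effect inverse-variance formulas are shown below; this is textbook notation, not a method drawn from any of the sources quoted here.

```latex
% Inverse-variance (fixed-effect) pooling of k study estimates \hat{\theta}_i
% with standard errors SE_i.
w_i = \frac{1}{SE_i^{2}}, \qquad
\hat{\theta}_{\mathrm{pooled}} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad
SE\!\left(\hat{\theta}_{\mathrm{pooled}}\right) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}}
```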
384 other terms for systematic review - words and phrases with similar meaning.
In many literature reviews, you try to balance the sensitivity of the search (how many of the potentially relevant articles you find) against its precision, sometimes called specificity (how many of the articles you retrieve are actually relevant), accepting that you will miss some. In an evidence synthesis, you want a very sensitive search: you are trying to find all potentially relevant articles. An evidence synthesis search will:
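The quoted snippet breaks off mid-list, but the sensitivity/precision trade-off it describes can be written as two proportions (generic notation, not taken from the quoted guide):

```latex
\text{sensitivity (recall)} = \frac{\text{relevant records retrieved}}{\text{all relevant records that exist}}, \qquad
\text{precision} = \frac{\text{relevant records retrieved}}{\text{all records retrieved}}
```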