
International Journal of Morphology

On-line version ISSN 0717-9502

Int. J. Morphol. vol. 27 no. 4, Temuco, Dec. 2009

http://dx.doi.org/10.4067/S0717-95022009000400035

Int. J. Morphol., 27(4):1179-1186, 2009.

 

Systematic Review of Literature with Different Types of Designs

 


 

*Carlos Manterola; **Manuel Vial; ***Viviana Pineda & ****Antonio Sanhueza

* MD., Ph.D., Full Professor, Hepatobiliary Surgery Unit, Department of Surgery, Faculty of Medicine, Universidad de La Frontera, Temuco, Chile.

** MD., Ph.D., Assistant Professor, Digestive Surgery Unit, Department of Surgery, Faculty of Medicine, Universidad de La Frontera, Temuco, Chile.

*** MD., MSc., Assistant Professor, Mastology Unit, Department of Surgery, Faculty of Medicine, Universidad de La Frontera, Temuco, Chile.

**** MSc., Ph.D., Associate Professor, Department of Mathematics, Faculty of Engineering, Universidad de La Frontera, Temuco, Chile.

Partially financed by project DI09-0060, Universidad de La Frontera, Temuco, Chile.

SUMMARY: In certain situations, which occur with particular frequency in the field of surgery and its related disciplines (where observational studies predominate), conducting randomized controlled clinical trials (RCT) is very difficult and, as such, conducting a systematic review (SR) based on RCT and performing a meta-analysis is even more difficult. Therefore, we have generated a methodology for implementing a SR with different types of designs (including observational studies) as an alternative in order to clarify the uncertainty in the field of therapy when there are few RCT and the evidence relies heavily on descriptive and observational studies. The aim of this study was to set out a methodology that leads to a SR with various types of designs. Methodologically, this is based on weighting the different primary studies through the application of a methodological quality score made up of 3 items (type of study design, size of the population studied and methodology used in the study). Once a point score has been assigned, a calculation of weighted averages with their respective 95% confidence intervals is applied to each variable to be studied, which finally makes it possible to perform a meta-analysis and compare groups. A methodological proposal leading to a SR with various types of designs is presented.

KEY WORDS: Review Literature; Meta-Analysis; Evidence-Based Medicine; Intervention Studies; Epidemiologic Studies; Evaluation Studies.




INTRODUCTION

Possibly one of the major dilemmas of the clinician in everyday practice is deciding what treatment to choose for a patient. Unfortunately, the information available is so extensive that, curiously, instead of helping us make the best decisions, it often only serves to add to the confusion and we end up with greater uncertainty than before beginning our search for information.

It is for this reason, although it might seem a platitude, that logic indicates the scientific method must be applied to every such dilemma so as to advance rationally towards its solution. It is therefore essential to follow the scheme in Figure 1, which illustrates that the first step is to define the clinical problem and transform it into an answerable question. With a well-defined question, we can then seek an answer in an appropriate source of knowledge; design an information search strategy; analyze the information carefully; and, finally, summarize the evidence found according to its level, methodological quality, validity and reliability. This process can be summed up in one sentence: "to look for the best evidence available with the maximum efficiency in order to respond to a research question" (Manterola, 2001).

Fig. 1. Diagram of a work strategy for moving from an everyday clinical problem to a summary of the available evidence.

Nevertheless, it is important to remember that the evidence, graded by means of "levels of evidence", is no more than a scale that tries to "qualify" the evidence available, so that the existing clinical research designs are ranked according to their robustness in various fields of clinical knowledge (treatment, prevention, etiology and harm, prognosis, diagnosis and economic analyses), as outlined in the updated evidence table for treatment studies published by the Centre for Evidence-Based Medicine in Oxford (CEBM, 2007) (Table I). This scale, like any other, is made up of numerical values whose meaning could be translated as "excellent", "good", "satisfactory", "bad", "very bad", etc., and it provides a useful tool for referring to the evidence that supports our actions. These ratings stem from the validity and reliability of the articles (Streiner & Norman, 1995; Manterola & Riedemann, 2002).

However, let us return to the original concern: how to choose a treatment based on evidence, or how to look efficiently for the best evidence on a given treatment. Table I shows that by reading or conducting a high-quality randomized controlled clinical trial (RCT), or a systematic review (SR) based on high-quality RCT (often wrongly called a meta-analysis), we answer our concern efficiently and with an optimal level of evidence. In certain situations, however, which occur with particular frequency in the field of surgery and its related disciplines (where observational studies predominate), conducting a double-masked RCT is very difficult or simply impossible. The other option, therefore, is to try to clarify the uncertainty by means of a SR. Unfortunately, in the field of surgery and related disciplines the development of RCT is complex, and finding or conducting a SR is just as problematic.

Therefore, we have created a methodology to perform a SR with different types of designs as an alternative in order to clarify the uncertainty in an area of treatment when there are few RCT and the evidence relies heavily on descriptive and observational studies. This initiative has been assessed by the Cochrane Methodology group (Cochrane, 2007).

The aim of this paper is to set forth a proposal that, simply put, can be applied by various clinical research groups when they cannot perform a SR with traditional meta-analysis.

SR: Systematic review. RCT: Randomized controlled clinical trial. Note: This is only part of the scale, which can be viewed in full on the web page; the original includes levels of evidence for other types of scenarios (prognosis, diagnosis, etc.).

MATERIAL AND METHOD

Selection of the primary studies. Once the research question has been posed and the general and secondary aims set out, the study variables for the SR must be defined. The primary study population to be worked with then needs to be detailed, which means specifying the inclusion and exclusion criteria. For example, it is possible to include studies with no restriction on language or date of publication in which the effect of a treatment is compared with a placebo administered prior to decision-making, in adults, with no gender restriction; and to exclude review or newsgroup articles, letters to the editor, clinical guidelines and other SR, articles on a topic unrelated to the study, articles with a sample contaminated by one or more patients who did not fulfill the inclusion criteria, and articles whose abstract is not available in the databases searched. The study's response variables then need to be defined.

Search for information. Next, the databases to be used to search for articles (Cochrane Library, MEDLINE, EMBASE, SciELO, LILACS, etc.) must be specified and a strategy developed to search for primary studies, for which it is crucial to stipulate the target population and the inclusion criteria. In an initial stage, a sensitive search is advised, using MeSH terms (Medical Subject Headings) and free-text words, so that a high percentage of the published literature is found. A specific search can then be performed, adding Boolean operators and limits in order to reduce the number of articles significantly.
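A purely illustrative sketch of such a two-stage strategy in MEDLINE/PubMed follows; the clinical terms are hypothetical and not taken from any of the reviews cited here:

    Sensitive: ("Cholelithiasis"[MeSH] OR gallstone* OR choledocholithiasis) AND (surgery OR surgical OR endoscopic)
    Specific:  the sensitive set AND ("Randomized Controlled Trial"[Publication Type] OR "Cohort Studies"[MeSH] OR "case series") AND "Adult"[MeSH]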

Analysis of the studies and data extraction. Afterwards, a methodology is to be applied for reviewing the articles that fulfilled the selection criteria, for which it is advised to use critical reading guides for treatment (Manterola et al., 2004) and apply validity tables (SIGN, 2004).

Application of the methodological quality score. The selected articles must then be subjected to an analysis of methodological quality, applying the score designed by the authors, which has face and content validity. This is made up of 3 items: the first relates to the type of study design; the second to the size of the population studied, adjusted according to whether the sample size is justified or not; and the third to the methodology used in the study at issue (aims, justification of the design, eligibility criteria of the sample and their justification). In this way, a scale is created as the sum of items 1, 2 and 3, whose final point score can fluctuate between 6 and 36 points, 6 points being assigned to studies of the lowest methodological quality and 36 points to those of the highest, with a quality cut-off of 18 points (Table II). This score is to be applied independently by 2 investigators who, in case of discrepancy, consult a third investigator, and the decision is then reached by consensus.
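A minimal sketch, in Python, of how this scoring and double-review step could be operationalized; the individual item values are placeholders, since the actual point assignments are those of Table II (total range 6 to 36, cut-off 18):

    def total_score(design_pts, population_pts, methodology_pts):
        # Sum of the 3 items; 6 = lowest methodological quality, 36 = highest.
        return design_pts + population_pts + methodology_pts

    def is_adequate(score, cutoff=18):
        # Quality cut-off of 18 points, as stated in the text.
        return score >= cutoff

    def consensus(score_a, score_b, score_c=None, cutoff=18):
        # Two independent reviewers; a third is consulted only on discrepancy.
        a, b = is_adequate(score_a, cutoff), is_adequate(score_b, cutoff)
        if a == b:
            return a
        if score_c is None:
            raise ValueError("Discrepancy: a third reviewer's score is required")
        return is_adequate(score_c, cutoff)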

 

Plan of analysis. Once each selected article has been assigned a point score for its methodological quality, an exploratory analysis of the data is conducted, descriptive statistics are applied, and a "calculation of weighted averages" (the sum of the products of each study's value for the variable by the methodological quality score of the article it came from, divided by the sum of the methodological quality scores of all the articles in the study) is applied to each variable to be compared (Fig. 2). This safeguards that articles of good methodological quality and a high level of evidence design (for example, an RCT or a prospective cohort study) are fairly represented when their results are compared with those of an article of lower methodological quality and a low level of evidence design (for example, a case series), even though the latter may have a higher number of subjects treated.

This idea can be better understood by observing Table III, which corresponds to an extract of a spreadsheet in which the variable "morbidity" is handled in a given SR. The first column gives the author, the second the number of subjects treated, the third the year of publication, the fourth the morbidity reported in the primary article, the fifth the product of the morbidity value by the methodological quality score of the primary article, and the last column the methodological quality score of each article studied. The great dispersion of the morbidity values reported by the different studies can thus be confirmed: studies of poor methodological quality (scoring 6, 7 and 8 points) report morbidity figures of 4.0%, 7.5% and 3.8%, respectively, whereas studies of high methodological quality (scoring 29 and 30 points) report morbidity figures of 25.3% and 27.8%, respectively. Which are more trustworthy? The last line of the fifth column gives the weighted average of the variable morbidity for this group of studies of varying design type: 20.92%, a figure closer to the one reported by the studies of high methodological quality. In the cell beside it, the value "156" corresponds to the sum of the scores of all the studies, whose average was 13 points; this means, in general terms, that except for two studies all the others were of mediocre or poor methodological quality. Nonetheless, this exercise allows us to summarize the information available and then compare it with the results obtained for the same variable in another treatment group.
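A minimal sketch of this calculation in Python, using only the five study values quoted above; since these are only the subset mentioned in the text, the result approximates, but does not reproduce, the 20.92% obtained from the full table:

    # Quality-weighted average of a study variable (here, reported morbidity in %).
    morbidity = [4.0, 7.5, 3.8, 25.3, 27.8]   # value reported by each primary study
    quality   = [6,   7,   8,   29,   30]     # methodological quality score of each study

    weighted_avg = sum(m * q for m, q in zip(morbidity, quality)) / sum(quality)
    print(round(weighted_avg, 2))  # ~20.93: pulled toward the high-quality studies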

Fig. 2. Representation of the formula for calculating the weighted averages for a study variable.
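A plausible rendering of the formula represented in Fig. 2, written from the verbal description above, where x_i is the value of the variable of interest in primary study i, q_i its methodological quality score and k the number of studies:

    \bar{x}_{w} = \frac{\sum_{i=1}^{k} x_i \, q_i}{\sum_{i=1}^{k} q_i}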

Based on this methodology, we have answered the following questions: What is the best surgical option for treating patients with morbid obesity, considering open and laparoscopic techniques? (Manterola et al., 2005a); What is the best surgical option for treating patients with uncomplicated colon cancer, comparing open and laparoscopic techniques? (Manterola et al., 2005b); and What is the best therapy for patients with choledocholithiasis with the gallbladder in situ, comparing endoscopic and surgical (laparoscopic and open) procedures? (Vial et al., 2005). In the same way, we have performed bibliometric studies, using the score indicated above to assess the methodological quality of articles published in various surgical journals (Pineda et al., 2005; Manterola et al., 2006a, 2006b).

RESULTS

The results report the total number of related articles that fulfilled the selection criteria, according to the databases in which they were found; describe the design of the articles analyzed and their distribution into treatment or placebo groups, the year of publication and the methodological quality of the studies, placing particular emphasis on the comparison groups; and report the number of patients studied in the primary articles, the treatment schemes used (if applicable), the reporting of adverse effects and, finally, the comparison of the behavior of the variables of interest between the groups studied. For this, comparative tables can be created in which one column gives the weighted averages obtained for the articles that used the test treatment and another the weighted averages obtained for the articles that used the placebo, both figures with their respective 95% confidence intervals, which enables a kind of meta-analysis of the information.
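The paper does not state which interval method the authors used, so the following Python sketch rests on the assumption that an approximate 95% confidence interval is built from a quality-weighted standard error:

    import math

    def weighted_mean_ci(values, scores, z=1.96):
        """Quality-weighted mean of a study variable with an approximate 95% CI.

        The CI construction (weighted variance and an effective number of studies)
        is an assumption for illustration, not the authors' published formula.
        """
        wsum = sum(scores)
        mean = sum(v * s for v, s in zip(values, scores)) / wsum
        var = sum(s * (v - mean) ** 2 for v, s in zip(values, scores)) / wsum
        n_eff = wsum ** 2 / sum(s ** 2 for s in scores)   # effective number of studies
        se = math.sqrt(var / n_eff)
        return mean, (mean - z * se, mean + z * se)

    # Example: weighted morbidity and its interval for one comparison group
    treat_mean, treat_ci = weighted_mean_ci([4.0, 7.5, 3.8, 25.3, 27.8], [6, 7, 8, 29, 30])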

DISCUSSION

It is assumed that science is cumulative, but evidence is rarely accumulated systematically. Evidence is usually handled via a plethora of articles and reports of primary studies, without the new data being integrated into the context of previous or similar studies. For this reason, there is the alternative of performing or consulting a SR: an observational, analytical design that makes it possible to analyze (among other things) the published evidence on a particular topic and, therefore, to compare two or more kinds of intervention with respect to an outcome of interest (Manterola & Riedemann, 2001).

What is the level of evidence and the methodological quality of surgical publications? In this respect, we can say that the most frequently found methodological faults were: absence of a research question or working hypothesis, vague objectives, designs with a low level of evidence, a small percentage of RCT, absence of patient selection criteria, lack of justification of the sample used and inappropriate use of statistics. To this it should be added that the vast majority of the RCT found were of poor quality and not very reliable (Pineda et al.; Manterola et al., 2006a, 2006b), a fact that makes the validity and reliability of the results obtained from a SR performed with these types of primary studies even more controversial, since we start from the premise that most of them are not level 1b but 2b.

Why is it so difficult to conduct RCT in some areas of clinical practice, surgery and related disciplines? This is associated, on the one hand, with the inherent difficulty of conducting this type of study and, on the other, essentially with the following facts: the need to work with numerous and comparable groups of patients, and therefore with considerable sample sizes, if results with appropriate power are desired; and the problem of administering a placebo to the "control group", which is sometimes compared with a group that underwent surgery (McLeod et al., 1996; Howes et al., 1997; Sondenaa et al., 1997).

There is also the problem of masking, because numerous interventions are performed on the patient directly by the clinician or the surgeon. For example, in an investigation involving a wound or scar, we could only compare surgical techniques performed in a similar fashion or through a more or less common access; otherwise, for the person receiving or measuring the effect of the therapy to remain masked, we would have to inflict a wound similar to those of the study group on patients from the control group, which would be ethically unacceptable. In addition, in the field of surgery we know that interventions are "operator-dependent", that there are therefore greater ethical controversies around random allocation and, finally, that the training period directly influences results (Lee et al., 2000; Horton, 1996; McCulloch et al., 2002).

For all that has been set out here, it seems to us that an alternative option for trying to clarify the uncertainty when RCT, or SR based on them, are not available is to perform a SR that can contribute a level of evidence of 2 or 3, working with varied research designs, for example observational studies, which are sometimes the only design alternative for answering a question (Hu et al., 1996). This requires rigorously safeguarding the methodological quality of each study analyzed, in such a way that the weight assigned to an RCT bears an appropriate relation to that of a case series; that appropriate weight is granted to a study with an "n" of 3,000 treated subjects when comparing it with a study with an "n" of 30 cases; and that a study with clear and concrete objectives, defined selection criteria and a justified sample is appropriately valued in relation to a study that lacks objectives, does not mention selection criteria for the patients to be treated, or does not justify the sample size used.

Now, since it is always of interest to make comparisons, it is possible that with this methodology we cannot determine relative risks or odds ratios; comparisons can nevertheless be made through the weighted averages and their respective confidence intervals. It must be remembered that a meta-analysis is nothing other than a way of calculating an average or "common effect", which can be carried out when more than one study has estimated an effect, when the results were obtained from similar measures and when there is access to the data (Petitti, 2000; Clarke & Oxman, 2007; Egger et al., 2001). In fact, the issue of the heterogeneity of the studies is to some extent minimized with the application of the methodological quality score and the weighting of the averages.

Therefore, if one must answer the question of whether a SR can be performed using different types of designs, it seems to us that the answer is yes. To do this, it is necessary to conduct an exhaustive search for the best evidence, to read the studies found critically while extracting the data, to apply a methodological assessment of the articles, to weight the results according to their methodological quality and, finally, to compare them. To this, however, must be added the question of generalizing the results to other populations, which is left to the judgment of the authors conducting the SR.

A simple and novel methodological proposal leading to a systematic review with various types of clinical designs is presented.

This alternative can be applied by various research groups when they cannot perform a SR with traditional meta-analysis.

 

REFERENCES

CEBM. Centre for Evidence-Based Medicine. Available from: http://www.cebm.net/index.aspx?o=1025, accessed on 16 May, 2007.

Clarke, M. & Oxman, A. D. Cochrane Reviewers' Handbook 4.2.0 [updated March 2003]. Available from: http://www.cochrane.dk/cochrane/handbook/handbook.htm, accessed on 17 May, 2007.

Cochrane, B. V. S. Available from: http://cochrane.bvsalud.org/cochrane/main.php?lib=COC&searchExp=manterola&lang=pt, accessed on 16 May, 2007.

Egger, M.; Davey Smith, G. & Altman, D. G. Systematic reviews in healthcare. Meta-analysis in context. 2nd ed. London, BMJ Publishing Group, 2001.

Howes, N.; Chagla, L.; Thorpe, M. & McCulloch, P. Surgical practice is evidence based. Br. J. Surg., 84:1220-3, 1997.

Horton, R. Surgical research or comic opera: questions, but few answers. Lancet, 347:984-5, 1996.

Hu, X.; Wright, J. G.; McLeod, R. S.; Lossing, A. & Walters, B. C. Observational studies as alternatives to randomized clinical trials in surgical clinical research. Surgery, 119:473-5, 1996.

Lee, J. S.; Urschel, D. M. & Urschel, J. D. Is general thoracic surgical practice evidence based? Ann. Thorac. Surg., 70:429-31, 2000.

Manterola, C. The process that leads to the development of the scientific research. Its application in surgery. Rev. Chil. Cir., 53:104-9, 2001.

Manterola, C. & Riedemann, P. The process of measurement with qualitative variables and their application in surgery. Rev. Chil. Cir., 54:307-15, 2002.

Manterola, C.; Vial, M.; Pineda, V. & Losada, H. Critical revision of Literature for therapy articles. Rev. Chil. Cir., 56:604-9, 2004.

Manterola, C.; Pineda, V.; Vial, M.; Losada, H. & Munoz, S. Surgery for morbid obesity: selection of operation based on evidence from literature review. Obes. Surg., 15:106-13, 2005a.

Manterola, C.; Pineda, V. & Vial, M. Open versus laparoscopic resection in non-complicated colon cancer. A systematic review. Cir. Esp., 78:28-33, 2005b.

Manterola, C.; Busquets, J.; Pascual, M. & Grande, L. What is the methodological quality of articles on therapeutic procedures published in Cirugía Española? Cir. Esp., 79:95-100, 2006a.

Manterola, C.; Pineda, V.; Vial, M.; Losada, H. & the MINCIR Group. What is the methodologic quality of human therapy studies in ISI surgical publications? Ann. Surg., 244:827-32, 2006b.

Manterola, C. & Riedemann, P. Strategies of investigation. An analytical and observational design. The meta-analysis. Rev. Chil. Cir., 53:615-21, 2001.

McCulloch, P.; Taylor, I.; Sasako, M.; Lovett, B. & Griffin, D. Randomised trials in surgery: problems and possible solutions. BMJ, 324:1448-51, 2002.

McLeod, R. S.; Wright, J. G.; Solomon, M. J.; Hu, X.; Walters, B. C. & Lossing, A. Randomized controlled trials in surgery: Issues and problems. Surgery, 119:483-6, 1996.

Petitti, D. Meta-analysis, decision analysis, and cost-effectiveness analysis. 2nd ed. New York, Oxford University Press, 2000.

Pineda, V.; Manterola, C.; Vial, M. & Losada, H. What is the methodologic quality of human therapy studies in Revista Chilena de Cirugía? Rev. Chil. Cir., 57:500-7, 2005.

SIGN. Scottish Intercollegiate Guidelines Network. A guideline developers' handbook. Edinburgh, SIGN, 2004.

Sondenaa, K.; Nesvik, I.; Solhaug, J. H. & Soreide, O. Randomization to surgery or observation in patients with symptomatic gallbladder stone disease. The problem of evidence-based medicine in clinical practice. Scand. J. Gastroenterol., 32:611-6, 1997.

Streiner, D. L. & Norman, G. R. Health measurement scales. A practical guide to their development and use. New York, Oxford University Press, 1995. pp. 1-3.

Vial, M.; Manterola, C.; Pineda, V. & Losada, H. Choledocolitiasis. Election of a therapy based on the evidence. Systematic review. Rev. Chil. Cir., 57:404-11, 2005.

Correspondence to:
Dr. Carlos Manterola
Departamento de Cirugía
Universidad de La Frontera. Casilla 54-D,
Temuco, CHILE
Fax: 56-45-325761

Email: cmantero@ufro.cl

Received: 04-08-2009
Accepted: 19-09-2009