This post courtesy of Anne Taylor-Vaisey:
This past weekend I attended a terrific CE event at CMCC, entitled Research Methods, Science Writing and Manuscript Preparation. It was facilitated by Dr. Ron Feise, a chiropractor, researcher, and president of the Institute for Evidence-Based Chiropractic:
http://www.chiroevidence.com/faculty_Feise.html
I learned a great deal, and this is the first of several messages I intend to send you based on the workshop. One rather startling fact we learned: according to one study, roughly three quarters of abstracts in peer-reviewed journals contain misleading or incorrect information. The first two studies below are ones Dr. Feise cited; I found a few more and have reproduced their abstracts for your interest. But are the abstracts that report untrustworthy information themselves trustworthy?
Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA 1999; 281(12):1110-1111.
CONTEXT: The section of a research article most likely to be read is the abstract, and therefore it is particularly important that the abstract reflect the article faithfully.
OBJECTIVE: To assess abstracts accompanying research articles published in 6 medical journals with respect to whether data in the abstract could be verified in the article itself.
DESIGN: Analysis of simple random samples of 44 articles and their accompanying abstracts published during 1 year (July 1, 1996-June 30, 1997) in each of 5 major general medical journals (Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine) and a consecutive sample of 44 articles published during 15 months (July 1, 1996-August 15, 1997) in the CMAJ.
MAIN OUTCOME MEASURE: Abstracts were considered deficient if they contained data that were either inconsistent with corresponding data in the article's body (including tables and figures) or not found in the body at all.
RESULTS: The proportion of deficient abstracts varied widely (18%-68%) and to a statistically significant degree (P<.001) among the 6 journals studied.
CONCLUSIONS: Data in the abstract that are inconsistent with or absent from the article's body are common, even in large-circulation general medical journals.
PubMed link
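(An aside for the statistically curious: a between-journal comparison like Pitkin's P<.001 is typically a chi-square test of independence on a journals-by-deficiency table. Here is a minimal sketch in Python, assuming scipy is available; the per-journal counts are invented purely for illustration, since the abstract reports only the 18%-68% range and samples of 44 abstracts per journal.)

```python
# Minimal sketch of comparing deficient-abstract proportions across journals,
# in the spirit of Pitkin et al. The counts are hypothetical; the published
# abstract gives only the range (18%-68%) and n = 44 abstracts per journal.
from scipy.stats import chi2_contingency

deficient = [8, 12, 15, 20, 25, 30]   # hypothetical deficient abstracts per journal
sampled = [44] * 6                    # 44 abstracts sampled from each of 6 journals

# 6 x 2 contingency table: (deficient, not deficient) for each journal
table = [[d, n - d] for d, n in zip(deficient, sampled)]

chi2, p, dof, _ = chi2_contingency(table)
for d, n in zip(deficient, sampled):
    print(f"{d}/{n} deficient ({d / n:.0%})")
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p:.4g}")
```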
Gotzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Control Clin Trials 1989; 10(1):31-56.
Abstract: Important design aspects were decreasingly reported in NSAID trials over the years, whereas the quality of statistical analysis improved. In half of the trials, the effect variables in the methods and results sections were not the same, and the interpretation of the erythrocyte sedimentation rate in the reports seemed to depend on whether a significant difference was found. Statistically significant results appeared in 93 reports (47%). In 73 trials they favored only the new drug, and in 8 only the active control. All 39 trials with a significant difference in side effects favored the new drug. Choice of dose, multiple comparisons, wrong calculation, subgroup and within-groups analyses, wrong sampling units (in 63% of trials for effect variables, in 23% for side effects), change in measurement scale before analysis, baseline difference, and selective reporting of significant results were some of the verified or possible causes for the large proportion of results that favored the new drug. Doubtful or invalid statements were found in 76% of the conclusions or abstracts. Bias consistently favored the new drug in 81 trials, and the control in only one trial. It is not obvious how a reliable meta-analysis could be done in these trials.
PubMed link
Harris AH, Standard S, Brunning JL, Casey SL, Goldberg JH, Oliver L et al. The accuracy of abstracts in psychology journals. J Psychol 2002; 136(2):141-148.
Abstract: This article provides an empirically supported reminder of the importance of accuracy in scientific communication. The authors identify common types of inaccuracies in research abstracts and offer suggestions to improve abstract-article agreement. Abstracts accompanying 13% of a random sample of 400 research articles published in 8 American Psychological Association journals during 1997 and 1998 contained data or claims inconsistent with or missing from the body of the article. Error rates ranged from 8% to 18%, although between-journal differences were not significant. Many errors (63%) were unlikely to cause substantive misinterpretations. Unfortunately, 37% of errors found could be seriously misleading with respect to the data or claims presented in the associated article. Although deficient abstracts may be less common in psychology journals than in major medical journals (R. M. Pitkin, M. A. Branagan, & L. F. Burmeister, 1999), there is still cause for concern and need for improvement.
PubMed link
Herbison P. The reporting quality of abstracts of randomised controlled trials submitted to the ICS meeting in Heidelberg. Neurourol Urodyn 2005; 24(1):21-24.
AIMS: The quality of randomised controlled trials (RCTs) is associated with bias. Thus, reports of RCTs must have enough detail of key elements of quality to enable them to be interpreted properly. This study examines the quality of abstracts of RCTs reported at the ICS meeting in Heidelberg in 2002, using the CONSORT statement as the gold standard.
MATERIALS AND METHODS: All of the abstracts accepted for the meeting at Heidelberg were read to identify reports of RCTs. Copies of these were printed and examined to see whether they complied with the 22 items in the CONSORT statement. As these were all abstracts the first CONSORT item was changed so that to comply the title had to say it was a randomised trial. Each item was scored as not met, partially met, met.
RESULTS: Fifty-three reports of RCTs were found. Five of these were podium presentations, 14 discussion posters, and 34 non-discussion posters. Most reports did not comply with many of the items in the CONSORT statement, lacking particularly in technical details of the methods (only one study clearly reported hidden allocation to groups), and how the results were presented (only two studies fully reported results). Only 2/53 of the abstracts complied fully with more than 10 of the items, and 30/53 did not comply at all with 10 or more.
CONCLUSIONS: The quality of reporting of studies at ICS is so poor that it is difficult to interpret the results. Reporting was particularly poor on the details of the randomisation and the numeric results.
PubMed link
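(If you are wondering what "complied with the items in the CONSORT statement" looks like in practice, here is a small sketch, not Herbison's actual instrument, of scoring an abstract against an abbreviated CONSORT-style checklist: each item is rated not met, partially met, or met, and the fully met items are counted.)

```python
# Sketch of checklist scoring as described by Herbison: each CONSORT item is
# rated "not met", "partially met", or "met", and an abstract's score is the
# number of items fully met. The item wording below is abbreviated and only a
# few of the 22 items are shown -- a stand-in, not the real checklist text.
CONSORT_ITEMS = [
    "title identifies the study as a randomised trial",
    "eligibility criteria for participants described",
    "interventions described in enough detail",
    "primary outcome specified",
    "method used to generate the random allocation sequence described",
    "allocation concealment described",
    # ... the remaining checklist items would follow
]

def items_fully_met(ratings: dict) -> int:
    """Return how many checklist items this abstract fully meets."""
    return sum(1 for item in CONSORT_ITEMS if ratings.get(item) == "met")

# Hypothetical ratings for a single abstract; unrated items count as not met.
example = {
    "title identifies the study as a randomised trial": "met",
    "primary outcome specified": "partially met",
    "allocation concealment described": "not met",
}
print(items_fully_met(example), "of", len(CONSORT_ITEMS), "items fully met")
```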
Krzyzanowska MK, Pintilie M, Brezden-Masley C, Dent R, Tannock IF. Quality of abstracts describing randomized trials in the proceedings of American Society of Clinical Oncology meetings: guidelines for improved reporting. J Clin Oncol 2004; 22(10):1993-1999.
PURPOSE: To evaluate the quality of reporting in abstracts describing randomized controlled trials (RCTs) included in the Proceedings of American Society of Clinical Oncology (ASCO) meetings and to propose reporting guidelines for abstracts that are submitted to future meetings.
METHODS: Guidelines for reporting of RCTs in abstracts were developed by extracting key elements from published guidelines for full reports of RCTs, and modified based on an expert survey. Abstracts presenting results of RCTs with sample size > or = 200 were identified from the ASCO Proceedings for the years 1989 to 1998. Information regarding the quality of each abstract was extracted, and a quality score (possible range, 0 to 10) was assigned based on adherence to the guidelines.
RESULTS: Brief description of the intervention, explicit identification of the primary end point, and presentation of results accompanied by statistical tests were regarded by experts as the most important items to include in an abstract, whereas presentation of secondary and subgroup analyses was the least important. Deficiencies in reporting were present in almost all of the 510 abstracts; for example, only 22% of the abstracts provided explicit identification of the primary end point. The median quality score was 5.5 (range, 2.0 to 8.5); the quality score improved with time (P <.0001) and was better for oral or plenary presentations (P =.0003).
CONCLUSION: The quality of reporting of RCTs in abstracts submitted to Annual Meetings of ASCO is suboptimal. Although space precludes the inclusion of details required in the final report, abstracts could be improved through the use of explicit minimal guidelines, which are suggested in this article.
PubMed link
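(Again for the curious: one simple way a time trend like the one Krzyzanowska reports might be examined, though not necessarily the authors' method, is a rank correlation of quality score against year. A minimal sketch with invented data, assuming scipy is available:)

```python
# Sketch of testing whether abstract quality scores improve over time.
# The years and 0-10 scores below are invented for illustration; the paper's
# actual scoring instrument and data are not reproduced here.
from scipy.stats import spearmanr

years  = [1989, 1990, 1992, 1993, 1995, 1996, 1997, 1998]   # hypothetical
scores = [3.0, 4.5, 4.0, 5.0, 5.5, 6.0, 6.5, 7.0]           # hypothetical scores

rho, p = spearmanr(years, scores)
print(f"Spearman rho = {rho:.2f}, P = {p:.4f}")
```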
Ward LG, Kendrach MG, Price SO. Accuracy of abstracts for original research articles in pharmacy journals. Ann Pharmacother 2004; 38(7-8):1173-1177.
BACKGROUND: Accuracy of abstracts representing original research articles is imperative since these are readily available and biomedical literature readers may not have access to the full-text article. Furthermore, previous reports document discrepancies in published original research abstracts compared with the full-text article.
OBJECTIVE: To determine the accuracy of abstracts for original research articles published in nationally represented, widely circulated pharmacy-specific journals (American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, The Consultant Pharmacist, Hospital Pharmacy, Journal of the American Pharmacists Association, Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy) from June 2001 through May 2002.
METHODS: Outcome measures included an omission, defined as data in the abstract not located in the article. In addition, abstracts were considered deficient if these included an omission, inaccurate factual (i.e., qualitative and quantitative) information presented in the abstract that differed from information contained within the text, an inconsistency in following the "Instructions for Authors" for the respective journal, or a discrepancy between the placement of text in the manuscript and a structured abstract.
RESULTS: A total of 243 abstracts for original research articles were published in selected journal issues. Evaluation of these abstracts identified 60 (24.7%) abstracts containing omissions; 81 (33.3%) abstracts contained either an omission or inaccuracy. A total of 147 (60.5%) abstracts were classified as deficient.
CONCLUSIONS: Results of this analysis demonstrate that improvements are needed within abstracts for original research articles published in pharmacy-specific journals. Authors and peer reviewers should analyze the abstract contents closely to ensure that the abstract accurately represents the full-text article.
PubMed link
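(As a quick sanity check on the percentages in the Ward abstract, assuming each one uses the 243 evaluated abstracts as its denominator, the arithmetic works out exactly:)

```python
# Recomputing the proportions reported by Ward et al., assuming each
# percentage is out of the 243 abstracts evaluated.
total = 243
counts = {
    "omission": 60,                 # reported as 24.7%
    "omission or inaccuracy": 81,   # reported as 33.3%
    "deficient": 147,               # reported as 60.5%
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.1%}")
```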