Cited 0 times in Scopus

A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity

DC Field: Value
dc.contributor.author: Seo, HJ
dc.contributor.author: Kim, SY
dc.contributor.author: Lee, YJ
dc.contributor.author: Jang, BH
dc.contributor.author: Park, JE
dc.contributor.author: Sheen, SS
dc.contributor.author: Hahn, SK
dc.date.accessioned: 2018-05-04T00:23:34Z
dc.date.available: 2018-05-04T00:23:34Z
dc.date.issued: 2016
dc.identifier.issn: 0895-4356
dc.identifier.uri: http://repository.ajou.ac.kr/handle/201003/14735
dc.description.abstract: OBJECTIVE: To develop a study Design Algorithm for Medical Literature on Intervention (DAMI) and test its interrater reliability, construct validity, and ease of use. STUDY DESIGN AND SETTING: We developed and then revised the DAMI to include detailed instructions. To test the DAMI's reliability, we used a purposive sample of 134 primary, mainly nonrandomized studies. We then compared the study designs as classified by the original authors and through the DAMI. Unweighted kappa statistics were computed to test interrater reliability and construct validity based on the level of agreement between the original and DAMI classifications. Assessment time was also recorded to evaluate ease of use. RESULTS: The DAMI includes 13 study designs, covering experimental and observational studies of interventions and exposures. Both the interrater reliability (unweighted kappa = 0.67, 95% CI [0.64-0.75]) and construct validity (unweighted kappa = 0.63, 95% CI [0.52-0.67]) were substantial. Mean classification time using the DAMI was 4.08 +/- 2.44 minutes (range, 0.51-10.92). CONCLUSIONS: The DAMI showed substantial interrater reliability and construct validity. Furthermore, given its ease of use, it could be used to accurately classify medical literature for systematic reviews of interventions while minimizing disagreement between authors of such reviews.
dc.language.iso: en
dc.subject.MESH: Algorithms
dc.subject.MESH: Consensus
dc.subject.MESH: Evidence-Based Medicine
dc.subject.MESH: Humans
dc.subject.MESH: Peer Review, Research
dc.subject.MESH: Quality Control
dc.subject.MESH: Reproducibility of Results
dc.subject.MESH: Research Design
dc.subject.MESH: Review Literature as Topic
dc.title: A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity
dc.type: Article
dc.identifier.pmid: 26408805
dc.contributor.affiliatedAuthor: 신, 승수
dc.type.local: Journal Papers
dc.identifier.doi: 10.1016/j.jclinepi.2015.09.013
dc.citation.title: Journal of clinical epidemiology
dc.citation.volume: 70
dc.citation.date: 2016
dc.citation.startPage: 200
dc.citation.endPage: 205
dc.identifier.bibliographicCitation: Journal of clinical epidemiology, 70:200-205, 2016
dc.embargo.liftdate: 9999-12-31
dc.embargo.terms: 9999-12-31
dc.identifier.eissn: 1878-5921
dc.relation.journalid: J008954356
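
Note: the abstract reports unweighted kappa statistics as the measure of agreement between the original and DAMI classifications. As a minimal illustrative sketch (not the study's data or analysis code), the following Python snippet shows how an unweighted Cohen's kappa can be computed for two raters classifying the same set of studies, using scikit-learn's cohen_kappa_score; the design labels and ratings below are hypothetical.

    # Illustrative sketch only: unweighted Cohen's kappa for two raters
    # classifying the same studies. Labels and ratings are hypothetical,
    # NOT data from the DAMI study.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical study-design classifications of ten studies by two raters
    rater_a = ["RCT", "cohort", "cohort", "case-control", "RCT",
               "cross-sectional", "cohort", "RCT", "case-control", "cohort"]
    rater_b = ["RCT", "cohort", "case-control", "case-control", "RCT",
               "cross-sectional", "cohort", "cohort", "case-control", "cohort"]

    # weights=None yields the unweighted kappa, the statistic named in the abstract
    kappa = cohen_kappa_score(rater_a, rater_b, weights=None)
    print(f"Unweighted kappa: {kappa:.2f}")

With weights=None the statistic is the plain (unweighted) Cohen's kappa; weighted variants would only be meaningful for ordered categories, which study-design labels are not.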
Appears in Collections:
Journal Papers > School of Medicine / Graduate School of Medicine > Pulmonary & Critical Care Medicine
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
