dc.contributor.author: Levin, Joel R.
dc.contributor.author: Robinson, Daniel H.
dc.date.accessioned: 2017-09-18T16:20:46Z
dc.date.available: 2017-09-18T16:20:46Z
dc.date.issued: 2003-05-01
dc.identifier.citation: Published in Journal of Modern Applied Statistical Methods 2(1):231-236, May 2003
dc.identifier.issn: 1538-9472
dc.identifier.uri: http://hdl.handle.net/10106/26930
dc.description.abstract: In this commentary, we offer a perspective on the problem of authors reporting and interpreting effect sizes in the absence of formal statistical tests of their chanceness. The perspective reinforces our previous distinction between single-study investigations and multiple-study syntheses.
dc.language.iso: en_US
dc.publisher: DigitalCommons@WayneState
dc.publisher: Wayne State University Press
dc.subject: Effect sizes
dc.subject: Statistical tests
dc.subject: Single-study -- Investigations
dc.subject: Multiple-study -- Syntheses
dc.title: The Trouble With Interpreting Statistically Nonsignificant Effect Sizes in Single-Study Investigations
dc.type: Article
dc.publisher.department: Department of Curriculum and Instruction, The University of Texas at Arlington
dc.identifier.externalLink: http://digitalcommons.wayne.edu/jmasm/vol2/iss1/23
dc.identifier.externalLinkDescription: The original publication is available at Article DOI
dc.identifier.externalLinkDescription: The original publication is available at the journal homepage
dc.rights.license: Published open access through Digital Commons@WayneState
dc.identifier.doi: 10.22237/jmasm/1051748580

