Power of Statistical Tests Used to Address Nonresponse Error in the Journal of Agricultural Education
DOI: https://doi.org/10.5032/jae.2017.01300
Keywords: power, nonresponse error, statistical tests
Abstract
As members of a profession committed to the dissemination of rigorous research pertaining to agricultural education, authors publishing in the Journal of Agricultural Education (JAE) must seek methods to evaluate and, when necessary, improve their research methods. The purpose of this study was to describe how authors of manuscripts published in JAE between 2006 and 2015 tested for nonresponse error. Results indicated that none of the studies’ tests had acceptable power to detect small effect sizes, 14.3% had acceptable power to detect medium effect sizes, and 43% of the studies’ tests lacked acceptable power to detect even large effect sizes. These findings suggest that although authors frequently report no difference between respondents and nonrespondents, the tests used to detect such differences are often not powerful enough to do so, creating a higher-than-acceptable risk of Type II error. Using the theory of planned behavior as a framework, we highlight these findings to spur change in the profession’s expectations for reporting statistical power when testing for nonresponse error and offer a primer to improve researchers’ perceived behavioral control over reporting power. We also offer specific suggestions for conducting and reporting the results of tests for nonresponse bias.
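As a concrete illustration of the kind of power reporting the abstract calls for, the sketch below shows one way to compute post hoc power for a nonresponse-bias test; it is not drawn from the article itself. It assumes an independent-samples t test comparing respondents with nonrespondents (or early with late respondents), uses Cohen's conventional small, medium, and large effect sizes, and relies on Python's statsmodels package; the group sizes shown are hypothetical.

    # Minimal sketch: power of an independent-samples t test for nonresponse bias,
    # evaluated at Cohen's small (d = .2), medium (d = .5), and large (d = .8) effects.
    # Group sizes are hypothetical; a common benchmark for acceptable power is .80.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_respondents, n_nonrespondents = 120, 30   # hypothetical sample sizes
    alpha = 0.05

    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        power = analysis.solve_power(
            effect_size=d,
            nobs1=n_respondents,
            alpha=alpha,
            ratio=n_nonrespondents / n_respondents,
            alternative="two-sided",
        )
        print(f"{label} effect (d = {d}): power = {power:.2f}")

Reporting the computed power alongside the test result lets readers judge whether a nonsignificant comparison reflects a genuine lack of difference or simply an underpowered test.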