International Electronic Journal of Mathematics Education
A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth
APA
In-text citation: (McGowen & Davis, 2022)
Reference: McGowen, M. A., & Davis, G. E. (2022). A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth. International Electronic Journal of Mathematics Education, 17(1), em0669. https://doi.org/10.29333/iejme/11473
AMA
In-text citation: (1), (2), (3), etc.
Reference: McGowen MA, Davis GE. A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth. INT ELECT J MATH ED. 2022;17(1):em0669. https://doi.org/10.29333/iejme/11473
Chicago
In-text citation: (McGowen and Davis, 2022)
Reference: McGowen, Mercedes A., and Gary E. Davis. "A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth". International Electronic Journal of Mathematics Education 17, no. 1 (2022): em0669. https://doi.org/10.29333/iejme/11473
Harvard
In-text citation: (McGowen and Davis, 2022)
Reference: McGowen, M. A., and Davis, G. E. (2022). A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth. International Electronic Journal of Mathematics Education, 17(1), em0669. https://doi.org/10.29333/iejme/11473
MLA
In-text citation: (McGowen and Davis, 2022)
Reference: McGowen, Mercedes A., and Gary E. Davis. "A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth". International Electronic Journal of Mathematics Education, vol. 17, no. 1, 2022, em0669. https://doi.org/10.29333/iejme/11473
Vancouver
In-text citation: (1), (2), (3), etc.
Reference: McGowen MA, Davis GE. A Numerical Indicator of Student Cognitive Engagement and Mathematical Growth. INT ELECT J MATH ED. 2022;17(1):em0669. https://doi.org/10.29333/iejme/11473

Abstract

We discuss and examine a numerical indicator, the individual gain, of students’ engagement and mathematical growth in relation to an instructor’s course aims and goals. The individual gain statistic measures the fractional improvement an individual student makes from initial test to final test. We argue that an initial-test score and a final-test score, provided the two tests are related to each other and to the focus of the course, can give a numerical indication of a student’s engagement with the aims and goals of the course and of the extent to which the student was prepared to work toward those goals. We discuss results on the distribution of individual gain for students in two-year college developmental mathematics courses and in sections of a course for pre-service elementary teachers. We also detail the advantages of working with the full distribution of individual gains, particularly for statistical inference about differences, compared with Richard Hake’s use of the mean gain of reform classes in undergraduate physics. Other instructional benefits of using the gain statistic to examine the distribution of individual student gains include a pre-test that serves as a formative assessment at the beginning of instruction, giving an instructor data for specific, targeted remediation, and planning information about the effectiveness of instruction for students in that cohort.
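For readers who want to experiment with the statistic, the minimal sketch below computes per-student gains from paired pre-test and post-test scores. It assumes the individual gain follows the Hake-style normalized-gain formula, g = (post - pre) / (max - pre), applied student by student rather than to class means; the function name, parameters, and sample data are illustrative assumptions, not taken from the paper.

# Illustrative sketch: per-student (individual) gain, assuming the
# Hake-style definition g_i = (post_i - pre_i) / (max_score - pre_i).
# Names and sample data are hypothetical, not from McGowen & Davis (2022).

from statistics import mean, median

def individual_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the possible pre-to-post improvement a student achieved."""
    if pre >= max_score:
        raise ValueError("Gain is undefined when the pre-test score is already at maximum.")
    return (post - pre) / (max_score - pre)

# Paired initial-test / final-test scores (percent correct) for a small cohort.
scores = [(20, 65), (50, 80), (35, 35), (70, 95)]

gains = [individual_gain(pre, post) for pre, post in scores]

print("individual gains:", [round(g, 2) for g in gains])
print("mean gain:", round(mean(gains), 2), "median gain:", round(median(gains), 2))

Because the full set of per-student gains is retained, rather than only the class mean, standard distributional summaries and inference (for example, comparing medians across course sections) can be applied directly to the list of gains.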

References

  • Bao, L. (2006). Theoretical comparisons of average normalized gain calculations. American Journal of Physics, 74(10), 917-922. https://doi.org/10.1119/1.2213632
  • Bonate, P. L. (2000). Analysis of pretest-posttest designs. CRC Press. https://doi.org/10.1201/9781420035926
  • Bouyssou, D., Marchant, T., Pirlot, M., Perny, P., Tsoukias, A., & Vincke, P. (2000). Evaluation and decision models: A critical perspective (Vol. 32). Springer Science & Business Media.
  • Bouyssou, D., Marchant, T., Pirlot, M., Tsoukias, A., & Vincke, P. (2006). Evaluation and decision models with multiple criteria: Stepping stones for the analyst (Vol. 86). Springer Science & Business Media. https://doi.org/10.1007/978-1-4615-1593-7
  • Brookhart, S. M. (1999). The art and science of classroom assessment: The missing part of pedagogy. The George Washington University Graduate School of Education and Human Development.
  • Burkholder, L. (2015). Impartial grading revisited. Teaching Philosophy, 38(3), 261-272. https://doi.org/10.5840/teachphil201581136
  • Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Ravenio Books.
  • Chi, M. T., Adams, J., Bogusch, E. B., Bruchok, C., Kang, S., Lancaster, M., Levy, R., Li, N., McEldoon, K. L., Stump, G. S., Wiley, R., Xu, D., & Yaghmourian, D. L. (2018). Translating the ICAP theory of cognitive engagement into practice. Cognitive Science, 42(6), 1777-1832. https://doi.org/10.1111/cogs.12626
  • Close, D. (2009). Fair grades. Teaching Philosophy, 32(4), 361-398. https://doi.org/10.5840/teachphil200932439
  • Cobern, W. W., Schuster, D., Adams, B., Applegate, B., Skjold, B., Undreiu, A., Loving, C. C., & Gobert, J. D. (2010). Experimental comparison of inquiry and direct instruction in science. Research in Science & Technological Education, 28(1), 81-96. https://doi.org/10.1080/02635140903513599
  • Concetta Capizzo, M., Nuzzo, S., & Zarcone, M. (2006). The impact of the pre-instructional cognitive profile on learning gain and final exam of physics courses: A case study. European Journal of Engineering Education, 31(6), 717-727. https://doi.org/10.1080/03043790600911811
  • Crooks, T. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438-481. https://doi.org/10.3102/00346543058004438
  • Davis, G. E., & McGowen, M. A. (2001). Jennifer’s journey: Seeing and remembering mathematical connections in a pre-service elementary teachers’ course. In M. van den Heuvel-Panhuizen (Ed.), Proceedings of the 25th conference of the International Group for the Psychology of Mathematics Education (vol. 2, pp. 305-312), Freudenthal Institute, Utrecht University, The Netherlands.
  • Davis, G. E., & McGowen, M. A. (2004). Individual gain and engagement with mathematical understanding. In Proceedings of the XVI Annual Meeting, North American Chapter of International Group for the Psychology of Mathematics Education (vol. 1, pp. 333-341). Toronto, Canada.
  • Dimitrov, D. M., & Rumrill Jr, P. D. (2003). Pretest-posttest designs and measurement of change. Work, 20(2), 159-165.
  • Docktor, J. L., & Mestre, J. P. (2014). Synthesis of discipline-based education research in physics. Physical Review Special Topics-Physics Education Research, 10(2), 020119. https://doi.org/10.1103/PhysRevSTPER.10.020119
  • Fadaei, A. S. (2019). Comparing two results: Hake Gain and Dellow Gain, to analyze FCI data in active learning process. US-China Education Review, 9(1), 31-39. https://doi.org/10.17265/2161-623X/2019.01.003
  • Foegen, A. (2000). Technical adequacy of general outcome measures for middle school mathematics. Diagnostique, 25(3), 175-203. https://doi.org/10.1177/073724770002500301
  • Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review of the literature. The Journal of Special Education, 41(2), 121-139. https://doi.org/10.1177/00224669070410020101
  • Fredricks, J. A., & McColskey, W. (2012). The measurement of student engagement: A comparative analysis of various methods and student self-report instruments. In Handbook of research on student engagement (pp. 763-782). Springer. https://doi.org/10.1007/978-1-4614-2018-7_37
  • Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59-109. https://doi.org/10.3102/00346543074001059
  • Fuchs, L. S., & Fuchs, D. (1986). Linking assessment to instructional intervention: An overview. School Psychology Review, 15(3), 318-323. https://doi.org/10.1080/02796015.1986.12085236
  • Hake, R. (2002). Lessons from the physics education reform effort. Conservation Ecology, 5(2), 28. https://doi.org/10.5751/ES-00286-050228
  • Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64-74. https://doi.org/10.1119/1.18809
  • Hake, R. R. (1999). Analyzing change/gain scores. http://www.physics.indiana.edu/~sdi/AnalyzingChange-Gain.pdf
  • Hammons, J. O., & Barnsley, J. R. (1992). Everything you need to know about developing a grading plan for your course (well, almost). Journal on Excellence in College Teaching, 3, 51-68.
  • Helme, S., & Clarke, D. (2001). Identifying cognitive engagement in the mathematics classroom. Mathematics Education Research Journal, 13(2), 133-153. https://doi.org/10.1007/BF03217103
  • Knight, J. K., Wise, S. B., & Southard, K. M. (2013). Understanding clicker discussions: Student reasoning and the impact of instructional cues. CBE—Life Sciences Education, 12(4), 645-654. https://doi.org/10.1187/cbe.13-05-0090
  • Marx, J. D., & Cummings, K. (2007). Normalized change. American Journal of Physics, 75(1), 87-91. https://doi.org/10.1119/1.2372468
  • McCrickerd, J. (2012). What can be fairly factored into final grades? Teaching Philosophy, 35(3), 275-291. https://doi.org/10.5840/teachphil201235329
  • McGowen, M. A., & Davis, G. E. (2001). Changing elementary teachers’ attitudes to algebra. In H. Chick, K. Stacey, J. Vincent, & J. Vincent (Eds.), Proceedings of the 12th ICMI Study on The Future of the Teaching and Learning of Algebra (vol. 2, pp. 438-335), The University of Melbourne, Australia. http://repository.unimelb.edu.au/10187/2812
  • McGowen, M. A., & Davis, G. E. (2002). Growth and development of pre-service elementary teachers’ mathematical knowledge. In D. S. Mewborn (Ed.), Proceedings of the XIX Annual Meeting, North American Chapter of International Group for the Psychology of Mathematics Education (vol. 3, pp. 1135-1144). Athens, Georgia. https://eric.ed.gov/?q=ED471747
  • McGowen, M. A., & Davis, G. E. (2014). Individual gain and engagement with mathematical understanding. https://arxiv.org/pdf/1406.2269.pdf
  • Milton, O., Pollio, H. R., & Eison, J. A. (1986). Making sense of college grades. Jossey-Bass.
  • Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research, and Evaluation, 7(1), 3.
  • Moskal, B. M. (2002). Recommendations for developing classroom performance assessments and scoring rubrics. Practical Assessment, Research, and Evaluation, 8(1), 14.
  • Redish, E. F., Saul, J. M., & Steinberg, R. N. (1997). On the effectiveness of active-engagement microcomputer-based laboratories. American Journal of Physics, 65(1), 45-54. https://doi.org/10.1119/1.18498
  • Setiawan, A. R., & Kudus, M. (2020). What is the best way to analyze pre–post data? EdArXiv. https://doi.org/10.35542/osf.io/h4e6q
  • Skemp, R. R. (1976). Relational understanding and instrumental understanding. Mathematics Teaching, 77, 153-166.
  • Smith, M. K., Wood, W. B., Krauter, K., & Knight, J. K. (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE—Life Sciences Education, 10(1), 55-63. https://doi.org/10.1187/cbe.10-08-0101
  • Törnqvist, L., Vartia, P., & Vartia, Y. O. (1985). How should relative changes be measured? The American Statistician, 39(1), 43-46. https://doi.org/10.1080/00031305.1985.10479385
  • Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14. https://doi.org/10.1016/j.stueduc.2011.03.001
  • Willingham, W. W., Pollack, J. M., & Lewis, C. (2002). Grades and test scores: Accounting for observed differences. Journal of Educational Measurement, 39(1), 1-37. https://doi.org/10.1111/j.1745-3984.2002.tb01133.x

License

This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.