Jing Liu

Assistant Professor in Education Policy

M-Powering Teachers: Natural Language Processing Powered Feedback Improves 1:1 Instruction and Student Outcomes


Conference paper


Dorottya Demszky, Jing Liu
L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale, 2023, pp. 59–69


Cite

APA
Demszky, D., & Liu, J. (2023). M-Powering Teachers: Natural Language Processing Powered Feedback Improves 1:1 Instruction and Student Outcomes. In L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale (pp. 59–69). https://doi.org/10.1145/3573051.3593379


Chicago/Turabian
Demszky, Dorottya, and Jing Liu. “M-Powering Teachers: Natural Language Processing Powered Feedback Improves 1:1 Instruction and Student Outcomes.” In L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale, 59–69, 2023. https://doi.org/10.1145/3573051.3593379.


MLA
Demszky, Dorottya, and Jing Liu. “M-Powering Teachers: Natural Language Processing Powered Feedback Improves 1:1 Instruction and Student Outcomes.” L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale, 2023, pp. 59–69, doi:10.1145/3573051.3593379.


BibTeX

@inproceedings{demszky2023mpowering,
  title = {M-Powering Teachers: Natural Language Processing Powered Feedback Improves 1:1 Instruction and Student Outcomes},
  author = {Demszky, Dorottya and Liu, Jing},
  booktitle = {L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale},
  year = {2023},
  month = dec,
  pages = {59--69},
  doi = {10.1145/3573051.3593379}
}

Although learners are being connected 1:1 with instructors at an increasing scale, most of these instructors do not receive effective, consistent feedback to help them improve. We deployed M-Powering Teachers, an automated tool based on natural language processing, to give instructors feedback on dialogic instructional practices, including their uptake of student contributions, talk time, and questioning practices, in a 1:1 online learning context. We conducted a randomized controlled trial on Polygence, a research mentorship platform for high schoolers (n = 414 mentors), to evaluate the effectiveness of the feedback tool. We find that the intervention improved mentors' uptake of student contributions by 10%, reduced their talk time by 5%, and improved students' experience with the program as well as their relative optimism about their academic future. These results corroborate existing evidence that scalable, low-cost automated feedback can improve instruction and learning in online educational contexts.
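To make the kinds of transcript-level measures named in the abstract concrete, here is a minimal, illustrative Python sketch of two of them: a speaker's talk-time share and question rate. This is not the authors' M-Powering Teachers implementation; the utterance schema, word-count proxy for talk time, and question-mark heuristic are all assumptions made for the example.

```python
# Illustrative sketch only: toy transcript-level measures (not the M-Powering
# Teachers tool). The Utterance schema and heuristics below are assumptions.
from dataclasses import dataclass


@dataclass
class Utterance:
    speaker: str  # e.g. "mentor" or "student" (assumed labels)
    text: str


def talk_time_share(transcript: list[Utterance], speaker: str = "mentor") -> float:
    """Approximate talk-time share by the fraction of words attributed to `speaker`."""
    total = sum(len(u.text.split()) for u in transcript)
    spoken = sum(len(u.text.split()) for u in transcript if u.speaker == speaker)
    return spoken / total if total else 0.0


def question_rate(transcript: list[Utterance], speaker: str = "mentor") -> float:
    """Fraction of the speaker's utterances containing a question mark (crude proxy)."""
    own = [u for u in transcript if u.speaker == speaker]
    if not own:
        return 0.0
    return sum("?" in u.text for u in own) / len(own)


if __name__ == "__main__":
    demo = [
        Utterance("mentor", "What did you find when you ran the survey?"),
        Utterance("student", "Most respondents preferred the second design."),
        Utterance("mentor", "Interesting. Why do you think that is?"),
    ]
    print(f"Mentor talk-time share: {talk_time_share(demo):.2f}")
    print(f"Mentor question rate: {question_rate(demo):.2f}")
```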