
Paper deep dive

AI Misuse in Education Is a Measurement Problem: Toward a Learning Visibility Framework

Eduardo Davalos, Yike Zhang

Year: 2026 · Venue: arXiv preprint · Area: cs.CY · Type: Preprint · Embeddings: 40

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 93%

Last extracted: 3/13/2026, 12:37:12 AM

Summary

The paper addresses the challenge of AI misuse in education by reframing it from a detection problem to a measurement problem. It introduces the 'Learning Visibility Framework,' which advocates for transparency and process-based assessment through three principles: clear specification of AI use, valuing learning processes alongside outcomes, and establishing transparent timelines of student activity to foster trust and ethical integration.

Entities (5)

Learning Visibility Framework · framework · 100%
Conversational AI · technology · 98%
Cognitive Offloading · concept · 95%
Multimodal Learning Analytics · field-of-study · 95%
Academic Integrity · concept · 90%

Relation Signals (3)

Learning Visibility Framework addresses AI Misuse

confidence 95% · we propose the Learning Visibility Framework... to address this issue

Learning Visibility Framework incorporates Multimodal Learning Analytics

confidence 90% · The framework emphasizes transparency... drawing on research in... multimodal timeline reconstruction

Conversational AI causes Cognitive Offloading

confidence 85% · we focus specifically on a form of AI misuse characterized by cognitive offloading through conversational agents
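Each extracted relation signal carries a confidence score, which suggests a natural pre-ingestion step: thresholding before a signal is promoted into the knowledge graph. A minimal Python sketch of that filtering (the record fields, relation labels, and the 0.90 cutoff are illustrative assumptions, not part of the actual extraction pipeline):

```python
# Filter extracted relation signals by confidence before graph ingestion.
# The record fields and relation labels below are illustrative only.
MIN_CONFIDENCE = 0.90

signals = [
    {"head": "Learning Visibility Framework", "rel": "ADDRESSES",
     "tail": "AI Misuse", "confidence": 0.95},
    {"head": "Learning Visibility Framework", "rel": "INCORPORATES",
     "tail": "Multimodal Learning Analytics", "confidence": 0.90},
    {"head": "Conversational AI", "rel": "CAUSES",
     "tail": "Cognitive Offloading", "confidence": 0.85},
]

def promote(signals, threshold=MIN_CONFIDENCE):
    """Keep only signals that meet the confidence threshold."""
    return [s for s in signals if s["confidence"] >= threshold]

kept = promote(signals)
print([s["rel"] for s in kept])  # the 0.85 signal is held back for review
```

Under this cutoff, the first two signals above would be promoted and the third would be queued for manual review rather than dropped.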

Cypher Suggestions (2)

Find all principles associated with the Learning Visibility Framework · confidence 90% · unvalidated

MATCH (f:Framework {name: 'Learning Visibility Framework'})-[:HAS_PRINCIPLE]->(p:Principle) RETURN p.description

Identify technologies linked to educational challenges · confidence 85% · unvalidated

MATCH (t:Technology)-[:CONTRIBUTES_TO]->(c:Challenge) RETURN t.name, c.name
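Both suggestions are flagged as unvalidated, so a sensible precaution is to confirm a suggested query is read-only before executing it against a live graph. A minimal Python sketch of such a guard (the write-clause list is an assumption and deliberately crude, a lexical check rather than a full Cypher parser):

```python
# Reject Cypher that contains write clauses before running an unvalidated
# suggestion. The clause list is illustrative, not exhaustive.
WRITE_CLAUSES = ("CREATE", "MERGE", "DELETE", "SET", "REMOVE", "DROP")

def is_read_only(query: str) -> bool:
    """Crude lexical check: no write clause appears as a word in the query."""
    tokens = query.upper().replace("(", " ").replace(")", " ").split()
    return not any(tok in WRITE_CLAUSES for tok in tokens)

suggestion = (
    "MATCH (f:Framework {name: 'Learning Visibility Framework'})"
    "-[:HAS_PRINCIPLE]->(p:Principle) RETURN p.description"
)
print(is_read_only(suggestion))                    # True
print(is_read_only("MATCH (n) DETACH DELETE n"))   # False
```

A production system would instead rely on the database's own access controls (for example, running suggestions through a read-only session or user), but a guard like this is a cheap first line of defense.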

Abstract

The rapid integration of conversational AI systems into educational settings has intensified ethical concerns about academic integrity, fairness, and students' cognitive development. Institutional responses have largely centered on AI detection tools and restrictive policies, yet such approaches have proven unreliable and ethically contentious. This paper reframes AI misuse in education not primarily as a detection problem, but as a measurement problem rooted in the loss of visibility into the learning process. When AI enters the assessment loop, educators often retain access to final outputs but lose valuable insight into how those outputs were produced. Drawing on research in cognitive offloading, learning analytics, and multimodal timeline reconstruction, we propose the Learning Visibility Framework, grounded in three principles: clear specification and modeling of acceptable AI use, recognition of learning processes as assessable evidence alongside outcomes, and the establishment of transparent timelines of student activity. Rather than promoting surveillance, the framework emphasizes transparency and shared evidence as foundations for ethical AI integration in classroom settings. By shifting focus from adversarial detection toward process visibility, this work offers a principled pathway for aligning AI use with educational values while preserving trust and transparency between students and educators.

Tags

ai-safety (imported, 100%)
cscy (suggested, 92%)
preprint (suggested, 88%)

Links

PDF not stored locally. Use the link above to view on the source site.

Full Text

39,657 characters extracted from source content.


AI Misuse in Education Is a Measurement Problem: Toward a Learning Visibility Framework

Eduardo Davalos 1[0000-0001-7190-7273] and Yike Zhang 2[0000-0003-3503-2996]
1 Trinity University, San Antonio, TX 78209, USA. davalosedu@trinity.edu
2 St. Mary's University, San Antonio, TX 78228, USA. yzhang5@stmarytx.edu

Abstract. The rapid integration of conversational AI systems into educational settings has intensified ethical concerns about academic integrity, fairness, and students' cognitive development. Institutional responses have largely centered on AI detection tools and restrictive policies, yet such approaches have proven unreliable and ethically contentious. This paper reframes AI misuse in education not primarily as a detection problem, but as a measurement problem rooted in the loss of visibility into the learning process. When AI enters the assessment loop, educators often retain access to final outputs but lose valuable insight into how those outputs were produced. Drawing on research in cognitive offloading, learning analytics, and multimodal timeline reconstruction, we propose the Learning Visibility Framework, grounded in three principles: clear specification and modeling of acceptable AI use, recognition of learning processes as assessable evidence alongside outcomes, and the establishment of transparent timelines of student activity. Rather than promoting surveillance, the framework emphasizes transparency and shared evidence as foundations for ethical AI integration in classroom settings. By shifting focus from adversarial detection toward process visibility, this work offers a principled pathway for aligning AI use with educational values while preserving trust and transparency between students and educators.

Keywords: AI in Education · Learning Visibility · Cognitive Offloading · Learning Analytics · AI Ethics

1 Introduction

The release of ChatGPT in 2022 marked a turning point in education.
Suddenly, the rapid adoption of the technology created widespread uncertainty among students and educators regarding the near-immediate transformation of teaching, learning, and assessment practices. Although early studies have begun to examine both short-term and long-term effects of conversational AI tools in educational contexts [3,7,10,11], it remains premature to determine the full extent of their impact on student learning and academic development.

arXiv:2603.07834v1 [cs.CY] 8 Mar 2026

In the early stages of this technological shift, many educators turned to AI detection software, similar in spirit to plagiarism detection systems, in an effort to preserve academic integrity [28]. However, these tools introduced new ethical and practical concerns. False positives led to disputes, reputational harm, and strained tensions between students and instructors [19,20]. Beyond raising questionable accusations of academic misconduct, the limitations and inaccuracies of AI detection systems erode institutional trust. It became increasingly clear that AI detection tools are neither sufficiently reliable nor ethically robust for high-stakes academic misconduct determinations [12].

Despite the shortcomings of many detection-based approaches, educators have limited alternatives for spotting and addressing AI misuse among students. Without effective mechanisms for observing and guiding students' AI use, teachers face difficulty in helping students develop responsible, self-regulated learning practices alongside AI tools. As illustrated in Fig. 1, the core issue is not merely detection, but visibility of the learning process. When AI enters the assessment process, learning becomes opaque. Educators lose valuable insight into how students develop their final answers, making it difficult to distinguish productive AI-supported learning from harmful cognitive offloading.

Fig. 1. Learning Visibility Problem: When assessment relies primarily on observable outcomes such as grades or time spent, the underlying learning process remains opaque. The student's cognitive and meta-cognitive activity becomes a "black box," limiting educators' ability to distinguish productive AI-supported learning from harmful cognitive offloading.

The absence of reliable evidence regarding students' learning processes has broader consequences. Instructors may suspect AI misuse but lack sufficient documentation or evidence to uphold academic integrity policies. This situation enables unchecked and unregulated misuse in the classroom setting, where low-quality AI-generated outputs are submitted without meaningful engagement. Furthermore, when AI-generated work receives equal or higher grades than human-produced work, a feedback loop emerges that encourages greater reliance on AI tools. Over time, this dynamic risks reshaping academic norms to privilege output over understanding.

In this paper, we focus specifically on a form of AI misuse characterized by cognitive offloading through conversational agents. Students may delegate substantial portions of their coursework to AI systems and submit generated outputs with minimal oversight, reflection, or verification. Prior research on cognitive offloading suggests that excessive reliance on external systems can lead to shallow understanding and reduced opportunities to engage in higher-order cognitive and meta-cognitive processes [24,18,15]. Experimental studies have shown that when AI support is removed, students who previously relied heavily on such tools may struggle and perform worse than control groups without AI assistance [13]. These findings raise concerns about long-term skill development and conceptual mastery in course materials. Yet, AI use is not inherently detrimental. AI tools can provide scaffolding, feedback, and support when used appropriately.
The critical distinction lies in whether AI supplements cognition or replaces it. Ideally, AI should support meta-cognitive processes by encouraging reflection, planning, and evaluation, rather than substituting for core cognitive effort. However, ensuring such constructive use presents significant challenges in pedagogical design and development. The proliferation of AI tools across digital learning environments further complicates the situation. Many platforms integrate AI assistance directly into workflows, and disabling these features is often impractical. Additionally, the broader transition from paper-based to browser-based assessments has increased the accessibility and temptation of AI misuse. In response, some educators have reverted to in-person paper examinations using bluebooks to minimize AI interference. While this strategy reduces immediate risks, it reintroduces logistical burdens, environmental costs, and grading inefficiencies.

Rather than framing the problem solely as one of detection or prohibition, we argue that AI misuse is fundamentally a measurement problem. The central challenge is how to make learning processes visible in AI-assisted environments. To address this issue, we propose a visibility-focused framework grounded in three principles (P1-3) designed to support ethical, transparent, and pedagogically sound AI integration in classrooms.

– P1: Clear specification, modeling, and disclosure of valid and invalid AI use in each assessment.
– P2: Recognition that both learning outcomes and learning processes, including student actions, revisions, and methods, are essential components of assessment.
– P3: Establishment of a transparent timeline of student actions and records to serve as shared evidence of learning.
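One way to read P1-P3 is as a per-assessment specification that could be made machine-readable by a course platform. A hypothetical Python sketch of such a policy object (every field name here is an assumption for illustration; the paper itself prescribes no schema):

```python
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    """Hypothetical per-assessment policy mirroring the three principles."""
    # P1: explicit valid/invalid AI uses, disclosed to students up front.
    allowed_uses: list = field(default_factory=list)
    prohibited_uses: list = field(default_factory=list)
    # P2: which process artifacts count as assessable evidence.
    process_evidence: list = field(default_factory=list)
    # P3: whether a shared activity timeline is kept for this assessment.
    keep_timeline: bool = True

# Example: a writing-intensive course policy of the kind the paper describes.
essay_policy = AIUsePolicy(
    allowed_uses=["brainstorming", "high-level feedback"],
    prohibited_uses=["generating submitted prose"],
    process_evidence=["revision history", "intermediate drafts"],
)
print(essay_policy.keep_timeline)  # True by default
```

The point of such a structure is not enforcement but disclosure: the same object that configures the platform can be rendered to students as the assessment's stated expectations.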
In this paper, we elaborate on these three principles, relate them to existing pedagogical and technological practices, and discuss potential implementations to promote fairness, transparency, and responsible AI integration in education.

2 Background

This section reviews prior research on AI misuse in education, cognitive offloading, and the role of learning analytics in understanding the learning process. We draw from empirical studies across programming, writing, and neuroscience to situate the problem of AI-assisted learning within broader discussions of evidence, measurement, and pedagogical design.

Recent empirical work has examined how AI-assisted learning environments influence both students' performance and understanding. In a quasi-experimental study of 151 first-year computer science students, [33] found substantial short-term gains in AI-assisted programming tasks, yet weak transfer to independent problem-solving contexts. These findings suggest that performance improvements do not necessarily translate into long-term learning. Qualitative reflections indicated that students were often aware of gaps in their understanding but continued to rely heavily on AI systems. Drawing on Bloom's taxonomy and cognitive offloading theory, the authors argue that sustained AI reliance may reduce higher-order cognitive engagement and algorithmic reasoning. Rather than portraying AI assistance as inherently harmful, the study calls for structured pedagogical integration that promotes meta-cognitive reflection and learner autonomy.

Similar findings have emerged in writing contexts. In studies of AI-assisted essay composition, [1] observed that students who relied extensively on AI-generated text exhibited a superficial understanding of their own submissions. Participants often struggled to explain arguments, justify structural decisions, or recall specific claims within their essays.
These outcomes suggest that excessive AI reliance may reduce opportunities for deep processing and overall knowledge construction. Beyond behavioral and performance measures, emerging neuroscience-based research provides further insight into the cognitive effects of AI-assisted learning. In an electroencephalography study of essay writing, [21] compared participants using large language models (LLMs), search engines, and no external tools. Across sessions, LLM users exhibited weaker and less distributed brain connectivity than Brain-only participants, with cognitive activity scaling down in relation to external tool use. LLM users also reported lower ownership of their work and demonstrated difficulty recalling or accurately quoting their own essays. Although such findings require careful interpretation, they suggest that sustained reliance on LLMs may be associated with reduced cognitive engagement and diminished internalization of knowledge. Together, these neural and behavioral patterns reinforce concerns that AI-mediated task completion can alter the intensity and distribution of cognitive effort during learning sessions.

Together, these studies highlight the need to distinguish between ethical AI use and harmful cognitive substitution. They also underscore a central challenge for educators: determining how to observe and measure the quality of student engagement when AI tools are involved. A critical component of monitoring AI use is the concept of evidence. In educational contexts, evidence consists of observable data points that substantiate claims about student understanding and effort. Such evidence can include data artifacts such as revision timestamps, intermediate problem-solving steps, code iteration histories, or interaction logs. When integrated with learning analytics techniques, these artifacts can provide insight into the processes that lead to a final submission.
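Artifacts like these are ultimately timestamped records, so simple aggregate indicators fall out of them directly. A Python sketch under an assumed event format (the field names and event kinds are illustrative, not a schema proposed by the paper):

```python
# Summarize a revision log into simple process-evidence indicators.
# The event format below is assumed for illustration only.
from datetime import datetime

events = [
    {"t": "2026-03-01T19:02:00", "kind": "insert", "chars": 420},
    {"t": "2026-03-01T19:40:00", "kind": "delete", "chars": 55},
    {"t": "2026-03-02T08:15:00", "kind": "insert", "chars": 310},
]

def summarize(events):
    """Reduce a list of timestamped edit events to headline indicators."""
    times = [datetime.fromisoformat(e["t"]) for e in events]
    span_hours = (max(times) - min(times)).total_seconds() / 3600
    return {
        "sessions_span_hours": round(span_hours, 1),
        "chars_inserted": sum(e["chars"] for e in events if e["kind"] == "insert"),
        "chars_deleted": sum(e["chars"] for e in events if e["kind"] == "delete"),
        "events": len(events),
    }

print(summarize(events))
```

Indicators like these are the kind of first-pass summary an instructor might scan before drilling into the full record, consistent with the paper's emphasis on keeping interpretation in human hands.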
The importance of integrating analytics into pedagogy has been emphasized in frameworks that position teachers as active interpreters of learning data. For example, [34] describes a teaching-with-analytics model in which analytics are not merely descriptive dashboards but integral components of instructional decision-making. In this view, teachers gather, interpret, and act upon evidence generated through student interactions with learning systems. As illustrated in Fig. 2, analytics-supported formative feedback cycles can inform summative assessment and adaptive instruction.

Fig. 2. Teacher and Student Feedback Cycles: The relationship between these two interlinked cycles of intervention, analytics, and feedback. The inner cycle is composed of formative assessments while the outer cycle illustrates the summative cycle of learning analytics that aids teachers' adaptive instruction and assessment.

Despite this progress, many current learning analytics systems remain limited in scope. Commercial and institutional platforms often provide unimodal log-based indicators such as grades, time spent, and login frequency [5]. While useful for identifying broad participation trends, these measures capture only a narrow portion of the learning process. They do not adequately reflect the complexity of cognitive, meta-cognitive, and social dynamics involved in authentic learning tasks. To address these limitations, the field of multimodal learning analytics has emerged. Multimodal approaches integrate diverse data sources, including interaction logs [6,26], gaze tracking [8,9], affect [30], posture [23], speech [25], physiological signals [31], and collaborative behaviors [27], to build richer models of learner engagement and self-regulation [29,36,4]. Compared to simple log
data, multimodal systems have demonstrated improved sensitivity in identifying patterns of cognition, collaboration, and effort. Importantly, multimodal learning analytics has also advanced the reconstruction and visualization of learning timelines [14]. Timeline-based representations organize heterogeneous data streams into coherent temporal narratives of student activity. For example, [17] developed a timeline visualization tool for collaborative embodied STEM learning that combined system logs with social signal data such as gaze direction, posture, spatial position, and speech. Similarly, [22] implemented timeline analytics in nursing simulation training, enabling instructors to review student nurses' actions throughout complex scenarios and provide holistic, evidence-based feedback.

Across these contexts, timeline representations serve a common purpose: they transform dispersed interaction traces into interpretable evidence of human action. The resulting data artifacts are generated through students' observable behaviors and therefore provide indirect insight into underlying cognitive and meta-cognitive processes. In AI-assisted learning environments, timeline-based evidence can help restore visibility into otherwise opaque learning processes.

3 Learning Visibility Framework

This section introduces the three core principles of the Learning Visibility Framework and explains why they are essential for communicating expectations, monitoring engagement, and regulating AI use in educational settings. Rather than focusing on enforcement or prohibition, the framework emphasizes transparency, shared understanding, and measurable evidence of learning processes.

P1: Clear Specification and Modeling of Valid and Invalid AI Use

The first principle focuses on communication and transparency of expectations between students and educators.
In the wake of widespread concerns about AI misuse and false accusations, maintaining bidirectional trust has become increasingly important. Ambiguity regarding acceptable AI use erodes confidence on both sides. When policies are unclear or inconsistently enforced, misunderstandings multiply and the teacher-student partnership deteriorates. From the educator's perspective, it is essential to clearly define and model acceptable AI use within each assessment. Modeling extends beyond listing permitted or prohibited tools, as it includes demonstrating appropriate prompts, responsible workflows, and reflective practices that align with learning objectives. By explicitly stating valid and invalid uses of AI, instructors can reduce unintentional misuse and establish shared norms. As illustrated in Fig. 3, clear guidelines combined with example use cases and open dialogue transform policy statements into shared expectations. The teacher–student exchange described in the figure emphasizes that transparency is not merely declarative, but conversational and iterative.

Fig. 3. P1: Clear Specification and Modeling of AI Use. Explicit guidelines and example use cases, reinforced through open teacher–student dialogue, establish shared expectations and reduce ambiguity around acceptable AI use.

Effective modeling should center on cognitive and meta-cognitive development. AI systems should not remove human autonomy or replace essential cognitive effort. Instead, they should support planning, reflection, and self-evaluation. The appropriate boundary depends on the instructional context. In introductory programming assignments, where the primary objective is to develop foundational syntax understanding and problem-solving skills, instructors may prohibit AI-generated code entirely.
In contrast, in writing-intensive courses, AI tools may be allowed for brainstorming or high-level feedback, as long as students produce and revise their own substantive work. A notable challenge is that widely used conversational agents are frequently treated as tutors, yet they are not designed with pedagogical guardrails that reliably prevent over-scaffolding or solution disclosure. Without structured guidance, these systems may provide answers rather than promote content understanding. Human instructors can make similar mistakes when they offer complete solutions instead of scaffolding reasoning. The framework therefore emphasizes that modeling responsible AI use requires intentional instructional design, not merely access to technology.

P2: Valuing Both Learning Outcomes and Learning Processes

The second principle holds that both outcomes and processes are essential components of assessment. Historically, formative and summative evaluations have relied primarily on final products such as completed exams, essays, projects, or problem sets. The finished artifact served as the primary measurement instrument for judging student proficiency and mastery of course materials. However, AI systems can generate polished outputs that resemble genuine work, making it increasingly difficult to distinguish between authentic learning and delegated production based solely on final submissions. To address this limitation, assessment practices must shift from outcome-only evaluation toward incorporating process-based evidence. The learning process itself should be considered as a measurable and assessable dimension of student performance.

Fig. 4. P2: Learning Process as Evidence. Revision traces, insertions, and deletions provide visible records of student activity that allow educators to interpret effort, reasoning, and potential AI involvement beyond the final submitted product.

Observable behaviors such as revision patterns, iteration histories,
intermediate drafts, and problem-solving steps provide insight into engagement and effort. As illustrated in Fig. 4, visible traces of edits, insertions, and deletions transform the essay from a static product into a record of evolving reasoning. Process data, such as essay revisions, enable educators to ask informed questions and distinguish constructive effort from AI delegation. Some tools have begun to support this transition. For example, revision history tracking in collaborative writing platforms enables instructors to examine how students develop and refine their ideas over time [32]. These tools provide temporal records that contextualize final submissions. However, equivalent visibility mechanisms are less common in domains such as mathematics, computer science, physics, and other problem-solving disciplines. The absence of comprehensive process-tracking infrastructure limits the scalability of visibility-based assessment across curricula.

Process visibility serves not only as a deterrent to misuse but also as a pedagogical resource. Access to learning process data can reveal misconceptions, inefficient strategies, and gaps in understanding. Importantly, the interpretation of such evidence should remain the responsibility of human educators. While analytics systems can summarize or visualize behavioral data, the ethical and contextual judgment required to identify misuse or academic dishonesty must not be delegated entirely to automated systems. Retaining educator oversight preserves fairness and reduces the risk of algorithmic misclassification.

P3: Establishing a Transparent Timeline of Learning Activity

The third principle emphasizes the importance of constructing a transparent timeline of student activity as a shared artifact within the teacher-student partnership. A timeline organizes discrete interaction traces into a coherent temporal narrative of learning.
This representation functions as both a record of engagement and a tool for dialogue. As shown in Fig. 5, mapping student actions across time reveals how drafting, AI interaction, and revision unfold within a single assessment. This temporal structure enables educators to contextualize AI use rather than judge isolated edits or final submissions. Timeline-based evidence aligns with developments in multimodal learning analytics, where diverse data sources are integrated to reconstruct sequences of cognitive and behavioral events [35,16].

Fig. 5. P3: Transparent Timeline of Learning Activity. A temporal record of student actions reveals sequences of drafting, AI interaction, and revision, enabling educators to contextualize AI use and engage students in reflective dialogue about their decision-making process.

Such approaches have demonstrated applicability across varied domains and learning environments. A structured timeline enables educators to observe patterns such as abrupt content insertion, prolonged inactivity followed by rapid completion, or iterative refinement consistent with sustained effort.

Beyond identifying misuse, the timeline supports formative feedback. By reviewing sequences of actions, instructors can pinpoint moments where misconceptions arise or strategies falter. Students, in turn, gain opportunities to reflect on their workflows and decision-making processes. The timeline thus becomes not merely a surveillance mechanism but a collaborative reference point for discussion and improvement.

Integrating the Three Principles

Taken together, the three principles establish a coherent framework for addressing AI misuse without defaulting to adversarial detection systems. Clear communication and modeling of acceptable AI use provide normative guidance. Valuing learning processes alongside outcomes expands the evidentiary basis of assessment.
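Once events sit on a single timeline, patterns like prolonged inactivity followed by abrupt insertion can be surfaced mechanically. A minimal Python sketch (the thresholds and event shape are illustrative choices, not values from the paper, and any flag would be a prompt for dialogue, not a verdict):

```python
# Flag timeline segments worth a closer look: a long idle gap followed by a
# large burst of inserted content. Thresholds are illustrative only.
IDLE_GAP_SECONDS = 60 * 60   # one hour of inactivity
BURST_CHARS = 500            # "abrupt insertion" size, in characters

def flag_segments(timeline):
    """timeline: time-sorted list of (timestamp_seconds, chars_inserted)."""
    flags = []
    for (t0, _), (t1, chars) in zip(timeline, timeline[1:]):
        if t1 - t0 >= IDLE_GAP_SECONDS and chars >= BURST_CHARS:
            flags.append((t1, chars))
    return flags

# Steady work, then a 7000-second gap ending in a 900-character burst.
timeline = [(0, 120), (600, 80), (5000, 40), (12000, 900)]
print(flag_segments(timeline))  # [(12000, 900)]
```

Iterative refinement (many small, closely spaced edits) produces no flags under this rule, which is exactly the asymmetry the paper describes between sustained effort and abrupt delegation.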
Transparent timelines transform interaction traces into interpretable artifacts that support dialogue and accountability. When implemented collectively, these principles aim to reduce distrust and promote fairness. Rather than framing AI use as inherently suspect, the framework positions visibility and shared evidence as the foundation for ethical integration. By making goals, processes, and expectations transparent, educators and students can move toward consensus-driven practices that assist learning while preserving academic integrity.

4 Challenges, Limitations, and Future Work

While the Learning Visibility Framework offers a structured approach to addressing AI misuse, several important challenges and limitations must be acknowledged.

A primary concern involves student privacy. Systems designed to capture and analyze process data necessarily collect detailed records of student actions during assessments. Without careful design, such data collection risks infringing upon student autonomy and confidentiality. Educational technology platforms and instructors must therefore transparently disclose what data are collected, how they are used, and who has access to them. Informed consent and clear disclosure mechanisms are essential. Moreover, stakeholders must adopt principles of data minimization, collecting only what is necessary to support pedagogical goals. Ethical implementation requires that process visibility not become disproportionate surveillance in practice. Protecting student safety and dignity must remain a foundational design principle and constraint.

A second challenge concerns the volume and complexity of process data. Detailed interaction traces can quickly accumulate into large and cognitively demanding datasets. Many existing learning analytics dashboards already present instructors with dense visualizations that are difficult to interpret holistically.
If poorly designed, visibility systems may increase instructor workload rather than reduce uncertainty. Future work should therefore focus on human-centered interface design that supports efficient interpretation without cognitive overload. This includes prioritizing meaningful patterns over raw logs, providing layered summaries with drill-down capabilities, and aligning analytics displays with instructional decision-making processes. Effective visibility should enhance pedagogical insight rather than overwhelm educators with excessive detail.

A further limitation involves the inevitability of circumvention. Students who are highly motivated to evade detection may adapt their behavior to bypass visibility mechanisms. For instance, instead of copy-pasting AI-generated text, a student might manually transcribe it to create the appearance of an authentic revision history. However, such adversarial strategies are not unique to AI-enabled environments. Academic dishonesty has long included external assistance such as paid writing services or unauthorized collaboration [2]. The proposed framework does not aim to eliminate all forms of deliberate cheating.

Overall, the framework primarily targets widespread, low-reflection misuse that arises from ambiguity, convenience, or lack of guidance. By emphasizing transparency, shared evidence, and pedagogical alignment, the goal is to reduce unintentional or habitual cognitive offloading. Future research should empirically evaluate whether visibility-based approaches can meaningfully guide student behavior, improve meta-cognitive engagement, and maintain trust while respecting ethical boundaries. Taken together, these challenges highlight that visibility is not a purely technical solution. It is a socio-technical intervention that must balance privacy, usability, fairness, and instructional intent.
Continued interdisciplinary collaboration among educators, researchers, and technology designers is necessary to refine and responsibly implement this framework.

5 Conclusion

The swift integration of conversational AI systems into educational settings has intensified concerns about academic integrity, cognitive development, and fairness. Initial responses have focused on detection and prohibition, yet these approaches have proven unreliable and ethically problematic. This paper reframes AI misuse not as a detection problem, but as a measurement problem. When AI enters the assessment loop, the central loss is visibility into the learning process. Drawing on research in cognitive offloading and learning analytics, we argue that responsible AI integration requires restoring transparency into student activity. We introduce the Learning Visibility Framework, grounded in three principles: clear specification and modeling of acceptable AI use, recognition of learning processes as assessable evidence alongside outcomes, and the establishment of transparent timelines of student actions. Together, these principles shift the focus from adversarial enforcement toward shared evidence and pedagogical clarity. Rather than attempting to eliminate all types of academic misconduct, the framework seeks to reduce ambiguity, support meta-cognitive engagement, and preserve trust between students and educators. Visibility is not a surveillance mechanism but a socio-technical strategy for aligning ethical AI use with long-standing educational values and well-grounded learning theory. As AI systems continue to evolve, institutions must move beyond reactive enforcement toward principled integration. Ultimately, the ethical use of AI in education depends not only on what students produce, but on how they actively learn.

AI Use

ChatGPT was used for text editing and formatting of the manuscript.
The authors have manually inspected, corrected, and verified the generated text and take full responsibility for the manuscript's claims and content.

Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.

References

1. Ahmedtelba, E.: Critical integration of generative AI in higher education: Cognitive, pedagogical, and ethical perspectives. London Journal of Research in Humanities and Social Sciences 25(13), 1–12 (Sep 2025), https://journalspress.uk/index.php/LJRHSS/article/view/1601

2. Amigud, A., Lancaster, T.: 246 reasons to cheat: An analysis of students' reasons for seeking to outsource academic work. Computers & Education 134, 98–107 (2019). https://doi.org/10.1016/j.compedu.2019.01.017

3. Beckingham, S., Lawrence, J., Powell, S., Hartley, P.: Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment. Taylor & Francis (2024)

4. Blikstein, P., Worsley, M.: Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. Journal of Learning Analytics 3(2), 220–238 (Sep 2016). https://doi.org/10.18608/jla.2016.32.11

5. Cohn, C., Davalos, E., Vatral, C., Fonteles, J.H., Wang, H.D., Ma, M., Biswas, G.: Multimodal methods for analyzing learning and training environments: A systematic literature review (2024), https://arxiv.org/abs/2408.14491

6. Cohn, C., Hutchins, N., Le, T., Biswas, G.: A chain-of-thought prompting approach with LLMs for evaluating students' formative assessment responses in science. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 38, pp. 23182–23190 (2024)

7.
Cukurova, M., Kent, C., Luckin, R.: Artificial intelligence and multimodal data in the service of human decision-making: A case study in debate tutoring. British Journal of Educational Technology 50(6), 3032–3046 (Nov 2019). https://doi.org/10.1111/bjet.12829

8. Davalos, E., Srivastava, N., Zhang, Y., Goodwin, A., Biswas, G.: GazeViz: A web-based approach for visualizing learner gaze patterns in online educational environment (2024). https://doi.org/10.58459/icce.2024.4974

9. Davalos, E., Vatral, C., Cohn, C., Horn Fonteles, J., Biswas, G., Mohammed, N., Lee, M., Levin, D.: Identifying Gaze Behavior Evolution via Temporal Fully-Weighted Scanpath Graphs. In: ACM International Conference Proceeding Series. pp. 476–487. Association for Computing Machinery, Arlington, TX, USA (Mar 2023). https://doi.org/10.1145/3576050.3576117

10. Davalos, E., Zhang, Y., Srivastava, N., Salas, J.A., McFadden, S., Cho, S.J., Biswas, G., Goodwin, A.: LLMs as educational analysts: Transforming multimodal data traces into actionable reading assessment reports. In: Cristea, A.I., Walker, E., Lu, Y., Santos, O.C., Isotani, S. (eds.) Artificial Intelligence in Education. pp. 191–204. Springer Nature Switzerland, Cham (2025)

11. Davar, N.F., Dewan, M.A.A., Zhang, X.: AI Chatbots in Education: Challenges and Opportunities. Information 16(3), 235 (Mar 2025). https://doi.org/10.3390/info16030235

12. Deep, P.D., Edgington, W.D., Ghosh, N., Rahaman, M.S.: Evaluating the effectiveness and ethical implications of AI detection tools in higher education. Information 16(10) (2025). https://doi.org/10.3390/info16100905

13. Doleck, T., Agand, P., Pirrotta, D.: Generative AI in data science programming: Differences in performance between groups with and without AI-assistance.
Education and Information Technologies 30(18), 26857–26875 (Dec 2025). https://doi.org/10.1007/s10639-025-13801-4

14. Echeverria, V., Martinez-Maldonado, R., Granda, R., Chiluiza, K., Conati, C., Buckingham Shum, S.: Driving data storytelling from learning design. In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge. pp. 131–140. ACM, Sydney, New South Wales, Australia (Mar 2018). https://doi.org/10.1145/3170358.3170380

15. Fitriani, A.N., Risaldy, R., Rauf, A., Afidatunisa, S.: Academic dependency, AI literacy, and cognitive offloading predict students' cognitive ability in generative AI learning. Artificial Intelligence in Lifelong and Life-Course Education 1(2), 53–66 (2026)

16. Fonteles, J., Davalos, E., Ashwin, T.S., Zhang, Y., Zhou, M., Ayalon, E., Lane, A., Steinberg, S., Anton, G., Danish, J., Enyedy, N., Biswas, G.: A first step in using machine learning methods to enhance interaction analysis for embodied learning environments. In: Olney, A.M., Chounta, I.A., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds.) Artificial Intelligence in Education. pp. 3–16. Springer Nature Switzerland, Cham (2024)

17. Fonteles, J.H., Cohn, C., Ayalon, E., Zhou, M., T.S., A., Davalos, E., Li, Z., Rayala, S., Mereddy, D., Coursey, A., Jain, S., Zhang, Y., Enyedy, N., Danish, J., Biswas, G.: Analyzing embodied learning in classroom settings: A human-in-the-loop AI approach for multimodal learning analytics. Learning and Instruction 103, 102274 (2026). https://doi.org/10.1016/j.learninstruc.2025.102274

18. Gerlich, M.: AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies 15(1) (2025). https://doi.org/10.3390/soc15010006

19.
Gorichanaz, T.: Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice 9(2), 183–196 (2023). https://doi.org/10.1080/23735082.2023.2254787

20. King, A., Biri, A.K., Arilin, A., Lenand, M., Traigo, M., Jaji, G., Salasain, R., Al-Basheer, Y., Abdilla: AI dilemma in schoolwork: Student anxiety and fairness perceptions in AI schoolwork accusations. Journal of Education and Practice 16, 1–15 (Oct 2025). https://doi.org/10.7176/JEP/16-11-01

21. Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X.H., Beresnitzky, A.V., Braunstein, I., Maes, P.: Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task (2025), https://arxiv.org/abs/2506.08872

22. Martinez-Maldonado, R., Echeverria, V., Fernandez Nieto, G., Buckingham Shum, S.: From Data to Insights: A Layered Storytelling Approach for Multimodal Learning Analytics. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp. 1–15. ACM, Honolulu, HI, USA (Apr 2020). https://doi.org/10.1145/3313831.3376148

23. Mitri, D.D.: Detecting Medical Simulation Errors with Machine Learning and Multimodal Data

24. Moqbel, D.M.: Impact of AI-based personalized English learning on cognitive offloading and formal curriculum. Journal of English Studies in Arabia Felix 4(2), 50–64 (Dec 2025). https://doi.org/10.56540/jesaf.v4i2.127

25. Munoz, R., Villarroel, R., Barcelos, T.S., Souza, A., Merino, E., Guiñez, R., Silva, L.A.: Development of a Software that Supports Multimodal Learning Analytics: A Case Study on Oral Presentations

26. Munshi, A., Biswas, G., Davalos, E., Logan, O., Rushdy, M.: Adaptive Scaffolding to Support Strategic Learning in an Open-Ended Learning Environment. International Conference on Computers in Education (Nov 2022)

27.
Nasir, J., Kothiyal, A., Bruno, B., Dillenbourg, P.: Many are the ways to learn: identifying multi-modal behavioral profiles of collaborative learning in constructivist activities. International Journal of Computer-Supported Collaborative Learning 16(4), 485–523 (Dec 2021). https://doi.org/10.1007/s11412-021-09358-2

28. Njee, L.C.: Online Professors' Experience With Students Misusing Artificial Intelligence (AI) in Higher Education: An Exploratory Case Study. Ph.D. thesis, National University (2025)

29. Ochoa, X., Lang, C., Siemens, G., Wise, A., Gasevic, D., Merceron, A.: Multimodal learning analytics: rationale, process, examples, and direction. The Handbook of Learning Analytics, pp. 54–65 (2022)

30. Pham, P., Wang, J.: Predicting Learners' Emotions in Mobile MOOC Learning via a Multimodal Intelligent Tutor. In: Nkambou, R., Azevedo, R., Vassileva, J. (eds.) Intelligent Tutoring Systems, Lecture Notes in Computer Science, vol. 10858, pp. 150–159. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-91464-0_15

31. Tamura, K., Lu, M., Konomi, S., Hatano, K., Inaba, M., Oi, M., Okamoto, T., Okubo, F., Shimada, A., Wang, J., Yamada, M., Yamada, Y.: Integrating Multimodal Learning Analytics and Inclusive Learning Support Systems for People of All Ages. In: Rau, P.L.P. (ed.) Cross-Cultural Design. Culture and Society, Lecture Notes in Computer Science, vol. 11577, pp. 469–481. Springer International Publishing, Cham (2019). https://doi.org/10.1007/978-3-030-22580-3_35

32. Türkay, S., Seaton, D., Ang, A.M.: Itero: A revision history analytics tool for exploring writing behavior and reflection. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. pp. 1–6.
CHI EA '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3170427.3188474

33. Vivian, R.: Coding with ChatGPT: Empirical evidence of cognitive offloading in computer science education. Clareus Scientific Science and Engineering (2025). https://doi.org/10.70012/CSSE.02.062

34. Wise, A.F., Jung, Y.: Teaching with Analytics: Towards a Situated Model of Instructional Decision-Making. Journal of Learning Analytics 6(2) (Jul 2019). https://doi.org/10.18608/jla.2019.62.4

35. Worsley, M., Abrahamson, D., Blikstein, P., Grover, S., Schneider, B., Tissenbaum, M.: Situating Multimodal Learning Analytics, p. 5 (2016)

36. Worsley, M., Blikstein, P.: A Multimodal Analysis of Making. International Journal of Artificial Intelligence in Education 28(3), 385–419 (Sep 2018). https://doi.org/10.1007/s40593-017-0160-1