Paper deep dive

Intellectual Stewardship: Re-adapting Human Minds for Creative Knowledge Work in the Age of AI

Jianwei Zhang

Year: 2026 | Venue: arXiv preprint | Area: cs.CY | Type: Preprint | Embeddings: 81

Abstract

Background: Amid the opportunities and risks introduced by generative AI, learning research needs to envision how human minds and responsibilities should re-adapt as AI continues to augment or automate various tasks. Approach: Drawing on theories of learning, intelligence, and knowledge creation, this conceptual paper proposes intellectual stewardship as a human-centered, conceptually grounded framework for advancing creative learning practices with AI. Key points: Students and teachers work as responsible governors of intellectual processes distributed across human and artificial systems, guided by five core principles. Being knowledge-wise involves understanding the evolving state of knowledge and taking purposeful actions to advance it. Being intelligence-wise emphasizes making informed choices about how to orchestrate distributed cognitive processes and resources. Being context-wise requires sensitivity to recognize opportunities and risks. Being ethics-wise foregrounds ethical judgment, responsibility, and care in the use of knowledge and intellectual power. Finally, self- and community-growing defines the overarching purpose, aligning intellectual work with personal development and the advancement of collective well-being. Contribution: The principles provide a lens for viewing the adaptation of human minds in AI-infused learning environments, calling for the development of meta-level dispositions and capabilities that characterize wisdom-oriented, socially responsible knowledge builders in the AI age.

Tags

ai-safety (imported, 100%) | cscy (suggested, 92%) | preprint (suggested, 88%)

Links

Open PDF directly →

Full Text

80,252 characters extracted from source content.

Intellectual Stewardship: Re-adapting Human Minds for Creative Knowledge Work in the Age of AI

*Note: This is a preprint.

Jianwei Zhang
Department of Educational Theory and Practices, University at Albany, State University of New York, Albany, NY
jzhang1@albany.edu

Abstract

Background: Amid the opportunities and risks introduced by generative AI, learning research needs to envision how human minds and responsibilities should re-adapt as AI continues to augment or automate various tasks. Approach: Drawing on theories of learning, intelligence, and knowledge creation, this conceptual paper proposes intellectual stewardship as a human-centered, conceptually grounded framework for advancing creative learning practices with AI. Key points: Students and teachers work as responsible governors of intellectual processes distributed across human and artificial systems, guided by five core principles. Being knowledge-wise involves understanding the evolving state of knowledge and taking purposeful actions to advance it. Being intelligence-wise emphasizes making informed choices about how to orchestrate distributed cognitive processes and resources. Being context-wise requires sensitivity to recognize opportunities and risks. Being ethics-wise foregrounds ethical judgment, responsibility, and care in the use of knowledge and intellectual power. Finally, self- and community-growing defines the overarching purpose, aligning intellectual work with personal development and the advancement of collective well-being. Contribution: The principles provide a lens for viewing the adaptation of human minds in AI-infused learning environments, calling for the development of meta-level dispositions and capabilities that characterize wisdom-oriented, socially responsible knowledge builders in the AI age.
Keywords: AI in education, Creative learning, Epistemic agency, Human-AI collaboration, Intellectual stewardship, Knowledge building

Introduction

As generative AI (GenAI) rapidly transforms how people process information, solve problems, communicate with one another, and get work done across social sectors and industries, education faces an urgent demand to reform its century-old model and develop AI-ready learners, citizens, and future workers (McKinsey & Company, 2025; Miao & Holmes, 2023; U.S. Department of Education, 2023; World Economic Forum, 2026). So far, educational uses of AI have centered on optimizing student mastery of predefined knowledge and skills within existing instructional frameworks. This conceptual paper seeks to advance a different vision rooted in learning sciences research: harnessing AI to enable more creative learning practices in which students work as collaborative knowledge builders to investigate authentic challenges and advance knowledge that benefits their own communities and the broader world (Bereiter & Scardamalia, 2025; Linn et al., 2025; Paavola & Hakkarainen, 2005; Penuel et al., 2025; Resnick, 2024). While AI technologies have shown a compelling potential to support creative knowledge work, they also intensify concerns about student agency, responsibility, integrity, and safety (Burns et al., 2026; Kasneci et al., 2023; Wu et al., 2025). Much of the educational response has focused on improving AI use: how to prompt AI effectively, evaluate AI-generated outputs, or avoid plagiarism. These efforts are important, but they largely frame AI as tools to be used or risks to be mitigated. What is missing is a broader, coherent account of what and how people should learn in the age of AI. More specifically, how should human minds and responsibilities re-adapt as AI continues to augment or automate various tasks and enter our daily life? This paper proposes intellectual stewardship as such a framework.
Intellectual stewardship positions learners and educators as wise and responsible governors of intellectual processes performed by human, AI, or hybrid systems. In this context, human minds need to learn how to exercise wise discernment and judgment about meta-level issues, such as what intellectual work is needed in a dynamically changing context, how such work should be carried out through human and machine-supported cognition, toward what purposes, and with what risks. Education must respond to nurture a mindset of wise stewardship among students, so they can take on higher-level responsibilities for constructing more adaptive learning practices and trajectories in a rapidly changing world. Below I first review the related literature, then introduce the concept of intellectual stewardship, and propose five core principles. The principles lay out epistemic stances and decision points that direct distributed intellectual processes and resources for creative knowledge advancement.

What New Opportunities and Challenges Are Emerging?

Scholars recognize the potential use of GenAI to augment learners’ creative knowledge work: to explore complex problems, test ideas, and engage in sustained inquiry that would otherwise be difficult to implement (Markauskaite et al., 2022; Siemens et al., 2022). GenAI may act as a conversational partner, critique generator, or resource explorer in collaborative learning environments. A line of research builds on the Knowledge Building theory and pedagogy, which conceptualizes learning as integral to personal and collective efforts to advance the community’s knowledge (Chan & van Aalst, 2018; Scardamalia & Bereiter, 2014). GenAI may help students generate candidate explanations, explore alternative framings, compare and synthesize sources, enrich collaborative discourse moves, and support progressive questioning that sustains ever-deeper inquiry (Ba et al., 2025; Bereiter & Scardamalia, 2016, 2025; Feng, 2025; Lee et al., 2023).
For example, Chen, Zhu, and Díaz del Castillo (2023) examined how students used ChatGPT across different stages of a knowledge-building inquiry. The findings identified productive patterns of student–ChatGPT interaction that supported problem definition, idea generation and evaluation, critical discourse, and conceptual rise-above. The teacher and students worked together to manage AI’s limitations through fact-checking outputs and refining prompts.

Alongside these opportunities, the literature documents significant risks and challenges (Burns et al., 2026; Kasneci et al., 2023). First, cognitive offloading has emerged as a serious concern. While offloading some of the peripheral tasks to AI may help improve learning, unreflective use of AI may shortcut personal thinking and reduce mental effort necessary for deep learning and understanding (Shen & Tamkin, 2026). Kosmyna and colleagues (2025) report behavioral and neurocognitive evidence suggesting that heavy reliance on LLMs can be associated with reduced deep semantic encoding and weakened ownership over written work—a phenomenon they characterize as “cognitive debt.” A meta-analysis echoes this risk, showing that while GenAI can improve student performance in learning tasks, such improvement may not translate into real, lasting learning gains (Deng et al., 2025). Second, researchers identify risks of weakening student agency and responsibility. GenAI introduces a symbiotic learning partnership that can either strengthen or erode human epistemic agency, depending on learners’ epistemic stances (Wu et al., 2025). Many young learners are prone to accepting AI-generated contributions without sufficient evaluation or refinement.
Research shows that while GenAI use can enhance student idea generation and task completion, overreliance on AI systems may reduce student metacognitive effort, weaken their critical thinking and ownership over ideas, and foster dominant thinking patterns at the expense of idea diversity (Choi et al., 2025; Fan et al., 2025; Gerlich, 2025; Habib et al., 2024). Third, GenAI raises ethical and civic risks. By enabling the rapid production of persuasive text and large volumes of content, AI amplifies the risks of misinformation, bias, and harmful misuse. AI-generated, unverified content may pollute the knowledge ecology, threatening academic integrity, social trust, and student safety and well-being (Burns et al., 2026). AI ethics guidelines have been introduced to improve governance, transparency, and accountability (European Commission, 2024; UNESCO, 2021). However, in education, ethics cannot be treated as an external add-on. Human judgment and responsibility must be embedded directly in the social practices, norms, roles, and tools through which learning and teaching take place.

Searching for a Conceptual Framing

Educators face a critical crossroads as they determine how to move forward amid the potential benefits, risks, and uncertainties associated with AI. Responses of educators vary widely, from enthusiastic hype to AI fear (Zhai & Krajcik, 2024). So far, education research has focused on developing practical strategies to guide AI use, including explicit teaching of prompt design and critical evaluation of AI outputs. Increasing attention has been devoted to students’ AI literacy: “the knowledge and skills that enable people to critically understand, evaluate, and use AI systems and tools to safely and effectively participate in an increasingly digital world” (Mills et al., 2024, p. 4). These efforts provide a wide range of strategies focusing on how to use AI tools, yet without systematic pedagogical guidance.
What is missing is a coherent framework that situates specific intelligent tools and usages in a broader understanding of what and how people learn in the age of AI. Learning scientists must respond to develop theory-informed guidance that helps educators make sense of emerging changes and challenges and envision new possibilities for human learning (Markauskaite et al., 2022). This paper intends to address this need, building on the literature on human learning, intelligence, and AI research that highlights human–AI partnerships and human-centered AI. The idea of cognitive partnerships between humans and technologies has long been foundational in computing research (Engelbart, 1963). With the rise of GenAI, scholars have extended this line of thinking to examine how human–AI partnerships may reshape learning and intellectual activity (Siemens et al., 2022; Markauskaite et al., 2022). Human–AI partnerships refer to the systemic arrangements through which intellectual work is distributed and coordinated across humans and artificial agents in an authentic context of knowledge practices. Intelligence emerges from dynamic interactions between human minds (i.e., internal cognitive structures and processes) and the people, knowledge artefacts, technologies, and cultural practices in the larger (wild) world (Hutchins, 1995; Pea, 1993; Perkins et al., 2000). This concept is valuable for analyzing how humans and AI collaborate to accomplish intellectual activities and generate synergistic outcomes. For example, in knowledge co-creation, AI may function as a subcontractor of specific tasks, critic of human-generated works, or teammate for co-thinking, with each role implying different allocations of cognitive initiative, control, and responsibility (Lin & Riedl, 2023).
Complementing this perspective, human-centered AI (HCAI) emphasizes designing and using AI systems in ways that augment human capacities while maintaining human control, reflecting human values and ethics, and improving human well-being (Schmager, Pappas, & Vassilakopoulou, 2025; Stanford HAI, 2022). HCAI seeks to amplify human potential while safeguarding human agency in an era of rapidly advancing computational power. This principle is particularly important for education. In a world shaped by uncertainty and technological change, “it is more important than ever for children from diverse backgrounds to have opportunities to develop the most human of their abilities—the abilities to think creatively, engage empathetically, and work collaboratively—so that they can deal creatively, thoughtfully, and collectively with the challenges of a complex, fast-changing world” (Resnick, 2024, p. 2). Accordingly, the design of human–AI partnerships for learning must foster student epistemic agency: their personal and collective capacity to actively shape the knowledge-building work of their community for consequential outcomes, including knowledge goals, processes, tools, and collaborative roles (Damşa et al., 2010; Miller et al., 2018; Scardamalia, 2002; Wu et al., 2005; Zhang et al., 2022). Human-AI collaboration becomes part of the design for supporting student shared regulation of learning (Järvelä, Nguyen, & Hadwin, 2023). AI-supported learning environments therefore need to prompt learners to exercise reflective judgment—questioning, interpreting, and refining AI contributions—rather than passively accepting AI outputs and recommendations (Chen, 2025). Building on these concepts, this paper intends to develop a more nuanced framework to explain how human minds and responsibilities should re-adapt as AI continues to augment or automate various cognitive tasks.
Particularly, such a framework needs to account for how learners and educators enact epistemic agency and judgment to co-construct more adaptive and creative learning practices in an AI-infused, rapidly changing world (Pendleton-Jullian & Brown, 2018; Tao & Zhang, 2021; Zhang et al., 2022). As Penuel and colleagues (2025) emphasize, to design and use AI for collaborative knowledge building, students and teachers should be involved in offering input on critical issues about their work, such as how intellectual processes—human, artificial, or hybrid—should be integrated, for what purpose, and for whom. The framework of intellectual stewardship is proposed to address this need.

Intellectual Stewardship

The tradition of wise stewardship provides a metaphor to understand human responsibilities and mindsets for productive knowledge work in the age of AI. A steward is one who is entrusted with resources that do not ultimately belong to them and is therefore responsible for managing those resources diligently, wisely, and fruitfully. In the case of managing a household or organization, a steward does not merely maintain daily operations but exercises judgment over how resources are allocated, coordinated, and developed so that the whole enterprise flourishes and supports the well-being of its members. Faithful and wise stewardship involves trustworthiness and accountability, discerning present needs and future challenges, carrying out responsibilities with integrity and care, and balancing efficiency with long-term well-being. Fruitful stewardship goes beyond preservation to growth, seeking to multiply what has been entrusted through thoughtful planning, sustained effort, and responsible risk-taking. A steward thus acts neither as a passive caretaker nor as an unchecked owner, but as a responsible governor who aligns daily decisions with overarching purposes, serving the good of the whole and honoring the trust placed in them.
A survey shows that productive knowledge workers devote intentional efforts to AI stewardship: turning intentions into queries, shaping AI responses, and assessing if the AI response meets their needs, while taking responsibility for the work accomplished (Lee et al., 2025). In a broader sense beyond AI use per se, this paper considers how educators and students practice wise, faithful, and fruitful stewardship of education as a social and intellectual enterprise in the age of AI. I call this practice “intellectual stewardship.” Intellectual stewardship refers to the human practice and responsibility of wisely governing intellectual activity—human, artificial, and hybrid—in a dynamic context, by judging what knowledge work should be undertaken, how it should be pursued, using what tools and resources, and toward what personal and social purposes. Compared with human–AI partnerships, the stewardship metaphor situates human use of and interaction with AI in the larger context of knowledge practices, foregrounding human epistemic responsibility and ethical judgment in harnessing intellectual power for advancing knowledge, achieving educational goals, and growing social good. In education systems, educators function as intellectual stewards entrusted with children to nurture; opportunities, spaces, and authority to teach; professional knowledge and talents to apply; and knowledge infrastructures and AI resources to access and orchestrate. They bear responsibility for creating inclusive and supportive learning environments for all learners and for educating a new generation of learners, citizens and workers capable of contributing to a knowledge-based, rapidly changing society in the age of AI. Students, in turn, are intellectual stewards in apprenticeship. They are entrusted with opportunities and spaces to learn, teachers, peers and other partners to work with, rich knowledge resources to explore and build upon, as well as AI and other technologies to use. 
As they develop their own knowledge and capabilities, students also assume shared responsibility for using educational resources and intellectual assets productively—to advance the knowledge and well-being of their learning community and to contribute to broader communities whenever possible. While teachers are entrusted with greater authority and responsibility for education, students contribute to the co-organization and continual improvement of their knowledge-building work. This includes participating in high-level decisions to frame what they need to inquire into, why particular problems matter, and how collaborative inquiry should be organized and carried out over time (Zhang et al., 2018, 2022). Faithful and fruitful stewardship therefore demands high-level, thinking-intensive efforts, no less than ownership. As AI catalyzes deeper changes in school curriculum, assessment, and management, students may be given greater flexibility and opportunity to orchestrate learning resources and activities for pursuing personal interests and creative goals (Penuel et al., 2025). For both students and educators, an essential and limited resource under their stewardship is time. AI may significantly reshape how time is distributed. As some routine cognitive work is distributed to or shared with AI agents, students and educators should not become passive or think less. Instead, gains in efficiency create opportunities to reinvest time and intellectual resources in higher-value activities: tackling more complex problems, pursuing new directions of inquiry, building personal passions and character strengths, and assuming greater responsibility for advancing the knowledge and well-being of their learning communities and beyond. A key challenge for teachers and students is to learn to make such intentional reinvestment as AI is incorporated, so they can continually adapt and surpass themselves in a changing environment (Bereiter & Scardamalia, 1993; Schwartz, 2025).
How Do Students Enact Intellectual Stewardship?

This section further unpacks how students enact intellectual stewardship as they conduct knowledge-building work with AI. In essence, students steward two core forms of intellectual assets: (a) content knowledge across disciplinary domains as related to personal experiences, and (b) intelligent capabilities and tools for processing and advancing knowledge. Both are substantively augmented by AI technologies. I propose five principles for wise and fruitful intellectual stewardship of intellectual assets and practices in AI-infused environments (see Figure 1). Students, as intellectual stewards, must learn to be:

(1) Knowledge-wise: Understanding the evolving state of knowledge and taking fruitful actions to continually advance it with AI support;
(2) Intelligence-wise: Making wise choices to orchestrate distributed intellectual processes and resources;
(3) Context-wise: Recognizing opportunities and risks in a changing environment to develop creative contributions and responsive actions;
(4) Ethics-wise: Exercising ethical discernment, responsibility, and care in the use of knowledge and intellectual power; and
(5) Devoted to self- and community-growing: Continually growing and surpassing themselves while advancing the collective good of the broader communities.

Figure 1. The five principles of intellectual stewardship as a framework for governing AI-mediated knowledge work.
[Figure 1 panels: Knowledge-wise (understanding the status of knowledge and taking fruitful actions); Intelligence-wise (orchestrating distributed intellectual processes and resources); Ethics-wise (exercising moral judgment, accountability, and care); Context-wise (recognizing opportunities and risks for creative contribution in a changing environment).]

The five principles highlight productive features of intellectual stewardship, which include four facets of decision-making (being knowledge-wise, intelligence-wise, context-wise, and ethics-wise) geared toward human-centered purposes (self- and community-growing). The principles provide a compass for navigating complex learning and knowledge work in AI-infused environments. I unpack each principle below, including what it means, why it is important, and the meta-level mindful stances and judgment points for wise stewardship.

Knowledge-Wise

Beyond learning to be more knowledgeable, students in AI-infused environments must develop a knowledge-wise mind: a mindset that can exercise reflective judgment about the knowledge they engage with—its epistemic status, explanatory power, limitations, gaps, and future potentials—in order to act strategically to improve it and put it to productive use. Under their stewardship, students work with both personal knowledge and collective knowledge. Personal knowledge refers to their formal knowledge gained through education and informal experiences accumulated in life. Collective knowledge is what Popper (1972) termed World 3, the realm of objective knowledge artifacts such as theories, explanations, arguments, designs, and models that exist beyond individual minds and are open to public critique and improvement. Frontline research reframes students’ relationship to collective knowledge: they are not only knowledge users or recipients, but also knowledge builders who participate in the advancement of collective knowledge (Paavola & Hakkarainen, 2005; Scardamalia & Bereiter, 2014).
As knowledge-wise stewards, they need to develop a reflective sense of the evolving ideas in their own learning community, tap into the knowledge of the larger fields, and seize on opportunities to make creative contributions. Such knowledge-wise stewardship becomes essential for students to use AI to advance knowledge while mitigating its epistemic risks. Trained on vast corpora of collective knowledge, GenAI systems are reshaping the ecology of collective knowledge itself, changing how people encounter, generate, synthesize, and circulate knowledge. This has both positive and negative implications. On one hand, GenAI may widen access to World 3 by helping students interpret complex texts, compare different perspectives, work across different modalities and media, and bridge linguistic barriers, opening opportunities for students to participate in knowledge-creating discourse (Feng, 2025; Scardamalia & Bereiter, 2020; Vokatis et al., 2026). At the same time, AI amplifies epistemic risks. Easy access to AI outputs may encourage surface-level engagement, weaken learners’ responsibility for deep thinking and judgment, and accelerate the spread of underdeveloped, biased, or misleading knowledge artifacts (Hou et al., 2026; Kosmyna et al., 2025). The knowledge-wise orientation is essential to guiding students’ reflective judgment and action in real contexts, so they can capitalize on the affordances of AI while addressing the potential risks in a timely and responsible manner. Central to this reflective orientation is students’ epistemic cognition and meta-knowledge. Epistemic cognition refers to people’s beliefs, theories (mental models), and dispositions related to knowledge and knowing, such as the aim and process of knowledge work and criteria of success (Barzilai & Chinn, 2018; Chinn & Sandoval, 2018). As a key part of epistemic cognition, meta-knowledge refers to knowledge about the state of knowledge.
It helps learners to “see the forest through the trees” as they navigate their unfolding paths of knowledge building, situating specific works and ideas in a larger problem space that matters to them (Scardamalia & Bereiter, 2020, 2025). Epistemic cognition and meta-knowledge function as a form of intellectual governance in AI-infused environments, guiding student reflection on key issues about their knowledge work. Example decision points include: which problems are worth pursuing, what information is relevant, which ideas are promising to develop (or dead-ends to avoid), what/how AI systems should be used to advance the ideas, whether AI-generated outputs advance the understanding of the central problems or merely add rhetorical refinement, what problems have been addressed, what remains unclear, and how to move forward. Knowledge builders take high-level responsibility to make sense of the landscape of the knowledge in their community and position AI’s role in this context. As they reflect on their knowledge-building process with AI, learners may frame/reframe problems, prioritize inquiry directions, coordinate personal and collaborative efforts, and take responsibility for the trajectory and integrity of knowledge advancement. AI outputs themselves become objects of inquiry subject to evaluation, revision, and continual improvement. As a key epistemic marker of knowledge-wise stewardship, students need to adopt an expansive, agentic stance to continually build on what they know and push beyond: asking deeper questions, identifying gaps, connecting ideas and inquiries across contexts, and launching new lines of inquiry driven by a sense of wonderment and love to know (Engle et al., 2012; Vokatis et al., 2026; Wu et al., 2025; Zhang et al., 2022).
Over time, such intentional efforts may help students develop a sustained inquiry trajectory toward increasingly deeper understandings, better solutions, and higher-level creative expertise (Bereiter & Scardamalia, 1993; Sawyer, 2015; Schwartz, 2025).

Intelligence-Wise

To work with AI productively, students not only need solid intelligent (thinking) skills but also need to learn to be intelligence-wise, that is, knowing how to use their capabilities and skills wisely in concert with AI and other resources for productive outcomes. This orientation guides how learners work with their personal and distributed intelligent capabilities under their stewardship. Personal abilities include diverse ways of thinking—analytical, creative, and practical (Sternberg et al., 2023)—that help learners to process knowledge, solve problems, and make sound decisions. Their personal abilities are further extended by distributed intelligence systems, which involve interacting with other people, tools, representations, and social practices in authentic contexts (Hutchins, 1995; Pea, 1993). As AI reshapes the distributed cognitive systems, education must cultivate human minds that know how to govern different intelligent capacities and processes, including how different human thinking approaches and AI-based operations are selectively used and synergized to achieve productive and ethical purposes. Intelligence-wise stewardship is critical to AI-supported knowledge work. Rather than merely assisting human cognition, GenAI increasingly reorganizes how cognitive work is allocated and coordinated across human and machine systems. Serious risks come with this shift: cognitive offloading without understanding, weakened epistemic agency, and misalignment between task-level performance and deep learning (Deng et al., 2025; Grinschgl & Neubauer, 2022; Kosmyna et al., 2025; Riedl et al., 2025; Wu et al., 2025).
To mitigate such risks, it is essential for learners to exercise reflective judgment over when AI is used and how the distributed cognitive processes are coordinated and constrained. Intelligence-wise learners engage in meta-level thinking about how human and AI contributions should be leveraged and orchestrated to achieve meaningful goals. Key decision points address issues such as: what kind of intellectual work is needed in a context, for what purposes, how it should be carried out, by/with whom, using what intellectual resources and tools, and what counts as good work or success? By engaging in reflective judgment and regulation of these issues, students build reflective framing of their inquiry-based work, which guides personal and collective efforts to advance their knowledge (Zhang et al., 2018, 2022). A core mental function enabling intelligence-wise stewardship is meta-intelligence: a distinctly human capacity for governing how different intellectual processes and resources are mobilized for specific purposes and contexts. The concept of meta-intelligence was originally proposed in relation to individual problem solving (Sternberg et al., 2023), defined as higher-order processes that select, control, and coordinate analytical, creative, practical, and wisdom-based approaches in a problem situation. These processes include recognizing and framing problems, allocating mental resources, monitoring problem-solving strategies, and evaluating outcomes. In AI-infused environments, the concept of meta-intelligence needs to be expanded beyond personal thinking. Meta-intelligence serves as a higher-order governing ability over distributed human–AI systems, enabling learners to frame intellectual work in context, orchestrate cognitive processes across people and machines, align intellectual efforts with values and purposes, and assume responsibility for the consequences of their intellectual work.
As key markers of meta-intelligence, education needs to nurture productive dispositions toward thinking with AI. Research on the dispositional view of intelligence highlights a constellation of stances (habits of mind) that shape how learners engage with intellectual work, potentially applicable to working with AI (Lucas, 2016; Perkins et al., 1993; Wu et al., 2025). Inquisitive and persistent dispositions may orient students toward identifying valuable problems and sustaining effort to achieve deep understanding that goes beyond AI-generated first responses. Open-minded yet disciplined dispositions support the imaginative exploration of possibilities surfaced through AI interaction while maintaining standards of evidence, coherence, and logical reasoning. Collaborative dispositions enable learners to engage in productive dialogue with both human peers and AI partners—generating diverse ideas for creative exploration, examining multiple perspectives on shared problems, questioning underlying assumptions, connecting and synthesizing ideas across sources, and monitoring the progress of inquiry while identifying new problems to pursue (Bereiter & Scardamalia, 2025; Chen et al., 2023; Lee et al., 2023). Students and teachers need to be mindful of the limitations and fallibility of both human and artificial intelligence, so that results and ideas are critiqued, risks are mitigated, and problems are addressed. Knowledge-wise and intelligence-wise orientations address complementary dimensions of human responsibility: the former governs how conceptual thoughts and ideas are developed, evaluated, and advanced, while the latter governs how the thinking processes are organized, resourced, and adapted. The integration of both guides the wise use of intellectual power for advancing deep understanding and inquiry, focusing on issues that are meaningful to students.

Context-Wise

Context matters when it comes to how knowledge and intelligence are used and advanced.
In AI-infused environments, learners need to be context-wise, capable of positioning their intellectual activities in relation to the evolving situation in which those activities unfold. They need to immerse themselves in authentic settings of creative work and human experience, recognizing emerging needs and opportunities to take intellectual initiatives in connection with the ideas and works of other people as well as information generated by AI systems. Context-wise stewardship is critical for human learners, thinkers, and knowledge builders in the age of AI. A core limitation of AI systems is their lack of lived, embodied, and culturally situated experience; they cannot fully grasp the significance of unfolding events and social meanings, or the moral stakes embedded in real-world contexts (Bereiter & Scardamalia, 2025). In response, context-wise stewardship foregrounds human responsibility for interpreting situations and governing when, how, and why AI-mediated intellectual work should proceed. It shifts attention from optimizing responses within a predefined task frame to judging which problems and directions are worth pursuing in the first place. By emphasizing human judgment over the meaning and significance of context, context-wise stewardship extends beyond context-aware AI design, which focuses on embedding learner characteristics into systems to improve their responsiveness (Liu et al., 2025). This orientation builds on the dispositional and situated views of intelligence and creativity (Pendleton-Jullian & Brown, 2018; Perkins et al., 1993, 2000; Sawyer, 2015), which emphasize sensitivity to opportunity as a core component of dynamic intelligent action. The world shaped by AI is characterized by rapid change, ambiguity, and interdependence. Traditional models of education—organized around stable domains, predefined learning pathways, and predictable outcomes—are increasingly inadequate.
Instead, learners must develop the capacity to navigate dynamic flows of knowledge, adapting inquiry-based actions as conditions shift. Pendleton-Jullian and Brown (2018) capture this challenge through the metaphor of whitewater kayaking. In a turbulent river, progress depends not on following a fixed route but on skillfully reading currents, sensing disturbances, and responding in the moment. Similarly, in AI-infused knowledge environments, students must learn to engage authentically in context, attune themselves to emerging problems and opportunities, connect with people holding diverse interests and expertise, and make responsive efforts to develop productive paths of participation. Fruitful inquiry often arises from unanticipated encounters with diverse interests and ideas. Our studies of opportunistic collaboration in Knowledge Building classrooms (Grades 4 and 5) provide concrete illustrations of such dynamic learning (Zhang et al., 2009, 2022). Rather than working in fixed groups with prescribed roles, students formed and re-formed inquiry groups to address emergent needs in response to the evolving interests, questions, and ideas in their classroom community. Across face-to-face interaction and online discourse, students continually built on one another’s questions, theories, and observations, incorporated new resources and tools, and generated spontaneous inquiry activities to advance collective understanding. These cases demonstrate how young learners can read emerging currents of ideas, sense inquiry needs and directions, and improvise personal and collaborative work in changing environments. Therefore, to cultivate context-wise stewardship, schools need to create authentic learning environments that support spontaneous participation and interaction. Learners attend to tensions and advances in collaborative dialogues, identify emerging knowledge connections and gaps, and connect their own inquiry to broader disciplinary research and social developments.
They recognize unexpected opportunities: new questions sparked by a peer’s idea, unforeseen connections across domains surfaced through human or AI interaction, or chances to grow deeper ideas emerging from exploratory work. As progress is made with existing problems, they intentionally look for new challenges to solve and new peaks to climb. Similarly, teachers need to increase flexibility and responsiveness in their teaching and engage in idea-centered noticing as student inquiry unfolds: to see student ideas under construction, envision opportunities for deeper thinking, and respond in ways that help students grow ideas and inquiries together (Park & Zhang, 2025). AI-powered analytics may help students and teachers navigate dynamic flows of ideas and conversations in collaborative environments, pointing forward to emerging directions and opportunities for deeper inquiry (Chen & Zhang, 2016). Students bear responsibility for deciding where to direct attention, when to reframe or expand inquiry, and how to reorganize their personal work and team collaboration over time.

Ethics-Wise

Such stewardship of AI-infused environments calls for an ethics-wise mind: one capable of reflecting on its ethical and moral responsibilities to others, to the knowledge and well-being of the learning community, and to the broader communities and society in which intellectual work is embedded. Ethical issues become especially salient when working with AI, which scales intellectual power while weakening the processes that ordinarily slow down our thinking, invite verification, and reinforce personal responsibility. This combination increases the likelihood that errors, bias, misrepresentation, and even harmful manipulations (e.g., of personal images) circulate through classrooms and public spaces. Intellectual stewards must exercise ethical judgment, care, and accountability.
Research on AI literacy frameworks and policies provides guidelines that help users use AI responsibly and evaluate it critically, attending to risks such as misrepresentation, bias, erosion of responsibility, and harm to individuals, communities, and knowledge systems (Mills et al., 2024; UNESCO, 2021). In practice, educational institutions are also creating their own AI policies and rules. Yet ethical stewardship extends beyond rule-compliant user behavior to an ethical mind, which is intrinsic to creative knowledge work itself, with or without AI. AI technologies are rapidly evolving, and so are the ethical landscapes around AI use. This makes it almost impossible to develop comprehensive rules and guidelines. For this reason, schools need to shift their focus to cultivating a principle-based, ethical mindset among students and educators to guide their discernment in specific contexts. Drawing on Gardner’s (2006) framing, the ethical mind operates at a reflective level, guiding how individuals understand and enact their roles as learners, workers, professionals, and citizens. When working with AI, ethics-wise minds do not ask only whether something can be done efficiently with AI, but whether it should be done, with what potential benefits and risks to themselves and to others, and how to safeguard against harm with care. At the core of the ethical mind is a commitment to “good work,” which integrates intellectual/technical excellence, personal meaning, and ethical integrity and responsibility (Gardner, 2006). Good work for learning and knowledge building is not only about productivity; it needs to be ethical in the first place. Therefore, a steward with an ethical mind strives to advance good work that benefits the personal and collective good while addressing moral failures with responsibility and integrity. GenAI can rapidly produce authoritative-sounding explanations and claims, increasing the risk that underdeveloped, biased, or misleading ideas enter shared spaces.
Ethics-wise learners are therefore mindful of the source and state of knowledge, treating AI-generated outputs as provisional inputs, subject to scrutiny, revision, and collective evaluation. Members of the classroom community shoulder collective responsibility for advancing and gatekeeping their collective knowledge (Scardamalia, 2002; Zhang et al., 2009): maintaining the integrity of knowledge-building processes, disagreeing respectfully, recognizing areas where evidence is weak, resisting premature conclusions, and limiting AI use when it threatens student agency or well-being. Ethical stewardship of AI is thus not solely an individual responsibility. Ethical risks are socio-technical, arising from data practices, model behavior, institutional incentives (efficiency, performance pressure), and governance gaps. Educational systems therefore need to work with other partners to develop system-level stewardship, including ethics-by-design approaches embedded in educational tools and norms, transparency and traceability mechanisms, and risk-based governance that anticipates, monitors, and mitigates harm (European Commission, 2024).

Self- and Community-Growing

The preceding four features—being knowledge-wise, intelligence-wise, context-wise, and ethics-wise—guide how learners and educators govern intellectual assets in AI-infused environments. The final principle highlights the purpose of such efforts: self- and community-growing. While AI systems can optimize task execution, they cannot choose purposes or sustain commitment to long-term goals and missions. It is a fundamental human responsibility to make sure that the use of AI serves meaningful goals and is not misdirected. From an educational standpoint, the core purpose of intellectual stewardship is twofold: to support self-growing and community-growing.
Students’ work with AI is therefore oriented not toward knowledge productivity alone, but toward developing essential human capacities, addressing authentic problems, building shared understandings, and contributing to the public good. With self-growing as a guiding aim, students and educators attend to whether their use of AI contributes to their personal development in the near and longer term. Students do not simply consider how AI can help them produce better assignments or products, but how the process can help them become better writers, thinkers, and learners. Likewise, teachers do not merely use AI to generate better lesson plans; they strive to grow as reflective designers and practitioners who deepen their own understanding of content, attend to students’ diverse ideas and needs, and respond strategically to invite deeper thinking and create more meaningful learning experiences. Over time, self-growing stewardship becomes a form of identity development: the ongoing process through which learners negotiate who they are and who they are becoming in relation to knowledge, practice, and community (Carlone & Mercier, 2026). AI tools may further support this process by enabling perspective-taking, storytelling, and simulated participation in real-world social roles. Through sustained participation in knowledge-building practices, learners come to see themselves as contributors, problem solvers, and collaborative members of communities. Central to their self-growth, students need to develop deeper self-knowledge. Gardner’s (2000) notion of intrapersonal intelligence captures this capacity for self-awareness and regulation, enabling learners to set goals, manage emotions, and align actions with long-term purposes. Research on growth mindset, grit, and perseverance underscores the importance of sustaining commitment to meaningful goals in the face of challenge and uncertainty (Duckworth et al., 2007; Dweck et al., 2014).
In AI-infused environments, where answers are readily available and effort can be easily bypassed, such human dispositions and character become even more valuable for resisting short-term gratification and pursuing deep learning. Learners must cultivate a deeper understanding of themselves as they interact with AI and collaborate with others: What do I value and care about? What am I good at, and how can I further develop these strengths? What are my limitations and fallibilities, and how should I manage or address them? With a clearer sense of self and long-term purpose, learners can put AI to work for developing sustained learning trajectories and building adaptive, creative expertise (Barron, 2006; Bereiter & Scardamalia, 1993; Schwartz, 2025). As AI helps reduce the time and effort required for routine cognitive tasks, students may reinvest their resources in addressing greater complexity: working with richer representations, revisiting enduring problems at higher levels, identifying new or deeper issues, and launching new directions of learning (see Bereiter & Scardamalia, 1993). With community-growing as a related aim, students consider how their knowledge work with AI advances shared understanding, addresses authentic challenges, and enhances the collective good and well-being. In doing so, they govern not only knowledge and ideas but also the social, epistemic, and emotional conditions under which hybrid human–AI knowledge work unfolds. From a Knowledge Building perspective, learning is not merely an individual achievement but a contribution to the advancement of community knowledge—knowledge that is shared, improvable, and oriented toward the public good (Scardamalia & Bereiter, 2014, 2020). Community-growing stewardship thus positions students as responsible contributors to the intellectual and social capital of their learning communities and beyond.
Within their immediate learning communities, students contribute to collective understanding, discourse, inquiry practices, and socio-emotional well-being. They may use AI tools to synthesize collective progress, identify connections among ideas, evaluate community knowledge in relation to peer or global knowledge, highlight promising inquiry directions, or surface social and emotional needs that warrant attention (Bereiter & Scardamalia, 2025; Chen et al., 2023; Feng, 2025). Beyond their own classrooms, students contribute knowledge to address the needs of broader communities and fields. AI-enhanced designs may support students’ sharing of productive knowledge advances across classrooms, enabling ideas to travel, connect, and improve through broader discourse networks (Yuan, Zhang, & Chen, 2022; Zhang et al., 2020). Digital platforms may be designed to support youth engagement in community-based research, such as using public data to investigate locally meaningful issues (Harris et al., 2020; Magnussen & Hod, 2023). In our ongoing design experiment, students collaborate with external organizations to address authentic challenges posed by community partners (Underwood et al., 2025). Students generate guiding questions, collect and analyze data, and refine ideas of value to their partners’ problem solving. AI tools can further extend these possibilities by supporting data analysis, synthesis across sources, and communication with diverse audiences. Across both levels of participation, a key practice for students’ community-growing stewardship is metadiscourse: meta-talk about their ongoing discourse to reflect on shared goals, collaboration roles, group norms, progress, and needs. Metadiscourse provides a means for students to engage in socially shared regulation of cognitive and socioemotional processes in collaborative learning settings (Hadwin, Järvelä, & Miller, 2018).
Students monitor the flow of their conversation; reflect on their cognitive progress, promising directions, and socioemotional experiences; and make collaborative decisions about how to move their joint work forward (see analyses in Vokatis et al., 2026; Zhang et al., 2018). AI-supported analytics and learning designs may help students navigate metadiscourse: to reflect on the promisingness of ideas and inquiry directions, trace collective knowledge progress and socioemotional dynamics, identify productive patterns and norms, generate rise-above artifacts that point ahead to new problems or directions, or connect students’ inquiry and discourse with the work of peer classrooms or broader communities (Bereiter & Scardamalia, 2025; Chen et al., 2015; Yang et al., 2024; Yuan et al., 2022; Zhang et al., 2018; Zhu et al., 2022). Self-growing and community-growing are mutually reinforcing. As learners contribute to collective knowledge and well-being, they develop the dispositions, identities, and commitments needed for personal lifelong growth. Conversely, self-growth equips learners for advancing their community’s knowledge and well-being. Members with different cultural, linguistic, and knowledge backgrounds contribute diverse ideas to a collaborative environment. AI systems need to be carefully designed to support such idea diversity instead of only reinforcing dominant voices that match their training data.

Concluding Thoughts

The framework of intellectual stewardship offers a human-centered, conceptually grounded approach to advancing creative learning with AI, building on theories of human learning, intelligence, and knowledge creation. Such an approach is particularly needed for exploring creative futures of learning and education amid the significant opportunities and risks brought by AI.
Students and educators are positioned as governors of intellectual capacities and practices, responsible for making wise and fruitful choices to co-construct creative knowledge practices performed by human and AI systems, continually advancing knowledge in ways that benefit their personal growth and the collective good.

Conceptual Implications

The principles of intellectual stewardship provide a nuanced lens for viewing the changing functions of human minds in AI-infused knowledge environments, foregrounding epistemic agency and reflective judgment to co-construct adaptive learning processes and knowledge practices in an AI-infused world (Pendleton-Jullian & Brown, 2018; Tao & Zhang, 2021; Zhang et al., 2022). Each principle highlights meta-level stances and capacities that are essential for human minds to approach the conceptual, epistemic, social/contextual, and ethical issues of knowledge work with AI. Students enact high-level epistemic agency and responsibility by acting on agentic stances and dispositions, as shown through how they approach the critical decision points aligned with different facets of intellectual stewardship.

• Knowledge-wise stewardship emphasizes the essential role of epistemic understanding and meta-knowledge in AI environments: treating knowledge as improvable, engaging in sustained inquiry, and taking responsibility for advancing the coherence, explanatory power, and public value of ideas—whether generated by humans, AI, or hybrid systems.

• Intelligence-wise stewardship guides how thinking is organized and distributed, highlighting the need for meta-intelligence to govern the use of intellectual processes and tools so that human judgment, agency, and metacognitive control remain central.
• The context-wise mind embraces sensitivity to place, time, people, and evolving conditions, enabling learners to discern which problems are worth pursuing, how contributions should be situated, and when to adapt or redirect intellectual effort in response to emerging changes and opportunities.

• Ethics-wise stewardship foregrounds responsibility for the consequences of knowledge work, emphasizing the need to nurture ethical minds, caring hearts, and social norms in contexts where intellectual power can potentially be misused.

• Finally, self- and community-growing give all the stewardship practices a meaningful purpose and commitment, ensuring that knowledge work serves personal development, collective well-being, and the public good rather than short-term efficiency and task completion alone.

As a whole, the meta-level dispositions and capabilities illuminated by the principles depict a portrait of wisdom-oriented social minds that work adaptively to navigate collaborative knowledge practices with AI for meaningful purposes. These mindsets and practices are characteristic of wise, responsible, and fruitful stewards of intellectual assets and practices. As AI assists various cognitive processes, human minds need to adapt by developing such meta-level dispositions and capabilities, which support wise judgments and responsible choices about core issues of thinking, learning, and knowledge work. In a collaborative community, such dispositions are cultivated as both personal habits of mind and socially shared cultural norms, guiding members’ reflective judgments, participatory roles, and interactional processes in their knowledge-building work distributed across human and AI systems. A core conjecture is that different stances and choices toward AI use do not merely affect student performance in particular learning tasks.
These stances shape the personal and shared knowledge practices in which AI is embedded, leading to different participatory pathways during the activities and different developmental trajectories in the long term. In light of the principles and related studies, it is likely that students who adopt productive stances toward knowledge and intellectual power will continually create new opportunities for deep thinking, expansive learning, and collaborative knowledge building (Chen et al., 2023; Feng et al., 2025; Hou et al., 2026). As knowledge-wise participants, they treat AI outputs as provisional contributions—objects for examination, revision, and further inquiry—rather than finished answers. As intelligence-wise actors, they choose thought-demanding engagement over cognitive shortcuts, deliberately regulating when AI comes in to extend their thinking and when deeper personal effort is required. As context-wise learners, they situate their work in relation to authentic problems and evolving knowledge needs, reading the broader intellectual and social landscape rather than focusing narrowly on assignment completion. As ethics-wise learners, they take responsibility for advancing the collective good through their knowledge work, attending to issues of accuracy, integrity, fairness, and safety. And with self- and community-growing as their guiding purposes, they reinvest efficiency gains into deeper exploration, greater complexity, and more meaningful contributions. Over time, this will position students on self-sustained, ever-expanding learning trajectories, supporting the development of adaptive expertise, epistemic agency, and personal identities grounded in responsibility and contribution to the public good (Carlone & Mercier, 2026; Engle et al., 2012; Scardamalia & Bereiter, 2014; Schwartz, 2025). By contrast, unproductive stewardship reduces AI use to efficiency and performance optimization.
Knowledge work becomes synonymous with task completion rather than sustained idea advancement. AI is used to generate answers, polish outputs, or bypass difficulty, fostering cognitive offloading without deep understanding (Kasneci et al., 2023; Kosmyna et al., 2025; Shen & Tamkin, 2026). Over time, students’ intellectual engagement may decline and their epistemic agency may weaken. Learners may prioritize effort-saving over sense-making, external performance over intellectual inquiry, and superficial compliance over intentional participation and contribution. In more troubling cases, AI becomes a mechanism for academic disengagement or dishonesty, undermining both personal development and collective well-being. It is paramount for future research to systematically investigate these different possibilities and to design AI-infused environments that enhance productive learning trajectories for all learners.

Practical Implications

The framework of intellectual stewardship also provides practical guidance for reforming learning goals, educational practices, and system designs in the age of AI. As AI systems augment or automate various cognitive tasks, schools must refocus the goal of education on cultivating wise and responsible intellectual stewards: learners, workers, and citizens who are knowledge-wise, intelligence-wise, context-wise, and ethics-wise, and who purposefully pursue continual self- and community-growing. Beyond learning how to use AI tools efficiently, learners need to engage in authentic knowledge-building environments with rich opportunities to practice wise and responsible judgment about their intellectual work, deciding when, where, how, and why AI should be used, and when it should not. The principles of intellectual stewardship offer a compass for learners and teachers to navigate their knowledge-building processes in an AI-infused environment.
Specifically, the five principles highlight key reflective questions that may guide learners’ choice-making and inform teacher design and reflection. Table 1 shows a set of example questions aligned with the principles, offering practical scaffolds for learner self-regulation and teacher planning. Teachers may further turn such meta-level questions into classroom conversations to model and scaffold reflective thinking and co-organize collaborative knowledge work with AI support.

Table 1: Reflective questions for learners and teachers as intellectual stewards

Knowledge-Wise: Understanding the status of knowledge and taking fruitful actions
Reflection points for learners: What am I trying to understand or achieve? Which problems are worth pursuing, and which ideas or information matter? What is the current state of our knowledge? What problems, gaps, or tensions remain? How does this idea improve on what we already know? How can we further improve or rise above our ideas (with AI or not), toward what aims?
Reflection points for teachers: What kinds of ideas and problems are worth sustained inquiry here? How does this activity support collective idea improvement rather than task completion? How will students know whether knowledge has actually advanced? How can I support their ongoing reflection and meta-discourse?

Intelligence-Wise: Orchestrating intellectual processes and resources
Reflection points for learners: What kinds of thinking or inquiry are needed here? How should it be carried out, by/with whom, using what intellectual resources and tools? Which human and AI contributions are appropriate? What counts as good or success? How should I balance AI automation with my own reasoning, thinking, and judgment?
Reflection points for teachers: What forms of thinking should remain primarily human, and where can AI productively contribute? How can AI be used to extend—not replace—students’ reasoning, creativity, and judgment? How will students learn to think with AI output and avoid overreliance on AI?

Context-Wise: Situated discernment and responsiveness in a changing environment
Reflection points for learners: What is happening in this context that calls for intellectual engagement? Why does this work matter here and now? How should I position my learning and contribution in relation to the works and ideas of others?
Reflection points for teachers: What real problems, contexts, or communities can anchor this work meaningfully? How can I design environments that help students read emerging needs, opportunities, and tensions? How should the design adapt as inquiry directions evolve?

Ethics-Wise: Exercising moral judgment, accountability, and care
Reflection points for learners: What responsibilities accompany my learning and work? What is right or good to do, and what is not? Should this work be done, and under what conditions? Who might be affected by these ideas or outputs? How do I ensure integrity, fairness, and accountability?
Reflection points for teachers: What ethical risks might arise in this learning activity (e.g., authorship, bias, responsibility, integrity, misuse)? What norms, routines, or checkpoints can help students practice ethical judgment? When should AI use be constrained or slowed down?

Self- & Community-Growing: Human-centered purpose, commitment, and contribution
Reflection points for learners: What kind of person do I want to be? How does this work help me grow as a learner, thinker, writer, and person? What responsibilities do I have to my learning community and the broader society? How does my work advance our shared understanding or collective well-being?
Reflection points for teachers: How does this learning experience support students’ long-term growth, identity development, and sense of agency? How does it strengthen the intellectual and social dynamics of the learning community? What opportunities exist for students to contribute beyond the classroom?
Future Research

The intellectual stewardship framework may guide future research on the development of adaptive, wisdom-oriented, social minds in the age of AI. Specifically, conceptual and empirical research needs to further understand the reflective stances and judgments associated with the five principles; trace how these personal and shared stances interact with one another to shape learners’ interactions with human and AI systems over time; and examine their short- and long-term consequences for collaborative knowledge building, epistemic agency, identity development, and adaptive expertise. Design-based research could investigate ways to scaffold human–AI interaction in knowledge-building environments and assess its impact on students’ conceptual understanding, collaborative discourse, and evolving sense of responsibility for knowledge advancement. Insights gained through such research will inform human-centered designs of AI systems that deliberately scaffold principle-based judgment, choice-making, and collective responsibility for continual knowledge advancement with AI. Beyond existing designs of AI tutors and coaches that deliver personalized instruction of predefined subject content, education needs “a truly personal approach to learning” that gives the learner more choice and control (Resnick, 2024, p. 4): to set their own goals based on their interests, develop their own ideas, strategies, and social connections, integrate relevant resources to address their emergent needs, and reflect on their progress over time with AI support. Designs of AI-infused environments need to support such self-sustained learning: learners are deeply engaged in addressing authentic challenges using AI and other resources, monitor ongoing progress, and purposefully seek higher-level challenges to tackle as progress is made, drawing upon distributed resources.
Such reflective efforts will not only make learning adaptive, productive, and self-sustaining (Barron, 2006; Bereiter & Scardamalia, 1993; Penuel et al., 2025; Resnick, 2024; Schwartz, 2025) but also enable an experience of flow that gives students joy in learning (Csíkszentmihályi, 1990). Future work may also extend the intellectual stewardship framework to develop principle-based guidance for teachers, school leaders, policy makers, and community partners. The collaborative engagement of these different stakeholders is essential to creating a new learning culture in the age of AI, making sure that human wisdom, creativity, and responsibility for social goods continually rise alongside the expansion of technological power. In this collaborative adventure, we all need to adapt and re-adapt, as wise, responsible, and fruitful stewards.

Acknowledgements

The writing of this paper was partially supported through the Faculty Innovation Fellowship program of the AI & Society College of the University at Albany. I used Microsoft 365 Copilot and ChatGPT-5 in the writing process to search for related works, explore ways to phrase key concepts, and proofread the drafts to improve text clarity and formatting without changing the substance.

References

Ba, S., Chen, X., Zhang, Y., & Wang, H. (2025). Investigating the impact of ChatGPT-assisted feedback on the dynamics and outcomes of online inquiry-based discussion. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13490
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49, 193–224.
Bastani, H., Sungu, M., Ge, H., Kabakci, A., & Mariman, R. (2025). Generative AI without guardrails can harm learning. Proceedings of the National Academy of Sciences, 122(26), e2422633122. https://doi.org/10.1073/pnas.2422633122
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and implications of expertise.
Chicago: Open Court.
Bereiter, C., & Scardamalia, M. (2016). “Good moves” in knowledge-creating dialogue. QWERTY – Open and Interdisciplinary Journal of Technology, Culture and Education, 11(2), 12–26.
Bereiter, C., & Scardamalia, M. (2025). Generative AI’s particular contributions to Knowledge Building. QWERTY – Interdisciplinary Journal of Technology, Culture and Education. https://doi.org/10.30557/QW000099
Burns, M., Winthrop, R., Luther, N., Venetis, E., & Karim, R. (2026). A new direction for students in an AI world: Prosper, prepare, protect. Center for Universal Education, Brookings Institution.
Carlone, H. B., & Mercier, A. K. (2026). Identity play: Middle school youths' provisional self-making in horizon-expanding STEM spaces. Science Education, 1–23.
Chan, C. K. K., & van Aalst, J. (2018). Knowledge building: Theory, design, and analysis. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International Handbook of the Learning Sciences (1st ed., p. 295–307). Routledge.
Chen, B. (2025). Beyond tools: Generative AI as epistemic infrastructure in education. arXiv. https://arxiv.org/abs/2504.06928
Chen, B., & Zhang, J. (2016). Analytics for knowledge creation: Towards agency and design-mode thinking. Journal of Learning Analytics, 3(2), 139–164.
Chen, B., Scardamalia, M., & Bereiter, C. (2015). Advancing knowledge-building discourse through judgments of promising ideas. International Journal of Computer-Supported Collaborative Learning, 10, 345–366. https://doi.org/10.1007/s11412-015-9225-z
Chen, B., Zhu, X., & Díaz del Castillo, F. (2023). Integrating generative AI in knowledge building: An exploratory study of ChatGPT use across inquiry stages. Computers and Education: Artificial Intelligence, 5, 100184. https://doi.org/10.1016/j.caeai.2023.100184
Choi, S., Kang, H., Kim, N., & Kim, J. (2025). The dual edges of AI: Advancing knowledge while reducing diversity. PNAS Nexus, 4(5), pgaf138.
https://doi.org/10.1093/pnasnexus/pgaf138
Damşa, C. I., Kirschner, P. A., Andriessen, J. E. B., Erkens, G., & Sins, P. H. M. (2010). Shared epistemic agency: An empirical study of an emergent construct. Journal of the Learning Sciences, 19(2), 143–186.
Deng, R., Jiang, M., Yu, X., Lu, Y., & Liu, S. (2025). Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Computers & Education, 227, 105224. https://doi.org/10.1016/j.compedu.2024.105224
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–1101. https://doi.org/10.1037/0022-3514.92.6.1087
Dweck, C. S., Walton, G. M., & Cohen, G. L. (2014). Academic tenacity: Mindsets and skills that promote long-term learning. Bill & Melinda Gates Foundation.
Engelbart, D. C. (1963). A conceptual framework for the augmentation of man’s intellect. In P. W. Howerton & D. C. Weeks (Eds.), Vistas in information handling: Vol. 1. The augmentation of man’s intellect by machine (p. 1–29). Spartan Books.
Engle, R. A., Lam, D. P., Meyer, X. S., & Nix, S. E. (2012). How does expansive framing promote transfer? Several proposed explanations and a research agenda for investigating them. Educational Psychologist, 47(3), 215–231.
European Commission. (2024). AI Act (Regulation (EU) 2024/1689): Risk-based requirements, transparency, and oversight for trustworthy AI.
Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, X., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544
Feng, S. (2025). Group interaction patterns in generative AI-supported collaborative problem solving: Network analysis of the interactions among students and a GAI chatbot.
British Journal of Educational Technology, 56, 2125–2145. https://doi.org/10.1111/bjet.13611
Gardner, H. E. (2000). Intelligence reframed: Multiple intelligences for the 21st century. Hachette UK.
Gardner, H. E. (2006). Five minds for the future. Harvard Business Press.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, Article 908261. https://doi.org/10.3389/frai.2022.908261
Habib, A., Alghamdi, A., & Alzahrani, S. (2024). How does generative artificial intelligence impact divergent thinking and creative engagement? Journal of Creativity, 34(1). Advance online publication.
Hadwin, A., Järvelä, S., & Miller, M. (2018). Self-regulation, co-regulation, and shared regulation in collaborative learning environments. In J. A. Greene (Ed.), Handbook of Self-Regulation of Learning and Performance (2nd ed., p. 83–106). Routledge.
Harris, E. M., Dixon, C. G. H., Bird, E. B., & Ballard, H. L. (2020). For science and self: Youth interactions with data in community and citizen science. Journal of the Learning Sciences, 29(2), 224–263.
Hou, C., Zhu, G., Liu, Y., Sudarshan, V., Chong, J. L., Zhang, F. Y., Tan, M. Y. H., & Ong, Y. S. (2026). The effects of critical thinking intervention on reliance behaviors, problem-solving quality, and creativity during human-generative AI collaborative learning. Computers & Education, 247, 105576. https://doi.org/10.1016/j.compedu.2026.105576
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Järvelä, S., Nguyen, A., & Hadwin, A. (2023). Human and artificial intelligence collaboration for socially shared regulation in learning. British Journal of Educational Technology, 54, 1057–1076.
https://doi.org/10.1111/bjet.13325
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., et al., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for writing. arXiv preprint. https://arxiv.org/abs/2506.08872
Lee, A. V. Y., Tan, S. C., & Teo, C. L. (2023). Designs and practices using generative AI for sustainable student discourse and knowledge creation. Smart Learning Environments, 10(1), 59. https://doi.org/10.1186/s40561-023-00279-1
Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3706598.3713778
Lin, Z., & Riedl, M. (2023). An ontology of co-creative AI systems. In Proceedings of the 2024 NeurIPS Workshop on Machine Learning for Creativity and Design (37th Conference on Neural Information Processing Systems). https://arxiv.org/pdf/2310.07472
Linn, M., Lee, O., Hmelo-Silver, C., Lester, J., & Zhai, X. (2025, April). Using artificial intelligence to facilitate science learning opportunities. A symposium discussion at the Annual Meeting of the American Educational Research Association, Philadelphia, PA.
Liu, N., Bradford, B., Hatchett, J., Diaz, G., Luzi, L., Wang, Z., Basu Mallick, D., & Baraniuk, R. (2025). Learning Context: A unified framework and roadmap for context-aware AI in education (arXiv:2512.24362). arXiv. https://doi.org/10.48550/arXiv.2512.24362
Lucas, B. (2016).
A five-dimensional model of creativity and its assessment in schools. Applied Measurement in Education, 29(4), 278–290.
Magnussen, R., & Hod, Y. (2023). Bridging communities and schools in urban development: Community and citizen science. Instructional Science, 51, 887–911.
Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., De Laat, M., Buckingham Shum, S., Gašević, D., & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers and Education: Artificial Intelligence, 3, 100056. https://doi.org/10.1016/j.caeai.2022.100056
McKinsey & Company. (2025). The state of AI in 2025: Agents, innovation, and transformation. McKinsey Global Publishing. Available online at https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO Publishing. https://doi.org/10.54675/EWZM9535
Miller, E., Manz, E., Russ, R., Stroupe, D., & Berland, L. (2018). Addressing the epistemic elephant in the room: Epistemic agency and the next generation science standards. Journal of Research in Science Teaching, 55(7), 1053–1075.
Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024, May). AI literacy: A framework to understand, evaluate, and use emerging technology. https://doi.org/10.51388/20.500.12265/218
Paavola, S., & Hakkarainen, K. (2005). The knowledge creation metaphor – an emergent epistemological approach to learning. Science and Education, 14(6), 535–557. https://doi.org/10.1007/s11191-004-5157-0
Park, H., & Zhang, J. (2025). Teacher noticing to scaffold knowledge-building inquiry in two grade 5 classrooms. Instructional Science, 53(4), 567–601. https://doi.org/10.1007/s11251-024-09703-6
Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G.
Salomon (Ed.), Distributed cognitions (p. 47–87). New York: Cambridge University Press.
Pendleton-Jullian, A. M., & Brown, J. S. (2018). Design unbound: Designing for emergence in a white water world. Boston, MA: The MIT Press.
Penuel, W. R., Philip, T. M., Chang, M. A., Sumner, T., & D’Mello, S. K. (2025). Responsible innovation in designing AI for education: Attending to the ‘how,’ the ‘for what,’ the ‘for whom,’ and the ‘with whom’ in a rapidly growing field. Educational Researcher. https://doi.org/10.3102/0013189X251389879
Perkins, D., Tishman, S., Ritchhart, R., Donis, K., & Andrade, A. (2000). Intelligence in the wild: A dispositional view of intellectual traits. Educational Psychology Review, 12(3), 269–293. https://doi.org/10.1023/A:1009031605464
Popper, K. R. (1972). Objective knowledge: An evolutionary approach. Oxford, England: Oxford University Press.
Resnick, M. (2024). Generative AI and creative learning: Concerns, opportunities, and choices. MIT GenAI. https://mit-genai.pubpub.org/pub/gj6eod3e
Riedl, C., Savage, S., & Zvelebilova, J. (2025). AI’s social forcefield: Reshaping distributed cognition in human–AI teams. arXiv preprint. https://arxiv.org/abs/2407.17489
Sawyer, R. K. (2015). A call to action: The challenges of creative teaching and learning. Teachers College Record, 117(10), 1–34.
Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (p. 67–98). Chicago, IL: Open Court.
Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., p. 397–417). New York, NY: Cambridge University Press.
Schmager, S., Pappas, I. O., & Vassilakopoulou, P. (2025). Understanding human-centred AI: A review of its defining elements and a research agenda. Behaviour & Information Technology, 44(15), 3771–3810.
Schwartz, D. L. (2025).
Achieving an adaptive learner. Educational Psychologist, 60(1), 7–22.
Shen, J. H., & Tamkin, A. (2026). How AI impacts skill formation. arXiv preprint. https://arxiv.org/abs/2601.20245
Siemens, G., Marmolejo-Ramos, F., Gabriel, F., Medeiros, K., Marrone, R., Joksimovic, S., & De Laat, M. (2022). Human and artificial cognition. Computers and Education: Artificial Intelligence, 3, 100107. https://doi.org/10.1016/j.caeai.2022.100107
Stanford HAI. (2022). Human-centered approach to AI in the age of AI revolution. Stanford University Human-Centered AI. https://hai.stanford.edu/news/human-centered-approach-ai-revolution
Tao, D., & Zhang, J. (2021). Agency to transform: How did a Grade 5 community co-configure dynamic knowledge building practices in a yearlong science inquiry? International Journal of Computer-Supported Collaborative Learning, 16(3), 403–434.
Underwood, T., Zhang, J., Guo, D., Song, G., Espinosa, K., Ding, L., Hower, L., Muirhead, J., & Kerr, J. (2025). A design experiment on Knowledge Building in the real world: High school students work with community partners to investigate authentic challenges. QWERTY – Interdisciplinary Journal of Technology, Culture and Education, 20(2), 45–79. https://doi.org/10.30557/QW000101
U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Washington, DC: U.S. Department of Education.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.
Vokatis, B., Zhang, J., Sun, Y., & Bogouslavsky, M. (2026). An expanded view of literacy for knowledge creation: How do students read, write, and interact to build scientific knowledge across classrooms? Information and Learning Sciences. https://doi.org/10.1108/ILS-09-2024-0120
World Economic Forum. (2025). The future of jobs report 2025. Available online at https://www.weforum.org/publications/the-future-of-jobs-report-2025/
Wu, J.-Y., Lee, Y.-H., Chai, C.
S., & Tsai, C.-C. (2025). Strengthening human epistemic agency in the symbiotic learning partnership with generative artificial intelligence. Educational Researcher, 54(6), 358–368.
Yang, Y., Yuan, K., Zhu, G., & Jiao, L. (2024). Collaborative analytics-enhanced reflective assessment to foster conducive epistemic emotions in knowledge building. Computers & Education, 209, 104950.
Yuan, G., Zhang, J., & Chen, M.-C. (2022). Cross-community knowledge building with Idea Thread Mapper. International Journal of Computer-Supported Collaborative Learning, 17(2), 293–326. https://doi.org/10.1007/s11412-022-09371-z
Zhai, X., & Krajcik, J. (2024). Pseudo artificial intelligence bias. In X. Zhai & J. Krajcik (Eds.), Uses of Artificial Intelligence in STEM Education (p. 568–578). Oxford Academic. https://doi.org/10.1093/oso/9780198882077.003.0025
Zhang, J., Scardamalia, M., Reeve, R., & Messina, R. (2009). Designs for collective cognitive responsibility in knowledge building communities. Journal of the Learning Sciences, 18(1), 7–44. https://doi.org/10.1080/10508400802581676
Zhang, J., Tao, D., Chen, M.-H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with Idea Thread Mapper. Journal of the Learning Sciences, 27(3), 390–430. https://doi.org/10.1080/10508406.2018.1444992
Zhang, J., Tian, Y., Yuan, G., & Tao, D. (2022). Epistemic agency for co-structuring expansive knowledge building practices. Science Education, 106(4), 890–923. https://doi.org/10.1002/sce.21717
Zhang, J., Yuan, G., & Bogouslavsky, M. (2020). Give student ideas a larger stage: Support cross-community interaction for knowledge building. International Journal of Computer-Supported Collaborative Learning, 15(4), 389–410.
Zhu, G., Scardamalia, M., Moreno, M., Martins, M., Nazeem, R., & Lai, Z. (2022). Discourse moves and emotion in knowledge building discourse and metadiscourse. Frontiers in Education, 7, 900440. https://doi.org/10.3389/feduc.2022.900440