The Great Reorientation: Discipline, Curiosity, and Human Agency in the Post-Scarcity Knowledge Economy – RESEARCH & PODCAST SERIES 2026 + BOOK RELEASE


Executive Summary

The transition from the Information Age to the Age of Artificial Intelligence represents a fundamental shift in the architecture of human capability. For centuries, the primary bottleneck to human advancement was the scarcity of knowledge and the high cost of its transmission. Educational institutions were constructed as gatekeepers, and the acquisition of specialized information was the hallmark of the elite. Today, as generative artificial intelligence (GenAI) commoditizes high-level cognitive labor and provides instantaneous access to the sum of human knowledge, the primary determinant of success has migrated from external access to information to the internal cognitive architecture of the individual. This research report evaluates the thesis that in an era of free knowledge, the dual virtues of discipline and curiosity have emerged as the primary drivers of learning and human capability.

Through an interdisciplinary analysis encompassing cognitive psychology, behavioral economics, and education science, this report identifies a “double-edged sword” effect inherent in AI-augmented learning. While AI provides personalized scaffolding and real-time feedback that can enhance self-directed learning, it also risks inducing “cognitive outsourcing,” where the ease of technological solutions erodes critical thinking and the “effort-reward cycle” essential for deep neurological retention.1 Empirical evidence from workforce productivity studies reveals a “productivity placebo,” where users perceive significant gains (often 20-24%) that are not always reflected in objective performance metrics due to increased complexity and the introduction of “AI-authored” errors.3

The future of universities and schooling is being reimagined as a shift from content delivery to “experience hubs” where reasoning is operationalized through real-world problem-solving.4 However, a new “discipline gap” is emerging: individuals with high self-control and epistemic curiosity are leveraging AI to compound their capabilities, while those with lower self-regulatory mechanisms are falling into “AI dependency,” characterized by superficial reasoning and a loss of agency.5 This report concludes that policy must shift from “individual responsibilisation” to “shared systemic responsibility,” prioritizing the cultivation of non-automatable human skills – judgment under ambiguity, ethical reasoning, and empathetic leadership – as the ultimate differentiators in a machine-augmented world.6

Historical Context of Education

Knowledge Scarcity and the Gatekeeper Model

The historical trajectory of education has been defined by the physical and economic constraints of information. In the pre-modern era, knowledge was geographically and socially tethered to specific repositories and clerical authorities. From the Lyceum of Aristotle to the Library of Alexandria, libraries were not merely resources but instruments of power that gatekept access to the “book”.9 For centuries, the production, storage, and dissemination of knowledge were controlled by ecclesiastical authorities who scrutinized texts for heresies and inconsistencies.9 The “stranglehold” on education only began to loosen with the advent of mass-production technologies like the printing press, which allowed for the dissemination of multiple texts to a wider reading public, effectively making the clerical monitoring of every copy impossible.9

Despite the loosening of clerical control, the “scarcity model” remained the foundation of the modern university. The Humboldtian model of the research university, emerging in early 19th-century Germany, centralized pure and applied research across academic disciplines.10 This model assumed that because information was expensive to store and transmit, it made logical sense to concentrate resources – experts, books, and lecture halls – in one location, forcing students to travel to the knowledge.11 The lecture theatre, a design intended to maximize the transmission of information from one expert to many students, remains a vestige of this era of information scarcity.11

Global Models and the Industrial Transition

The purpose of schooling has evolved in tandem with political and economic shifts. In the 17th century, the rise of nation-states transformed schooling into a tool for building national identities and creating loyal subjects rather than monarchal subordinates.12 By the 18th and 19th centuries, the Industrial Revolution shifted the objective toward economic utility. Philosophers like Adam Smith advocated for mass schooling as a prerequisite for a free-market economy, viewing education as a means to provide the skills necessary for the modern workforce.12

Educational Model | Primary Driver | Core Metric | Institutional Role
Clerical/Scholastic | Preservation of Doctrine | Orthodoxy | Gatekeeper of Truth
Humboldtian/Research | Knowledge Creation | Specialized Expertise | Repository of Resources
Industrial/Mass | Economic Productivity | Standardized Testing | Producer of Labor
AI-Augmented/Modern | Human Agency | Competency/Outcome | Facilitator of Experience

Traditional systems like the Keju system in China (dating to 206 BCE) or the dabiristan schools in the Middle East focused on creating well-rounded civil servants through a rigorous curriculum of philosophy, history, and literature.12 However, the Western colonial expansion often displaced these indigenous models with a standardized form of schooling designed to modernize and control local populations.12 Today, every nation-state has largely adopted this model, yet the rise of AI threatens its fundamental assumption: that the university is the “undisputed center of knowledge”.13

The Evolution of the Academic Record

The formalization of education led to the creation of the student record, a mechanism designed to quantify and standardize the “educated person.” In the mid-19th century, record-keeping was inconsistent, but the adoption of the Carnegie Unit solidified a hierarchy among educational levels and institutions.15 This transition from variation to uniformity was not merely administrative; it was an attempt to define institutional status and “belonging” in a world where credentials had become the primary currency of the professional class.15 As AI begins to provide “job-ready skills in months rather than years,” the justification for four-year undergraduate degrees is being questioned, signaling the end of the credential’s dominance as a proxy for capability.16

The AI Knowledge Revolution

The Democratization of Expertise

Artificial intelligence signifies a “quantum leap” in the democratization of knowledge. Unlike the early digital age, which made data accessible, AI makes the interpretation of that data accessible. In the biomedical sciences, for example, the solution to the “protein folding problem” demonstrated that AI could learn from vast datasets to provide insights previously restricted to sub-field specialists.17 We are moving from a trend of “data democratization” to one of “knowledge democratization,” where explicit knowledge representations allow multidisciplinary teams and even non-experts to apply complex scientific models.17

This shift transforms information from a “privilege” of those within elite institutions to a “commodity” available to anyone with a high-speed internet connection. AI-powered systems provide personalized instruction that adapts to a student’s individual pace and style, facilitating access to high-quality resources in marginalized or remote communities.18 The intelligent tutor, a concept once restricted to the wealthiest students, is now a scalable reality, potentially democratizing the “renaissance” of teaching and learning.18

The End of Information Scarcity

The transition from information scarcity to information abundance has reached its zenith with generative AI. Information transmission – the traditional role of teaching – was once best done where the information was held.11 Today, wireless access to the internet has made information cheap and ubiquitous. AI tools act as “knowledge navigators,” filtering and integrating vast amounts of information to provide immediate feedback, effectively reducing the errors of omission and negligence caused by human fatigue.11

However, the “free” nature of this knowledge introduces a new risk: the devaluation of the learning process itself. When the “why” behind a concept is bypassed in favor of a direct answer from an AI, learners may struggle to apply knowledge meaningfully in real-world settings.1 The cognitive “hooks” required for long-term retention are often formed through the effort of discovery; when AI provides the answer without the struggle, the sustainability of that learning is undermined.1

Feature | Scarcity Model (Traditional) | Abundance Model (AI-Era)
Transmission | Linear (Expert to Student) | Adaptive (AI to Learner)
Access | Localized (University/Library) | Ubiquitous (Anywhere/Anytime)
Cost | High (Tuition/Time) | Marginal (Nominal/Instant)
Valuation | Possession of Knowledge | Application/Agency

Ethical and Systemic Boundaries

The democratization of education through AI is not an inevitable outcome of the technology itself but a result of policy and ethical design. Critics argue that the claim AI will “democratize” education is often speculative and ignores the digital divide.19 Pre-existing social inequalities mean that while some students use AI as a “sophisticated assistant,” others are left behind due to a lack of resources or digital literacy.19 Furthermore, the individualism inherent in many AI educational tools – focusing on mastery-based learning – can work against the democratic goals of communication and collaboration envisioned by educationalists like John Dewey.21

The Discipline Gap in Learning

The “Double-Edged Sword” of Cognitive Intensity

The relationship between AI usage and human capability is characterized by a “double-edged sword” effect. Moderate AI intensity can enhance Self-Directed Learning (SDL) by providing personalized goal management, real-time error correction, and motivation reinforcement.2 For example, in programming, AI can instantly locate a logical flaw, helping students optimize their strategies and improve the efficiency of their code.2 This “technological empowerment” allows students to expand their cognitive boundaries and explore interdisciplinary subjects that were previously too complex to approach.2

Conversely, an “over-reliance” on AI leads to a phenomenon known as “cognitive outsourcing” or “cognitive offloading”.2 When students rely on AI to generate responses without engaging in deep thought, they bypass the neural pathways essential for critical thinking and creativity.1 This leads to:

  • Cognitive Fragmentation: The inability to sustain a coherent line of reasoning over a long period.2
  • Technology-Driven Disempowerment: A loss of subjectivity where the learner feels unable to solve problems without the assistance of an algorithm.2
  • Diminished Goal-Setting: A reduction in the effort required to plan and evaluate one’s own learning strategies.2

Self-Control as the Primary Mediator

A critical finding in recent behavioral science is the negative relationship between ChatGPT usage and self-control. Higher usage of generative AI tools is significantly associated with lower levels of both self-control and academic wellbeing.5 Individuals with higher levels of self-control exhibit a reduced tendency to develop an over-reliance on AI, suggesting that self-regulatory mechanisms are the key to fostering a “balanced and mindful engagement” with technology.5

Mediation analysis demonstrates that self-control acts as a behavioral regulatory mechanism. High AI usage can lead to a decline in self-control, which subsequently causes a drop in academic wellbeing and feelings of achievement.5 This suggests that the “discipline” to use AI as a scaffold rather than a substitute is the most important factor in determining whether the technology elevates or erodes a student’s capability.2
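The mediation logic described above (higher AI usage leads to lower self-control, which in turn lowers academic wellbeing) can be sketched numerically. The snippet below runs a simple three-regression mediation analysis on synthetic data; the coefficients, sample size, and variable names are illustrative assumptions for demonstration only, not estimates from the cited study.

```python
# Illustrative mediation analysis on synthetic data:
# does self-control (M) mediate the effect of AI usage (X) on wellbeing (Y)?
# All data is simulated; effect sizes are assumptions, not study estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                                   # AI usage intensity (standardized)
m = -0.4 * x + rng.normal(scale=0.8, size=n)             # self-control declines with usage
y = 0.5 * m + 0.05 * x + rng.normal(scale=0.8, size=n)   # wellbeing depends mostly on M

def ols_coefs(target, *predictors):
    """Least-squares coefficients (intercept first) via numpy."""
    X = np.column_stack([np.ones(len(target)), *predictors])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

a = ols_coefs(m, x)[1]        # path X -> M
b = ols_coefs(y, x, m)[2]     # path M -> Y, controlling for X
indirect = a * b              # mediated (indirect) effect of X on Y
total = ols_coefs(y, x)[1]    # total effect of X on Y
print(f"indirect effect ~ {indirect:.2f}, total effect ~ {total:.2f}")
```

With these assumed coefficients the indirect effect is negative and accounts for most of the total effect, which is the signature pattern a mediation claim rests on.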

Student Profile | AI Interaction Style | Cognitive Result
High Self-Control | Selective Augmentation | Enhanced Metacognition
Low Self-Control | Total Substitution | Cognitive Inertia
High Curiosity | Explanatory Interrogation | Deep Conceptual Learning
Low Curiosity | Direct Answer Seeking | Surface-Level Rote Learning

The “Boundary-Adaptive Pairing” Model

To mitigate the risks of cognitive offloading, researchers propose a “boundary-adaptive pairing” model. This model advocates for AI intervention strategies that are tailored to a learner’s metacognitive level.2 In this framework, AI is assigned “low-order tasks” such as knowledge retrieval, basic computation, and data processing, while “high-order tasks” – goal planning, innovation, and the construction of theoretical frameworks – remain strictly under human control.2 By establishing these boundaries, educators can ensure that technological empowerment does not come at the cost of human autonomy.22
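The pairing rule can be pictured as a simple task router. The sketch below is a minimal illustration of the idea, assuming hypothetical task categories and a 0.5 delegation threshold that are not part of the cited framework.

```python
# Minimal sketch of "boundary-adaptive pairing": low-order tasks may be
# delegated to AI, high-order tasks always stay with the human learner.
# Task categories and the 0.5 threshold are illustrative assumptions.

LOW_ORDER = {"knowledge_retrieval", "basic_computation", "data_processing"}
HIGH_ORDER = {"goal_planning", "innovation", "theory_building"}

def route_task(task_type: str, learner_metacognition: float) -> str:
    """Return who should own the task ('ai' or 'human').

    learner_metacognition: 0.0-1.0 metacognitive level; learners with
    weaker self-regulation keep even low-order tasks for practice.
    """
    if task_type in HIGH_ORDER:
        return "human"                      # high-order work is never delegated
    if task_type in LOW_ORDER:
        # the boundary adapts: low metacognition -> keep the task
        return "ai" if learner_metacognition >= 0.5 else "human"
    return "human"                          # unknown tasks default to human control

if __name__ == "__main__":
    print(route_task("data_processing", 0.8))    # ai
    print(route_task("goal_planning", 0.8))      # human
    print(route_task("basic_computation", 0.3))  # human
```

Note the design choice: the human/AI boundary is not fixed per task type but shifts with the learner's metacognitive level, which is what distinguishes "boundary-adaptive" pairing from a static division of labor.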

AI-Augmented Productivity

Quantifiable Gains and the “Productivity Placebo”

The impact of AI on workforce productivity is one of the most rigorously studied aspects of the current technological revolution. In software development, large-scale experiments by MIT, Microsoft, and Accenture involving over 4,800 developers showed a 26.08% increase in completed tasks for those using AI coding assistants.3 Similarly, a cohort analysis of 300 engineers found a 31.8% reduction in Pull Request (PR) review cycle time and a significant increase in the volume of code shipped to production.25

However, there is a distinct “productivity plateau.” While onboarding time (the time to the 10th PR) has been cut in half, overall productivity gains for many organizations have leveled off at around 10%.26 This suggests a “productivity placebo”: developers often feel they are 20-24% faster even when their measurable gains are marginal or even negative.3 This gap between perception and reality is driven by the fact that while AI can “scaffold” multiple tasks, it increases the cognitive load of “juggling” more parallel workstreams, leading to more time spent on reviews and less on high-value shipping.3

The Quality and Security Gap

The “discipline” to use AI responsibly is nowhere more apparent than in the realm of security and code quality. While AI generates code at machine speed, it often lacks a nuanced understanding of context. Research from 2024 and 2025 reveals:

  • Vulnerability Spikes: AI-generated code introduced 322% more privilege escalation paths and 153% more design flaws than human-written code.3
  • The “70% Problem”: AI performs well on the first 70% of a task (the boilerplate), but the remaining 30% (the complex logic and integration) still requires intense human effort.3
  • Dopamine vs. Reality: The “dopamine hit” of seeing a finished draft can lead developers to trust AI output too much, skipping the rigorous auditing necessary for production-ready software.3

Study | Population | Productivity Gain (%) | Key Finding
MIT/Accenture (2024) | 4,867 Developers | 26.08% | Benefit strongest for junior/lower-ability workers.
DeputyDev (2025) | 300 Engineers | 31.8% (time reduction) | 61% increase in code volume pushed to production.
Pragmatic Research (2026) | 121,000 Devs | 10.0% | Productivity plateaued despite 93% adoption.
Faros AI (2025) | 10,000 Devs | 9% (task increase) | Developers juggling 47% more parallel pull requests.

The Upskilling Divide

PwC’s 2025 Global Workforce Survey highlights a massive divide between daily GenAI users and the rest of the workforce. Daily users report higher productivity (92% vs. 58%), greater job security (58% vs. 36%), and higher salaries (52% vs. 32%) than infrequent users.27 Perhaps most tellingly, 75% of daily users feel they have the resources needed for learning and development, compared to only 59% of infrequent users.27 This suggests that those who have the “discipline” to weave AI into their daily habits are pulling ahead, creating a self-reinforcing cycle of optimism and capability.28

The Future Role of Schools

Reimagining the University as an “Experience Hub”

As AI takes on the “hard intellectual labor” of analysis and argumentation, the role of higher education must shift from the delivery of content to the cultivation of agency.4 Universities are being challenged to become “institutions dedicated to helping human beings figure out how to live well and act responsibly in a complex world”.16 This involves a “reorientation” around distinctively human capabilities: ethical judgment, creative vision, and empathetic connection.16

The 19th-century educator Cardinal Newman argued for education that provides a “clear conscious view of one’s own opinions.” Today, this is being operationalized through experiential learning models like the Network for Teaching Entrepreneurship (NFTE).4 In these settings, students are not “lectured” on collaboration or critical thinking; they must practice them to get anything done.4 This “modern apprenticeship for agency” turns abstract ideals into practiced habits, ensuring that human capabilities are not left to chance but intentionally cultivated through real-world experience.4

Assessment Reform and the Return to Oral Tradition

The rise of generative AI has made traditional written assessments (essays and exams) increasingly inadequate.13 AI can produce college-level assignments that 86% of instructors cannot detect as machine-generated.13 In response, institutions are exploring:

  • Oral Disputations: A return to spoken examinations to verify depth of thought and authenticity.29
  • Process-Oriented Grading: Evaluating the “how” of a project – the prompts used, the interrogation of the AI, and the iterations – rather than just the final “output”.4
  • Algorithmic Literacy: Shifting from “digital literacy” to a deeper understanding of how AI models work, ensuring students can “interrogate and apply AI responsibly”.13

The Economics of the Hybrid Campus

The economic model of traditional higher education is under pressure from “commoditized online courses” and “alternative credential providers”.14 Universities must now balance their traditional mission of teaching and research with the need to distinguish themselves in a “borderless market”.14 This has led to the rise of “flipped classrooms” and a greater focus on “high-quality contact time,” where in-person interactions are reserved for experimentation and discussion rather than information delivery.14

Feature | Traditional Schooling | AI-Integrated Schooling
Primary Goal | Knowledge Acquisition | Cultivation of Agency
Teacher Role | Content Expert | Pedagogical Leader/Coach
Assessment | Product-Based (Essay/Exam) | Process-Based (Viva Voce/Practice)
Learning Mode | Abstract/Theoretical | Experiential/Operationalized

The Emerging AI Learning Divide

The Discipline Gap and Socioeconomic Stratification

The “AI learning divide” is not merely about technological access; it is about the “discipline gap.” While the “promise of AI as an equalizer” is real, it risks deepening existing inequalities.13 Students from well-resourced backgrounds, who have access to the latest devices and a culture that supports self-regulation, use AI to “compound their creativity at unprecedented machine speed”.7 In contrast, students who lack “algorithmic literacy” or the “discipline to use it” may find themselves falling behind, using AI as a crutch rather than a tool for growth.13

This divide is exacerbated by the “upskilling gap” in the workforce. Senior executives are far more likely to feel they have the resources for learning (72%) than non-managers (51%).27 If organizations do not “redesign work” to be more inclusive and provide “clear everyday use cases,” the benefits of AI will remain concentrated among a small elite of daily users.27

The Neuroscience of Curiosity-Driven Learning

Curiosity is a powerful “internal driver” for continued exploration. Neurological research from UC Davis shows that when curiosity is sparked, there is increased activity in the hippocampus (memory creation) and the brain’s reward and pleasure circuits (dopamine).32 This “vortex” effect allows the brain to suck in not only the information it is motivated to learn but also unrelated information, making the entire learning experience more effective.32

However, the “discipline” required in an AI-rich environment is the ability to maintain this curiosity in the face of “easy answers.” If a student uses AI to “quell curiosity” before it can even develop – by asking for the answer instead of exploring the question – the learning process is effectively short-circuited.32 Therefore, the greatest educational advantage is not the ability to find the information, but the curiosity to look deeper and the discipline to engage in the “pain and drudgery” that higher-order thinking still requires.34

Human Intelligence vs. Artificial Intelligence

Judgment Under Ambiguity

The “jagged frontier” of AI capability means that while machines excel at tasks with clear parameters, they struggle with “judgment in ambiguous situations”.6 Human skills like “interpreting meaning in ways AI can’t” and “weighing trade-offs in the absence of clear data” are becoming more valuable as routine execution is automated.6 For example, the biotech company Healx uses AI to predict drug efficacy, but “in-house experts” then make the final judgment call on which treatments advance to clinical trials.6

Skill Category | AI Capability | Human Advantage
Processing | Massive scale / pattern recognition | Contextual, nuanced interpretation
Ethics | Rule-based / programmatic | Value-based / systemic
Creativity | Recombination / remixing | True invention / originality
Empathy | Simulated responses | Genuine connection / trust
Leadership | Task allocation | Psychological safety / clarity

The Moral Logic of Critical Thinking

In an AI world, thinking is not just analysis; it is a “responsibility”.36 The case of Amazon’s biased resume-ranking tool demonstrates that AI can perform “accurate statistical pattern recognition” while lacking the capacity for “normative judgment”.36 The AI optimized for historical outcomes, unaware that those outcomes were ethically problematic. A human looking at the same data recognizes that “this pattern exists, but it is wrong”.36 This ability to “challenge assumptions” and “ask critical questions” about the long-term cultural consequences of an action is a uniquely human domain that cannot be automated.6

Emotional Intelligence and Relationship Building

“Emotional intelligence” (EQ) is the “ultimate differentiator” in an era of machine output. While AI can create text that sounds convincingly human, it lacks the ability to “experience empathy” or “build trust”.6 Roles that involve navigating interpersonal conflict, understanding complex motivations, and establishing authentic storytelling will see a surge in market value.6 Leaders with high EQ show retention rates nearly 30 points higher than those without it, highlighting the “human connectivity” that remains a strictly human domain.7

Implications for Education Policy

From Individual Responsibilisation to Shared Responsibility

Current policy frameworks often place the entire burden of “responsible AI use” on individual educators and students.8 This leads to “ethical paralysis,” where teachers are expected to act as “heroes in a broken system,” navigating infrastructure they neither own nor control.8 Policy must shift toward a model of “shared, systemic responsibility” that includes:

  • Infrastructural Transparency: Exposing the “black/grey boxes” of algorithmic processes to ensure accountability.8
  • Public Governance: Resolving the tension between private governance and public educational values.8
  • Institutional Accountability: Creating frameworks that distribute responsibility and reduce the individual workload of educators.8

Redefining Literacy and Assessment

Policies must prioritize the transition from “digital literacy” to “algorithmic and AI literacy”.13 This involves embedding ethical frameworks and “socio-technical reflexivity” into the curriculum.8 Furthermore, assessment policies must shift away from “standardized checklists” and toward the evaluation of “human-centric capabilities” like creativity and critical thinking.29 This requires a “joined-up strategy” and roadmap for AI implementation that evaluates the “economics of AI” to ensure value for money and equity of access.29

Global Equity and the Universal Design

To prevent the “deepening of inequality,” education policy must focus on “universal design” and “targeted financing” for those left behind.19 This includes:

  • Inclusive Tools: Developing AI that is modality- and language-agnostic to adapt to the learner’s needs.19
  • Open Educational Resources: Relying on open-source tools and shared expertise to sustain a transition to inclusion.19
  • Participatory Tools: Engaging in meaningful consultation with communities and parents to ensure that AI implementation is not a “top-down” imposition.19

Conclusion

The thesis that “discipline and curiosity become the primary determinants of human learning” in the age of AI is not merely a philosophical claim; it is a structural reality supported by cognitive psychology and workforce data. As the “clerical stranglehold” on knowledge is finally broken, the internal qualities of the learner – their ability to resist the “dopamine hit” of instant answers, their “discipline” to audit machine output, and their “curiosity” to ask the “why” behind the “what” – have become the new currency of capability.

The greatest educational advantage in the AI era is no longer access to knowledge, but the agency to use it. While AI offers the potential for a “renaissance” of personalized learning, it also presents the risk of a “capability erosion” for those who lack the self-regulatory mechanisms to use it as a scaffold. Schools and universities must therefore move beyond their historical role as “gatekeepers” and become “experience hubs” that operationalize human skills – judgment, empathy, and ethical reasoning – that machines cannot replicate. The future of human intelligence lies not in competing with AI, but in the “human-machine partnership,” where machine speed is guided by human wisdom.

Research prepared for: Di Tran University, Research & Education Series.

Works cited

  1. How AI quietly undermines the joy and effort of learning: a call for …, accessed March 15, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12333830/
  2. Impact of Artificial Intelligence-assisted Learning Intensity on College Students’ Self-directed Learning Ability – ResearchGate, accessed March 15, 2026, https://www.researchgate.net/publication/395587836_Impact_of_Artificial_Intelligence-assisted_Learning_Intensity_on_College_Students’_Self-directed_Learning_Ability
  3. The Productivity Paradox of AI Coding Assistants – Cerbos, accessed March 15, 2026, https://www.cerbos.dev/blog/productivity-paradox-of-ai-coding-assistants
  4. In the age of AI, human skills are the new advantage | World …, accessed March 15, 2026, https://www.weforum.org/stories/2026/01/ai-and-human-skills/
  5. Harnessing Self-Control and AI: Understanding ChatGPT’s Impact …, accessed March 15, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12466394/
  6. The Most Important Human Skills AI Can’t Replace – HBS Online, accessed March 15, 2026, https://online.hbs.edu/blog/post/human-skills-ai-cant-replace
  7. 7 Human Skills AI Just Made Irreplaceable | Beam AI, accessed March 15, 2026, https://beam.ai/agentic-insights/7-human-skills-ai-cant-replace-in-2026
  8. Reframing AI ethics in education: From individual responsibilisation …, accessed March 15, 2026, https://think.taylorandfrancis.com/special_issues/reframing-ai-ethics-in-education-from-individual-responsibilisation-to-shared-responsibility/
  9. Gatekeepers of Knowledge – National Academic Digital Library of Ethiopia, accessed March 15, 2026, http://ndl.ethernet.edu.et/bitstream/123456789/26151/1/46.pdf
  10. Full article: Universities as Anarchic Knowledge Institutions – Taylor & Francis, accessed March 15, 2026, https://www.tandfonline.com/doi/full/10.1080/02691728.2023.2283444
  11. Information Scarcity – Thoughts about Higher Education, accessed March 15, 2026, https://hethoughts.wordpress.com/2012/02/09/information-scarcity/
  12. Why understanding the historical purposes of modern schooling matters today | Brookings, accessed March 15, 2026, https://www.brookings.edu/articles/why-understanding-the-historical-purposes-of-modern-schooling-matters-today/
  13. The quiet transformation of higher education in the AI era – PMC – NIH, accessed March 15, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12438950/
  14. THE FUTURE OF HIGHER EDUCATION – Arthur D. Little, accessed March 15, 2026, https://www.adlittle.com/sites/default/files/reports/ADL_Future_of_higher_education_2024.pdf
  15. A Brief History of the Student Record – Ithaka S+R, accessed March 15, 2026, https://sr.ithaka.org/publications/a-brief-history-of-the-student-record/
  16. THE FUTURE OF HIGHER EDUCATION IN THE AGE OF AI: A New Paradigm for Leaders and Policymakers – ResearchGate, accessed March 15, 2026, https://www.researchgate.net/publication/398660873_THE_FUTURE_OF_HIGHER_EDUCATION_IN_THE_AGE_OF_AI_A_New_Paradigm_for_Leaders_and_Policymakers
  17. AI and the democratization of knowledge – PMC – NIH, accessed March 15, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10915151/
  18. Artificial Intelligence in the Classroom: Democratizing Knowledge and Transforming Education – IDEAS/RePEc, accessed March 15, 2026, https://ideas.repec.org/a/dbk/medicw/v4y2025ip469id469.html
  19. Artificial Intelligence Alone Will Not Democratise Education: On Educational Inequality, Techno-Solutionism and Inclusive Tools – MDPI, accessed March 15, 2026, https://www.mdpi.com/2071-1050/16/2/781
  20. The Application and Role of AI in Promoting Students’ self-directed Learning – Atlantis Press, accessed March 15, 2026, https://www.atlantis-press.com/article/126021322.pdf
  21. Critical Studies of Education & Technology: Why AI Will Not Democratize Education (notes on Wieczorek 2025), accessed March 15, 2026, https://nepc.colorado.edu/blog/why-ai
  22. Impact of Artificial Intelligence-assisted Learning Intensity on …, accessed March 15, 2026, https://www.shs-conferences.org/articles/shsconf/pdf/2025/13/shsconf_icepcc2025_01002.pdf
  23. artificial intelligence and college students’ self-directed learning: a review of opportunities – Upubscience Publisher, accessed March 15, 2026, https://www.upubscience.com/upload/20260228161406.pdf
  24. The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers* – MIT Economics, accessed March 15, 2026, https://economics.mit.edu/sites/default/files/inline-files/draft_copilot_experiments.pdf
  25. Intuition to Evidence: Measuring AI’s True Impact on Developer Productivity – arXiv.org, accessed March 15, 2026, https://arxiv.org/html/2509.19708v1
  26. This CTO Says 93% of Developers Use AI, but Productivity Is Still 10% – ShiftMag, accessed March 15, 2026, https://shiftmag.dev/this-cto-says-93-of-developers-use-ai-but-productivity-is-still-10-8013/
  27. Daily GenAI users see higher pay, job security and productivity …, accessed March 15, 2026, https://www.pwc.com/gx/en/news-room/press-releases/2025/pwc-2025-global-workforce-survey.html
  28. New Research Says Daily AI Users Earn More and Advance Faster | SUCCESS, accessed March 15, 2026, https://www.success.com/daily-ai-users-career-research
  29. AI and the Future of Universities – HEPI, accessed March 15, 2026, https://www.hepi.ac.uk/reports/right-here-right-now-new-report-on-how-ai-is-transforming-higher-education/
  30. Artificial Intelligence and the Future of Higher Education, Part 1 – AGB, accessed March 15, 2026, https://agb.org/trusteeship-article/artificial-intelligence-and-the-future-of-higher-education-part-1/
  31. IMPACT OF ARTIFICIAL INTELLIGENCE ON COGNITIVE LEARNING PROCESSES IN UNIVERSITY STUDENTS | TPM, accessed March 15, 2026, https://tpmap.org/submission/index.php/tpm/article/view/3116
  32. Why Curiosity Enhances Learning | Edutopia, accessed March 15, 2026, https://www.edutopia.org/blog/why-curiosity-enhances-learning-marianne-stenger?_=undefined
  33. How Classroom Curiosity Affects College Students’ Creativity? – MDPI, accessed March 15, 2026, https://www.mdpi.com/2227-7102/15/9/1101
  34. A Philosopher’s Reflections on Teaching in a World with AI – Daily Nous, accessed March 15, 2026, https://dailynous.com/2025/03/07/a-philosophers-reflections-on-teaching-in-a-world-with-ai/
  35. 7 Human Skills AI Can Never Replace | Workday US, accessed March 15, 2026, https://www.workday.com/en-us/perspectives/hr/2026/01/human-skills-ai-cant-replace.html
  36. Critical Thinking in an AI World: The One Human Skill That Will Not Be Automated, accessed March 15, 2026, https://www.sylvainperrier.com/critical-thinking-in-an-ai-world-the-one-human-skill-that-will-not-be-automated/
  37. The role and challenges of education for responsible AI – UCL Press Journals, accessed March 15, 2026, https://journals.uclpress.co.uk/lre/article/id/129/
Copyright 2026 Di Tran University. Designed and built by Di Tran Enterprise Louisville Institute of Technology.