A Deep-Dive Analysis of Advantages, Risks & the Johari Window of AI Awareness
Evidence synthesized from 10 peer-reviewed sources (2023–2026). The research is unambiguous on one foundational point: GenAI's impact is not inherent to the technology — it is a function of how it is used.
Originally a tool for interpersonal awareness (Luft & Ingham, 1955), the Johari Window maps the awareness gaps between educators and students that determine whether GenAI helps or harms learning. Click any quadrant to explore.
Where good integration happens
What educators know and must share
The educator's challenge
The research frontier
Both educators AND students are aware of how AI is being used
When both educators and students understand how GenAI is being used and for what purpose, the technology functions as a genuine pedagogical tool. The learning gains documented in meta-analyses accrue specifically in this quadrant.
Exemplar: The mastery approach documented by Pallant et al. (2025), which required students to compare AI-generated definitions with their own evolved understanding after 12 weeks, is Arena-quadrant design in practice. Both parties knew exactly what was happening and why.
How to design for the Arena:
Students know how they're using AI — educators cannot see it
Students know exactly how they're using AI — but educators often cannot detect it in the submitted output. This is not primarily a moral failure; it is a design problem created by assessment structures that reward final products over learning processes.
Key evidence: Benedek & Sziklai's (2025) natural experiment at Corvinus University documented that by 2023/24, sampled open-book exam submissions showed a median AI-content score of 100%, even in the group that had previously been restricted from using AI. Evaluators correctly identified AI-generated student texts only 19–23% of the time (Sanz-Tejeda et al., 2026).
How to close the Blind Spot:
Educators hold institutional knowledge students urgently need
The Façade quadrant represents knowledge held by educators that has not yet reached students. This asymmetry directly causes the confusion and poor choices documented in the global research literature.
What belongs in the Façade — and needs to move to the Arena:
Global survey finding: Ravšelj et al. (2025), in a survey of 23,218 students across 109 countries, found that students were significantly more confused about AI use boundaries when institutions offered no guidance — and that students themselves were demanding clarity, not resisting it.
Neither educators nor students understand the long-term effects yet
Neither educators nor students yet understand the long-term cumulative effects of GenAI use on cognitive development, professional readiness, and disciplinary thinking. Institutional AI policies are being made right now — without this evidence.
The critical gap: Longitudinal studies tracking GenAI's effects across entire programs are virtually nonexistent. The current evidence base is weighted toward short-term, single-course interventions with self-reported outcomes. Hon (2026) and Chen & Cheung (2025) both identify this as the most significant structural gap in the field.
What this means for leaders: Begin tracking how AI-integrated cohorts perform on critical reasoning and professional competency measures over time — and publish the findings. The field cannot make permanent decisions on a transient evidence base.
Wang & Zhang (2026) studied 912 students and found that the relationship between AI delegation and learning depth follows a U-shaped curve. Where does most current AI use fall? Zone 2, the worst zone for learning. Click a zone to learn more.
Learning depth (relative) — Wang & Zhang (2026), n = 912
Zone 1 (no AI offloading, all manual): moderate learning, capacity-constrained.
Zone 2 (scattered, half-hearted use): the worst zone for learning depth.
Zone 3 (committed, strategic delegation): transformative learning.
The learner does everything manually. Learning happens, but slowly and with capacity constraints. Every minute spent on execution is a minute that cannot be freed for higher-order reflection. AI is not part of the process.
Baseline learning. Not harmful — but misses the transformative potential of strategic AI partnership.
The learner uses AI for small assists — fixing a sentence, checking a fact, tidying a paragraph. This is where most current AI use in education sits — and it is the worst zone for learning depth.
The learner still carries almost the full cognitive load, but now adds the friction of managing AI interactions: deciding what to ask, evaluating outputs, switching context. More effort. No meaningful benefit. This is what the studies documenting cognitive decline measured.
β-quadratic = 0.102, p < 0.001 — Wang & Zhang (2026), n = 912 students across three continents
The learner delegates entire categories of substantive work to AI — all source summarization, a full first-pass literature review, complete data organization. Cognitive savings are large enough to genuinely free working memory.
That freed capacity gets invested in the work AI cannot do: critiquing frameworks, questioning assumptions, constructing original arguments, making judgment calls. This is where the paradox activates and transformative learning lives.
Treating AI as an intellectual partner to evaluate and push back against — rather than a tool — simultaneously activates both critical vigilance (β = 0.335) AND strategic delegation (β = 0.351). Both independently predict transformative learning.
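The U-shape in the zone chart corresponds to the positive quadratic term Wang & Zhang report (β-quadratic = 0.102, p < 0.001). As a minimal sketch of what that implies (the study's exact specification, covariates, and linear coefficient are not reproduced here, so the form below is illustrative only):

$$
\text{LearningDepth} = \beta_0 + \beta_1\,\text{Delegation} + \beta_2\,\text{Delegation}^2 + \varepsilon,
\qquad \beta_2 = 0.102,\ p < 0.001
$$

With β₂ > 0 the curve is convex and bottoms out at Delegation* = −β₁ / (2β₂), i.e. at mid-range, scattered delegation (Zone 2); learning depth rises toward both ends of the scale, which is why all-manual work (Zone 1) and committed strategic delegation (Zone 3) each outperform the middle.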
Seven dimensions of GenAI's impact, each grounded in empirical research. Toggle between advantages and risks for each dimension.
10 peer-reviewed sources (2023–2026) selected for methodological rigor, citation impact, and thematic breadth.
Inspire Higher Ed works with colleges, universities, and education companies to translate evidence into actionable AI strategy.