Thought Leadership • April 2026

GenAI in Teaching

A Deep-Dive Analysis of Advantages, Risks & the Johari Window of AI Awareness

Evidence synthesized from 10 peer-reviewed sources (2023–2026). The research is unambiguous on one foundational point: GenAI's impact is not inherent to the technology — it is a function of how it is used.

10 peer-reviewed sources • 23,218-student global survey • 68-study meta-analysis • 2023–2026

The Johari Window of GenAI in Teaching

Originally a framework for interpersonal awareness (Luft & Ingham, 1955), the Johari Window can be adapted to map the awareness gaps between educators and students that determine whether GenAI helps or harms learning.

The four quadrants (rows: what educators know; columns: what students know):

  • Quadrant 1 — Arena (educators know, students know): where good integration happens
  • Quadrant 2 — Blind Spot (students know, educators don't): the educator's challenge
  • Quadrant 3 — Façade (educators know, students don't): what educators know and must share
  • Quadrant 4 — Unknown (neither knows yet): the research frontier

Quadrant 1 — Arena (Open): Where Good Integration Happens

Both educators AND students are aware of how AI is being used

When both educators and students understand how GenAI is being used and for what purpose, the technology functions as a genuine pedagogical tool. The learning gains documented in meta-analyses accrue specifically in this quadrant.

Exemplar: The mastery approach documented by Pallant et al. (2025) — requiring students to compare AI-generated definitions with their own evolved understanding after 12 weeks — is Arena-quadrant design in practice. Both parties knew exactly what was happening and why.

How to design for the Arena:

  • Specify exactly what AI use is permitted and how outputs must be attributed or critiqued
  • Make the learning objective explicit — not just the assignment instructions
  • Treat AI use as a topic of shared inquiry, not a policy problem to police

Quadrant 2 — Blind Spot: The Educator's Challenge

Students know how they're using AI — educators cannot see it

Students know exactly how they're using AI — but educators often cannot detect it in the submitted output. This is not primarily a moral failure; it is a design problem created by assessment structures that reward final products over learning processes.

Key evidence: Benedek & Sziklai's (2025) natural experiment at Corvinus University documented that by 2023/24, sampled open-book exam submissions showed a median AI-content score of 100% — even in the group previously restricted from AI. Evaluators correctly identified AI-generated student texts only 19–23% of the time (Sanz-Tejeda et al., 2026).

How to close the Blind Spot:

  • Move toward process-oriented, portfolio-based, oral, and dialogic assessments
  • Require students to explain, defend, and extend AI-assisted work in real time
  • Treat over-reliance as an incentive structure problem — not a student character problem

Quadrant 3 — Façade (Hidden): What Educators Know and Must Share

Educators hold institutional knowledge students urgently need

The Façade quadrant represents knowledge held by educators that has not yet reached students. This asymmetry directly causes the confusion and poor choices documented in the global research literature.

What belongs in the Façade — and needs to move to the Arena:

  • Hallucination rates: up to 46% of ChatGPT-generated references in some studies do not exist
  • AI detection accuracy: 19–23% — meaning prohibition-based integrity strategies are largely ineffective
  • Cognitive offloading risks and what the research shows about passive AI use
  • The evidence on how assignment design determines whether AI helps or harms learning

Global survey finding: Ravšelj et al. (2025), in a survey of 23,218 students across 109 countries, found that students were significantly more confused about AI use boundaries when institutions offered no guidance — and that students themselves were demanding clarity, not resisting it.

Quadrant 4 — Unknown: The Research Frontier

Neither educators nor students understand the long-term effects yet

Neither educators nor students yet understand the long-term cumulative effects of GenAI use on cognitive development, professional readiness, and disciplinary thinking. Institutional AI policies are being made right now — without this evidence.

The critical gap: Longitudinal studies tracking GenAI's effects across entire programs are virtually nonexistent. The current evidence base is weighted toward short-term, single-course interventions with self-reported outcomes. Hon (2026) and Chen & Cheung (2025) both identify this as the most significant structural gap in the field.

What this means for leaders: Begin tracking how AI-integrated cohorts perform on critical reasoning and professional competency measures over time — and publish the findings. The field cannot make permanent decisions on a transient evidence base.

The Strategic Offloading Model

Wang & Zhang (2026) studied 912 students and found the relationship between AI delegation and learning depth follows a U-shaped curve. Where does most current AI use fall? Zone 2 — the worst zone for learning.

[Chart: learning depth (relative) across Zones 1–3 — Wang & Zhang (2026), n = 912. Most current AI use falls in Zone 2.]

Zone 1 — No AI offloading (all manual): moderate learning, capacity-constrained.

Zone 2 — Scattered, half-hearted use: worst zone for learning depth; currently the most common pattern.

Zone 3 — Committed, strategic delegation: transformative learning.

Zone 1 — No Offloading

The learner does everything manually. Learning happens, but slowly and with capacity constraints. Every minute spent on execution leaves no freed bandwidth for higher-order reflection. AI is not part of the process.

Baseline learning. Not harmful — but misses the transformative potential of strategic AI partnership.

Zone 2 — Scattered, Half-Hearted Use ⚠️

The learner uses AI for small assists — fixing a sentence, checking a fact, tidying a paragraph. This is where most current AI use in education sits — and it is the worst zone for learning depth.

The learner still carries almost the full cognitive load, but now adds the friction of managing AI interactions: deciding what to ask, evaluating outputs, switching context. More effort. No meaningful benefit. This is what the studies documenting cognitive decline measured.

β-quadratic = 0.102, p < 0.001 — Wang & Zhang (2026), n = 912 students across three continents

Zone 3 — Strategic, Committed Delegation ✓

The learner delegates entire categories of substantive work to AI — all source summarization, a full first-pass literature review, complete data organization. Cognitive savings are large enough to genuinely free working memory.

That freed capacity gets invested in the work AI cannot do: critiquing frameworks, questioning assumptions, constructing original arguments, making judgment calls. This is where the paradox activates and transformative learning lives.

Treating AI as an intellectual partner to evaluate and push back against — rather than a tool — simultaneously activates both critical vigilance (β = 0.335) AND strategic delegation (β = 0.351). Both independently predict transformative learning.
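To make the U-shape concrete, here is a minimal sketch (Python, using invented synthetic data — not Wang & Zhang's dataset or coefficients) showing how a positive quadratic term in a regression of learning depth on delegation level produces exactly this dip-then-rise pattern:

```python
import numpy as np

# Illustrative sketch only: synthetic data mimicking a U-shaped link between
# AI-delegation level (0 = none, 1 = full strategic delegation) and learning
# depth. All coefficients here are invented, NOT Wang & Zhang's estimates.
rng = np.random.default_rng(0)
delegation = rng.uniform(0, 1, 200)
# Built-in U-shape: depth dips in the mid-range ("Zone 2" scattered use)
depth = 0.6 - 1.2 * delegation + 1.4 * delegation**2 \
        + rng.normal(0, 0.05, 200)

# Fit depth = b0 + b1*x + b2*x^2; a positive b2 is the U-shape signature
b2, b1, b0 = np.polyfit(delegation, depth, 2)
print(f"quadratic term b2 = {b2:.3f}")  # positive -> U-shaped curve

# The curve bottoms out (worst zone for learning) at x = -b1 / (2 * b2)
print(f"learning-depth minimum at delegation = {-b1 / (2 * b2):.2f}")
```

A positive, statistically significant quadratic coefficient — as in the reported β-quadratic = 0.102, p < 0.001 — is what distinguishes a genuine U-shape from a simple linear trade-off.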

7 Recommendations for Educators & Leaders

  1. Design for the Arena. Structure AI-integrated assignments so both you and your students are aware of how AI is being used and why. Intentionality — not the tool — is the critical variable.
  2. Close the Blind Spot through assessment redesign. Move toward process-oriented, portfolio-based, oral, and dialogic assessments that cannot be authentically AI-generated.
  3. Drain the Façade. Share what you know about cognitive offloading, hallucination rates, detection failure, and passive AI use. What you hold in the Façade is exactly what students need.
  4. Build AI literacy as a core graduate competency. Teach students to evaluate AI outputs critically, recognize hallucinations, and cite AI use transparently — as foundational as information literacy.
  5. Invest in mixed feedback models for writing. Combine AI formative feedback with human conceptual critique. Neither alone is sufficient.
  6. Prioritize equity in every implementation decision. Audit AI tool access. Provide institutional subscriptions. Invest in digital literacy training for first-generation and underserved learners.
  7. Commission longitudinal research. Begin tracking AI-integrated cohorts over time and publish findings. The field cannot make permanent decisions on a transient evidence base.

Pros & Cons Explorer

Seven dimensions of GenAI's impact, each grounded in empirical research.

Research Library

10 peer-reviewed sources (2023–2026) selected for methodological rigor, citation impact, and thematic breadth.

Bring This Research to Your Institution

Inspire Higher Ed works with colleges, universities, and education companies to translate evidence into actionable AI strategy.