
The Interplay of Learning, Analytics, and Artificial Intelligence in Education: A Vision for Hybrid Intelligence [RG Discussion Notes]

  • Writer: Wen Xin Ng
  • Feb 25
  • 3 min read

Updated: Feb 26

Mutlu Cukurova

University College London, United Kingdom


  • The shift in AI in Education should move us away from an emphasis on automation and prediction, and toward a more precise articulation of learning processes.

  • Rather than positioning AI primarily as a system that replaces human judgement or prescribes the next best step, we should see it as a means of making learning more visible and describable in nuanced ways. Ultimately, this trajectory points toward human-centred hybrid intelligence — where AI does not displace human agency, but works in tandem with it to extend, refine and amplify human thinking.

  • Tension: How do we prevent over-measurement, performative schooling, bias amplification and cognitive offloading, while still harnessing AI’s capacity for visibility and scale?


Figure 5. The AIED-HCD conceptual framework for human-AI interaction in education for human competence development – Human cognition extended with AI in tightly coupled hybrid intelligence systems.
Figure 6. The impact of three AI conceptualisations on Human Competence Development in the long term – The AIED-HCD conceptual framework

Conceptualisations of AI


A. Externalise (Automation)
  • AI replaces humans in pedagogical tasks (e.g. typical ITS, GenAI tutors → automated feedback, pacing of learning)

  • Strong evidence of effectiveness in well-structured domains (e.g. mathematics, algebra, language learning).

Caveats:

  • Externalisation risks reducing learning to information processing.

  • Doesn’t fully capture the affective, social, and contextual dimensions of learning.

  • Raises concerns about cognitive atrophy if humans over-delegate.

B. Internalise (AI as Models for Thinking)
  • AI as an object to think with about learning, not just a prediction engine.

  • Value of AI in making learning processes visible / describing learning processes more precisely

    • Multimodal learning analytics - e.g. speech time, gaze, collaboration patterns, etc.

    • Goal: “Clicks to constructs” — make sense of behavioural traces.

  • Not about accurate prediction of future actions, but rather awareness of present processes

    • Focus on visibility, reflection, refining mental models.

  • Supporting awareness, accountability and regulation (self, co-, socially shared).
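As a toy illustration of the "clicks to constructs" idea, the sketch below aggregates low-level behavioural traces (who spoke, for how long) into a single higher-level indicator of a collaboration construct: how evenly talk time is shared in a group. The function, the entropy-based balance measure, and the sample log are all illustrative assumptions, not part of the framework presented in the talk.

```python
import math
from collections import Counter

def participation_balance(turns):
    """Map low-level traces (speaker, duration) to a construct-level
    indicator: how evenly talk time is shared (near 0 = one person
    dominates, 1 = perfectly balanced). Illustrative only."""
    totals = Counter()
    for speaker, seconds in turns:
        totals[speaker] += seconds
    total_time = sum(totals.values())
    shares = [t / total_time for t in totals.values()]
    # Shannon entropy of talk-time shares, normalised by its maximum.
    entropy = -sum(p * math.log(p) for p in shares if p > 0)
    max_entropy = math.log(len(totals))
    return entropy / max_entropy if max_entropy > 0 else 1.0

# Raw "clicks": (speaker, duration-in-seconds) events from one discussion.
log = [("ana", 30), ("ben", 28), ("ana", 25), ("cho", 27)]
print(round(participation_balance(log), 2))  # → 0.95
```

The point of such an indicator is not to predict future behaviour but to make a present process visible so learners and teachers can interpret (and contest) it.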


Figure 4. An example of the transitioning from digital traces to constructs of collaboration.
  • AI may facilitate “meta-pattern recognition” across domains.

  • Agency increases when learners interpret their own data.

    • Students can agree or disagree with AI.

    • Cross-subject habit transfer becomes possible.

Caveats:

  • Effectiveness ≠ Real-World Impact

    • Systems may work in controlled research settings, but mainstream systems (like SLS) serve a wide variety of learners → highly motivated learners benefit most from analytics-heavy systems; less engaged learners may not.

    • Tools are not closed engineering systems; they are part of socio-technical ecosystems → adoption depends on governance, technical infrastructure, pedagogical culture, teacher confidence, teacher workload, ethical safeguards, etc.


Figure 7. Artificial Intelligence’s three main implication areas for education.
  • The “Clicks ≠ Traits” Problem

    • Socio-cognitive traits (grit, self-control, motivation) are not directly observable.

    • Behavioural traces (time-on-task, retries, submission timing) ≠ stable traits.

    • Same behaviour can mean anxiety, device access issues, boredom, etc.

  • Possible policy risks:

    • Performative education

    • Measuring everything (excessive)

    • DSA / admissions using behavioural traces

    • Conformity pressure

    • Flattening learner diversity

When we quantify “non-subject markers,” we risk normalising a narrow model of the “ideal learner.”
C. Extend (Hybrid Intelligence)
  • Tightly coupled human–AI systems.

  • High automation + high human agency.

  • AI amplifies judgement; humans steer meaning-making.


Caveats:

  • Hybrid intelligence requires AI literacy.

  • Humans must understand enough about AI to critically engage with it.

  • Risk: convergence toward AI output (anchoring effect).


Core Challenges of AI as a Tool to Directly Intervene in T&L


Tensions
  1. Threatened human agency - Is top-down rigid pedagogy (centralised AI with fixed pedagogy) better than distributed autonomy?

  2. Prediction limits in social contexts - How do we avoid the system learning local biases?

  3. Normativity (what counts as “good” learning?)

Figure 3. Main categories of issues related to using AI that directly intervenes in the practice of teaching and learning.

Hybrid Intelligence: Who Is It For?


Is hybrid intelligence:

  • For all students?

  • For teachers?

  • For older learners only?


  • Younger students may need enduring fundamentals first (e.g. literacy, numeracy, metacognition).

  • Hybrid intelligence requires self-awareness.

  • May be more suitable for teachers and older students.

  • Teacher AI literacy becomes central.



Amplification of gaps in education system

AI does not create problems from scratch. It amplifies existing gaps:

  • Motivation differences

  • Teacher competency differences

  • Performative pressures

  • Pedagogical weaknesses

Hence urgency:

  • Education systems must intentionally design AI integration.

  • Otherwise, existing gaps will widen further.

