3 Matching Annotations
  1. Last 7 days
    1. TLDR: When working with LLMs, the risks for the L&D workflow and its impact on substantive learning are real:
       - Hallucination: LLMs invent plausible-sounding facts that aren't true
       - Drift: LLM outputs wander from your brief without clear constraints
       - Generic-ness: LLMs surface that which is most common, leading to homogenisation and standardisation of "mediocre"
       - Mixed pedagogical quality: LLMs do not produce outputs which are guaranteed to follow evidence-based practice
       - Mis-calibrated trust: LLMs invite us to read guesswork as dependable, factual knowledge

       These aren't edge cases or occasional glitches; they're inherent to how all LLMs function. Prediction machines can't verify truth. Pattern-matching can't guarantee validity. Statistical likelihood doesn't equal quality.

      Real, inherent issues with using AI for learning.

    2. general-assistance Large Language Models (LLMs) -- tools like ChatGPT, Copilot, Gemini and Claude (Taylor & Vinauskaitė, 2025).

      General-assistance Large Language Models work on "patterns and predictions - what is most statistically likely to come next, not what is optimal". The lack of true understanding is a real issue!
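
To make that note concrete, here is a minimal sketch in Python: a toy bigram model over a made-up corpus (nothing like a real LLM's architecture, and the corpus is purely hypothetical) showing why "most statistically likely" is not the same as true or optimal. Greedy decoding simply repeats the most frequent pattern in its training data, even when that pattern is a misconception.

```python
from collections import Counter, defaultdict

# Toy training corpus: a common misconception appears more often
# than the correct fact.
corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of australia is sydney",    # frequent misconception
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # correct, but rarer
]

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    """Greedy decoding: return the statistically most frequent continuation."""
    return follows[word].most_common(1)[0][0]

prompt = "the capital of australia is".split()
print(" ".join(prompt), most_likely_next(prompt[-1]))
# -> "the capital of australia is sydney": fluent, statistically likely, wrong.
```

The model never "checks" anything; it only reproduces frequency. Scaled up enormously, the same dynamic underlies the hallucination and generic-ness risks highlighted above.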