2 Matching Annotations
  1. Last 7 days
    1. TLDR: When working with LLMs, the risks for the L&D workflow and its impact on substantive learning are real:

      - Hallucination: LLMs invent plausible-sounding facts that aren’t true
      - Drift: LLM outputs wander from your brief without clear constraints
      - Generic-ness: LLMs surface what is most common, homogenising and standardising around “mediocre”
      - Mixed pedagogical quality: LLM outputs are not guaranteed to follow evidence-based practice
      - Mis-calibrated trust: LLMs invite us to read guesswork as dependable, factual knowledge

      These aren’t edge cases or occasional glitches; they’re inherent to how all LLMs function. Prediction machines can’t verify truth. Pattern-matching can’t guarantee validity. Statistical likelihood doesn’t equal quality.

      A real, inherent issue with using AI for learning.

    2. AI’s instructional design “expertise” is essentially a statistical blend of everything ever written about learning—expert and amateur, evidence-based and anecdotal, current and outdated. Without a structured approach, you’re gambling on which patterns the model draws from, with no guarantee of pedagogical validity or factual accuracy.

      The issue with applying general-purpose LLMs to instructional design.