2 Matching Annotations
  1. Last 7 days
    1. We argue that this gap stems from a fundamental failure of introspective consistency: AR models agree with what they generate, whereas DLMs often do not.

      This is a surprisingly deep insight that pinpoints the root cause of the performance gap between diffusion language models (DLMs) and autoregressive (AR) models. The authors introduce the notion of 'introspective consistency': AR models are inherently consistent with the content they themselves generate, whereas DLMs lack this self-verification ability. This offers a fresh perspective on the limitations of DLMs. (A minimal sketch of what such a self-agreement check could look like follows this list.)

  2. Jun 2020
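
As a hedged illustration of the 'introspective consistency' idea highlighted above, the sketch below checks, for a small autoregressive model, whether each greedily generated token is also the token the model ranks highest when it re-reads its own output. The choice of GPT-2 and the prompt are assumptions made purely for illustration, not the annotated paper's method; the analogous check for a DLM (remasking generated tokens and re-predicting them) is model-specific and only indicated in a comment.

```python
# Minimal sketch (not from the annotated paper): measure whether an AR model
# "agrees with what it generates" under greedy decoding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # model choice is an assumption
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Diffusion language models"                  # illustrative prompt
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Greedy generation: the model commits to its own argmax at every step.
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
    # Re-read the full sequence and get next-token logits at every position.
    logits = model(out).logits

# For each generated position t, the emitted token should equal the argmax of
# the distribution predicted at position t-1, i.e. the model agrees with itself.
gen_positions = range(ids.shape[1], out.shape[1])
agree = [int(out[0, t].item() == logits[0, t - 1].argmax().item()) for t in gen_positions]
print(f"AR self-agreement under greedy decoding: {sum(agree)}/{len(agree)} tokens")

# For a diffusion LM, the analogous check would remask the generated tokens and
# re-predict them, then count how many are reproduced; that step depends on the
# specific DLM and is omitted here.
```

Under greedy decoding the agreement above holds essentially by construction, which is the point of the highlighted claim: an AR model's own re-reading reproduces its output, whereas a DLM's re-predictions of remasked positions need not.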