1 Matching Annotations
  1. Last 7 days
    1. An agent cannot be held accountable. I think about this principle most. The instinct to put a human in the loop is understandable, but taken literally, it can mean a person approving every step before anything moves forward. The human becomes a bottleneck, rubber-stamping work rather than directing it, and you lose much of what makes agents valuable in the first place.

      Most people assume that adding human approval steps to AI systems is necessary to ensure accountability, but the author argues this turns the human into a bottleneck and undermines the value of agents. The view challenges mainstream thinking on AI safety and accountability, proposing an unconventional model for assigning responsibility.