7 Matching Annotations
  1. Last 7 days
    1. To accept the existential stakes of that prospect while simultaneously treating the next frontier of superweapon proliferation as an ordinary issue of private property betrays a deep confusion about the problem that this moment presents.

      This sentence sharply identifies a contradiction in current policymaking: on the one hand it acknowledges the existential risks AI may pose, while on the other it treats them as an ordinary matter of private property. This inconsistency reflects a disconnect between our understanding of emerging technological threats and our responses to them, and it suggests the need for an entirely new governance framework.

    1. Legendary AI researchers like Geoffrey Hinton and Yoshua Bengio have similar concerns. Industry leaders like Elon Musk and Sam Altman have also warned about existential dangers from AI.

      Strikingly, it is not only critics: legendary AI researchers such as Geoffrey Hinton and Yoshua Bengio, along with industry leaders such as Elon Musk and Sam Altman, have publicly warned about the existential risks AI may pose. This shows that concern over AI risk is not a fringe view but a core voice from within the field itself.

  2. Oct 2024
  3. Jan 2024
    1. Rational optimism regarding our future, then, is only possible to the extent we can find prior evolutionary steps which are plausibly more improbable than they look. Conversely, without such findings we must consider the possibility that we have yet to pass through a substantial part of the Great Filter. If so, then our prospects are bleak, but knowing this fact may at least help us improve our chances. For example, if our prospects are likely bleak we should search out and take especially seriously any plausible scenarios, such as nuclear war or ecological collapse, which might lead to our future inability to explode across the universe. A long list of such scenarios for concern can be found in [Leslie 96]. Our main data point, the Great Silence, would be telling us that at least one of these scenarios is much more probable than it otherwise looks. With such a warning in hand, we might, for example, take extra care to protect our ecosystems, perhaps even at substantial expense to our economic growth rate. We might be even especially cautious regarding the possibility of world-destroying physics experiments. And we might place a much higher priority on projects like Biosphere 2, which may allow some part of humanity to survive a great disaster.

      Especially note:

      With such a warning in hand, we might, for example, take extra care to protect our ecosystems, perhaps even at substantial expense to our economic growth rate. We might be even especially cautious regarding the possibility of world-destroying physics experiments. And we might place a much higher priority on projects like Biosphere 2, which may allow some part of humanity to survive a great disaster.

  4. Nov 2023
    1. The earlier a serious Manhattan-like project to develop nanotechnology is initiated, the longer it will take to complete, because the earlier you start, the lower the foundation from which you begin. The actual project will then run for longer, and that will then mean more time for preparation: serious preparation only starts when the project starts, and the sooner the project starts, the longer it will take, so the longer the preparation time will be. And that suggests that we should push as hard as we can to get this project launched immediately, to maximize time for preparation.

      for sure?

    1. the Center for the Study of Existential Risk is dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse, and the interesting thing is that modernity is not on their list either; in fact, it's not on the list of any of the agencies that are now dedicated to this work
      • for: Center for the Study of Existential Risk - excludes modernity

      • Comment

        • Center for the Study of Existential Risk still assumes a modern framework to solve the polycrisis
  5. May 2020