4 Matching Annotations
  1. Jun 2023
    1. https://web.archive.org/web/20230613121025/https://www.workfutures.io/p/note-what-do-we-do-when-we-cant-predict

      Stowe says the 'unpredictability' that e.g. investors perceive comes down to there being no way to assess risk within the complexity created by the global network. Points to an older piece on uncertainty, risk and ambiguity to explore: https://www.sunsama.com/blog/uncertainty-risk-and-ambiguity

      I would say that in complexity you don't try to predict the future, as prediction rests on the linear causal chains of the knowable and known realms. Instead you probe the future, running multiple small probes (some contradictory) and feeding those that yield results.

    1. https://web.archive.org/web/20230609140440/https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

      Via Timnit Gebru https://dair-community.social/@timnitGebru/110498978394074048

    2. In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist, and the other a tech industry veteran, were examining emerging tech developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and general quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.

      The Proximate Future is a future that is 'nearly here' but never quite arrives. The ref posits this is a way to distract from the issues around a technology in the present, and thus lets technologists dodge responsibility and accountability for the here and now, as everyone debates the issues of a tech in the near future. It allows the technologists to set the narrative around the tech they develop. Ref: [[Divining a Digital Future by Paul Dourish Genevieve Bell]] 2010

      Compare the suspicious calls for reflection and pause wrt AI by OpenAI's people and other key players. It's a form of [[Ethics futurising dark pattern 20190529071000]]

      It may not always be a fully intentional bait and switch though: tech predictions, including the Gartner hype cycle, put future key events a steady 10 years into the future. I've noticed the same when it comes to open data readiness, and before that with knowledge management (present vs desired state): [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]. It simply seems that the human capacity to project into the future has a horizon of about 10 years.

      Contrast with the adjacent possible, which is how you make your path through the [[Evolutionair vlak van mogelijkheden 20200826185412]]. The Proximate Future skips over actual adjacent possibles to hypothetical ones a bit further out.

    3. Looking to the “proximate future,” even one as dark and worrying as AI’s imagined existential threat, has some strategic value to those with interests and investments in the AI business: It creates urgency, but is ultimately unfalsifiable.

      The proximate future wrt AI creates a fear (always a useful dark pattern when forcing change or selling something) that remains forever unfalsifiable. It works the other way around too, as a stalling tactic (tech will save us), with the same effect.