3 Matching Annotations
  1. Feb 2024
    1. Broderick makes a more important point: AI search is about summarizing web results so you don't have to click links and read the pages yourself. If that's the future of the web, who the fuck is going to write those pages that the summarizer summarizes? What is the incentive, the business-model, the rational explanation for predicting a world in which millions of us go on writing web-pages, when the gatekeepers to the web have promised to rig the game so that no one will ever visit those pages, or read what we've written there, or even know it was us who wrote the underlying material the summarizer just summarized? If we stop writing the web, AIs will have to summarize each other, forming an inhuman centipede of botshit-ingestion. This is bad news, because there's pretty solid mathematical evidence that training a bot on botshit makes it absolutely useless. Or, as the authors of the paper – including the eminent cryptographer Ross Anderson – put it, "using model-generated content in training causes irreversible defects"

      Broderick: https://www.garbageday.email/p/ai-search-doomsday-cult, Anderson: https://arxiv.org/abs/2305.17493

      AI search hides the authors of the material it presents; summarising abstracts the authors away. It doesn't bring readers to those authors, it just presents a summary to the searcher as the end result. Take it or leave it. At the same time, if you search for something you know about, you see those summaries are always off, leaving you guessing how far off they are when you search for something you don't know about. Search should never be the endpoint, always a starting point. I think that is my main aversion against AI search tools. Despite those clamouring 'it will get better over time', I don't think it easily will, because neither the tool nor its makers necessarily have any interest in the quality of the output, and they certainly can't assess it. So what's next, humans fact-checking AI output? Why not prevent the bullshit at its source? Nice ref to Maggie Appleton's centipede metaphor in [[The Expanding Dark Forest and Generative AI]]
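
      The "irreversible defects" claim in the Anderson et al. paper can be made concrete with a toy experiment. A minimal sketch in Python, assuming a Gaussian stand-in for a generative model (the sample size, seed, and variable names are my choices for illustration, not the paper's setup): each generation is fitted only to the previous generation's output, and the fitted spread drifts toward collapse.

      ```python
      import random
      import statistics

      random.seed(0)

      mu, sigma = 0.0, 1.0   # generation 0: fitted to human-written data
      n = 10                 # small, finite training sample per generation

      for gen in range(1, 31):
          # "train" the next model only on the previous model's output
          data = [random.gauss(mu, sigma) for _ in range(n)]
          mu = statistics.fmean(data)      # refit the mean on own output
          sigma = statistics.pstdev(data)  # MLE std is biased low, so the
                                           # variance shrinks over generations
          if gen % 5 == 0:
              print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
      ```

      Run long enough, sigma tends toward zero: the model 'forgets' the spread of the original data, which is the centipede eating its own output in miniature.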

  2. Jan 2024
    1. political situation in Gabon, where the mere possibility of a video being a deepfake created confusion and facilitated political deception, even without deepfake technology. This scenario perfectly illustrates a DoFA: the overwhelming doubt and uncertainty, fuelled by too much unverified or manipulative information, effectively 'denied' the public's ability to discern truth and respond appropriately.

      There doesn't need to be an actual info-attack; merely suggesting one may have the same impact, as it raises suspicion of all information going around.

  3. Oct 2023
    1. It’s likely that some facsimile of Twitter will exist, far into the future. But a seismic shift in how the platform is perceived has occurred. If it isn’t good for breaking news, then what good is it? Perhaps it’s not a force for good at all.

      This is the cycle that made Twitter: real-time developments, plus the interaction/access dynamic between politicians and journalists. A very visible sign of that cycle breaking, with its utility in a developing crisis or event nullified, is I think a good canary. Because in practice the amount of non-human content and troll farming, on top of the actually low user numbers, meant that its heyday reputation was already no longer rightfully worn. I wonder how long the public perception of that cycle existing will lag behind the actuality of it no longer being there.